Wednesday, May 8, 2019

The women men are attracted to in well-nourished populations have low body mass indices & small waist sizes combined with relatively large hips; men are attracted to signs of being nubile & nulliparous

Evidence supporting nubility and reproductive value as the key to human female physical attractiveness. William D. Lassek, Steven J. C. Gaulin. Evolution and Human Behavior, May 8 2019.

Abstract: Selection should favor mating preferences that increase the chooser's reproductive success. Many previous studies have shown that the women men find most attractive in well-nourished populations have low body mass indices (BMIs) and small waist sizes combined with relatively large hips, resulting in low waist-hip ratios (WHRs). A frequently proposed explanation for these preferences is that such women may have enhanced health and fertility; but extensive evidence contradicts this health-and-fertility explanation. An alternative view is that men are attracted to signs of nubility and high reproductive value, i.e., by indicators of physical and sexual maturity in young women who have not been pregnant. Here we provide evidence in support of the view that a small waist size together with a low WHR and BMI is a strong and reliable sign of nubility. Using U.S. data from large national health surveys, we show that WHR, waist/thigh, waist/stature, and BMI are all lower in the age group (15-19) in which women reach physical and sexual maturity, after which all of these anthropometric measures increase. We also show that a smaller waist, in conjunction with relatively larger hips or thighs, is strongly associated with nulligravidity and with higher blood levels of docosahexaenoic acid (DHA), a fatty acid that is probably limiting for infant brain development. Thus, a woman with the small waist and relatively large hips that men find attractive is very likely to be nubile and nulliparous, with maximal bodily stores of key reproductive resources.

Because differential reproductive success drives adaptive evolution, selection should favor males and females whose mating preferences maximize their numbers of reproductively successful offspring. Thus both should be attracted to anthropometric traits reliably correlated with the ability of the opposite sex to contribute to this goal. In a landmark book, Symons (1979) argued that the female attributes most likely to enhance male reproductive success are indicators of nubility and its associated high reproductive value (see also Andrews et al., 2017; Fessler et al., 2005; Marlowe, 1998; Sugiyama, 2005; Symons, 1995), and the purpose of this paper is to test whether the available evidence is consistent with Symons’ view. Such a test is needed because, subsequent to Symons’ formulation, a competing hypothesis proposed that men are primarily attuned to indicators of current health and fertility and that these are the female attributes indicated by the low WHRs and BMIs linked with high attractiveness (Singh, 1993a; 1993b; Tovée et al., 1998). The existence of male preferences for low WHRs and BMIs has been supported by many other studies in industrialized populations, where women are generally well-nourished. But any explanation of them must address why preferred values are much lower than mean or modal values in typical young women. In a recent study (Lassek & Gaulin, 2016), the mean WHR of Playboy Playmates (0.68) was 2 standard deviations below the mean for typical college undergraduates (0.74), and the mean WHR (0.55) of imaginary females chosen for maximal attractiveness from comics, cartoons, animated films, graphic novels, or video games was 5 standard deviations below the undergraduate mean. Jessica Rabbit, the most popular imaginary female, has a WHR of 0.42. Preferred values of BMI are also in the negative tail of the actual female distribution: the mean BMI of Playmates (18.5) was 2 SD lower than the mean for college undergraduates (22.2).
A recent study of 323 female comic book characters from the Marvel universe found that the mean WHR was 0.60±0.07 and the modal BMI was 17; WHR was two SD lower in 34 characters (0.61) than in the actresses portraying them in films (0.72) (Burch & Johnsen, 2019).
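The “standard deviations below the mean” comparisons above are ordinary z-scores. A minimal sketch: the undergraduate WHR mean (0.74) is taken from the study, but the SD of roughly 0.03 is an inference from the reported 2-SD gap, not a figure given in the text.

```python
def z_score(value, mean, sd):
    """Number of standard deviations a value lies from a reference mean."""
    return (value - mean) / sd

# Undergraduate reference WHR mean is from the study; the ~0.03 SD is
# inferred from the reported 2-SD gap between 0.68 and 0.74.
UNDERGRAD_MEAN, INFERRED_SD = 0.74, 0.03

print(round(z_score(0.68, UNDERGRAD_MEAN, INFERRED_SD), 1))  # Playmates: -2.0
```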

1.1 Health and fertility as the basis for female attractiveness?

Singh (1993a, 1993b) suggested that men are attracted to low WHRs and BMIs because they are signs of enhanced female health and fertility, and this idea has been widely accepted (e.g., Grammer et al., 2003; Marlowe et al., 2005; Pawlowski & Dunbar, 2005; Singh & Singh, 2011; Sugiyama, 2005; Tovée et al., 1998; Weeden & Sabini, 2005). But this argument seems inconsistent with the extremity of the preferred values (above). As a result of stabilizing selection, phenotypes associated with optimal health and fertility should, and do, lie at the center—not the extreme—of the female distribution. Given such stabilizing selection on females, male preferences for traits associated with health and fertility should then target modal female values. Based on a review of a large number of relevant studies and on new analyses, it has recently been shown that WHRs and BMIs in the negative tails of their distributions—the values rated most attractive in well-nourished populations—usually indicate poorer rather than enhanced health (Lassek & Gaulin, 2018a) and lower fertility (Lassek & Gaulin, 2018b) (although lower BMIs in younger primigravidas may reduce risks of obstructed labor and hypertension). Given that the predictions of the health-and-fertility hypothesis are not well supported, the main goal of this article is to evaluate the prior hypothesis that may better explain why males in well-nourished populations prefer female phenotypes at the negative extreme of their distributions: an evolved preference for nubility (Symons, 1979, 1995) and its demographic correlate, maximal reproductive value (Andrews et al., 2017; Fessler et al., 2005).

1.2 Nubility as the basis for female attractiveness?
Despite a lack of empirical support, the health-and-fertility hypothesis has largely eclipsed Donald Symons’s earlier proposal that men are attracted to nubility—to indicators of recent physical and sexual maturity in young nulligravidas (never-pregnant women) (Fessler et al., 2005; Symons, 1979; 1995; Sugiyama, 2005). Symons defined the nubile phase as 3-5 years after menarche, when young women are “just beginning ovulatory menstrual cycles” but have not been pregnant (Symons, 1995, p. 88). This corresponds to ages 15-19 in well-nourished populations, but sexual maturity in some subsistence groups may be delayed (Ellis, 2004; Eveleth & Tanner, 1990; Symons, 1979). Symons suggested that the female characteristics men find attractive—such as a low WHR—are indicators of nubility. And he stressed that any preference for nubility inevitably contrasts with a preference for current fertility, because the teen years of peak nubility are a well-documented period of low fertility due to a decreased frequency of ovulation, with 40-90% of cycles anovulatory, while maximum fertility is not reached until the mid to late 20s (Apter, 1980; Ashley-Montague, 1939; Ellison et al., 1987; Larsen & Yan, 2000; Loucks, 2006; Metcalf & Mackenzie, 1980; Talwar, 1965; Weinstein et al., 1990; Wesselink et al., 2017). Thus, if the nubility hypothesis is correct, the fertility hypothesis must be incorrect. Nubility is closely linked to a woman’s maximum reproductive value (RV), her age-specific expectation of future offspring, given the underlying fertility and mortality curves of her population (Fisher, 1930). The peak of RV is defined by survival to sexual maturity with all reproductive resources intact. The age of peak RV depends in part on the average ages of menarche and marriage, but typically ranges from 14 to 18 in human populations (Fisher, 1930; Keyfitz & Caswell, 2005; Keyfitz & Flieger, 1971). This corresponds to Symons’ age of nubility.
Calculations of reproductive value in the !Kung (Daly & Wilson, 1988) and in South Africa (Bowles & Wilson, 2005) both found the peak age to be 15. Symons argued that the attractiveness of the nubile age group is supported by the finding that this is the age group when marriage and first pregnancies typically occur in subsistence cultures, despite reduced fecundability. For example, in the Yanomamo (polygynous hunter-horticulturalists of southern Venezuela), menarche is typically at age 12, marriage at 14, and the first birth at 16 (Symons, 1995). A similar relationship between nubility and first reproduction characterizes other subsistence populations (Table 1), where the mean age of first birth is typically under age 20 and averages 3.9±1.1 years after menarche. In populations with access to effective contraception, the onset of sexual activity may be a better indicator of nubility than the age of marriage or first birth, although multiple factors may influence these ages. In a 2005 survey of women in 46 countries, the average age of first intercourse ranged from 15 to 19 with a mean of 17.2 (Durex, 2009) and was 17.1 in a recent sample of American women (Finer, 2007).
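Fisher’s reproductive value, defined above as a woman’s age-specific expectation of future offspring given her population’s fertility and mortality curves, can be made concrete with a short sketch. Assuming a stationary population (growth rate r = 0), RV at age x reduces to the expected future offspring conditional on surviving to x; the survival and fertility schedules below are purely illustrative, not data from any population discussed here.

```python
def reproductive_value(survival, fertility, age):
    """Fisher's reproductive value v(x) in a stationary population (r = 0):
    expected future offspring, conditional on surviving to `age`.
    survival[x]  = l(x), probability of surviving from birth to age x
    fertility[x] = m(x), expected offspring produced at age x
    """
    return sum(survival[t] / survival[age] * fertility[t]
               for t in range(age, len(fertility)))

# Illustrative schedules: survival falls 2% per year; fertility is flat
# from age 15 to 40. Neither is fitted to any real population.
survival = [0.98 ** x for x in range(51)]
fertility = [0.4 if 15 <= x <= 40 else 0.0 for x in range(51)]

rv = [reproductive_value(survival, fertility, x) for x in range(51)]
peak_age = max(range(51), key=lambda x: rv[x])
print(peak_age)  # RV peaks at the first reproductive age, here 15
```

Before the first reproductive age, RV rises with survival to maturity; after it, each year of reproduction already spent lowers the expectation of future offspring, which is why the peak coincides with the onset of nubility.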

Prior studies suggest that attractive females exhibit the phenotypic correlates of nubility. In the Dobe !Kung, four photographed young women considered “beautiful girls” by the !Kung were all age 17 (Howell, 2000, p. 38). In samples from more developed countries, youthful bodies are also considered attractive. In a study of males viewing nude photos of female infants, children (mean age 7.7), pubescent females (mean 13.5), and young adult females (mean 23.1), both viewing duration and ratings of physical attractiveness were highest for pubescent females (Quinsey et al., 1996). Marlowe (1998) has suggested that an evolved attraction to nubility explains men’s preference for relatively large, firm, non-sagging female breasts, and this view is supported by a study in the Hewa (Coe & Steadman, 1995). Of particular relevance are two studies that directly explore the relationship of attractiveness to age. A recent study using body scans with raters from 10 countries found that BMI was inversely related both to rated attractiveness and to estimated age (Wang et al., 2015). In another recent study, age estimated from neck-down photographs of females in bathing suits had a strong negative relationship with attractiveness and a strong positive relationship with WHR, BMI, and especially waist/stature ratio (Andrews et al., 2017). Symons (1995) suggests several adaptive reasons why selection might favor men preferring nubile females over older females who have higher current fertility: 1) a male who pairs with a nubile female is likely to have the maximum opportunity to sire her offspring during her subsequent most fecund years; 2) a nubile woman is likely to have more living relatives to assist her than an older woman; and 3) she is likely to survive long enough for her children to be independent before her death.
4) A male choosing a nubile female avoids investing in children sired by other men, and avoids possible conflict with the mother (his mate) over allocation of her parental effort among his children and the children of her prior mates. By definition, a nubile woman is not investing time and energy in other men’s children because she is nulliparous. Moreover, in a wide array of competitive situations, those who stake an early claim are likely to have an advantage over those who wait until the contested resource is fully available (e.g., lining up the day before concert tickets go on sale; Roth & Xing, 1994). Thus, the men who were most strongly attracted to signs of nubility would minimize their chances of being shut out of reproductive opportunities. This dynamic would generate selection on men to seek commitment of female reproductive potential at younger ages. In such an environment, males without a preference for signs of nubility would be at a disadvantage in mating competition, and those who preferred women at the age of peak fertility (in the mid to late 20s) would likely find few available mates. In subsistence cultures, post-nubile women are very likely to be married and to have children; they are usually either pregnant or nursing and so ovulate infrequently due to ongoing reproductive commitments (Marlowe, 2005; Strassman, 1997; Symons, 1995).

1.3 External signs of nubility
Following Symons (1979; 1995), we consider a woman to be nubile when she has menstrual cycles, has attained maximal skeletal growth, and is sexually mature based on Tanner stages (see below), but has not been pregnant. Maximal skeletal growth and stature are usually attained two to three years after the onset of menstrual periods, the latter typically occurring at ages 12-13 in well-nourished populations (Eveleth & Tanner, 1990; Table 1). In a representative American sample, completed skeletal growth resulting in maximal stature was attained by age 15-16 (Hamill et al., 1973). The two widely accepted indicators of female sexual maturity in postmenarcheal women are the attainment of 1) adult breast size and configuration of the areola and nipple, and 2) an adult pattern of pubic hair (Tanner, 1962; Marshall & Tanner, 1969). In a sample of 192 English female adolescents, the average age for attaining adult (stage 5) pubic hair was 14.4±1.1 and for adult (stage 5) breasts was 15.3±1.7. More recent samples show similar ages for attainment of breast and pubic hair maturity (Beunen et al., 2006). In other studies, puberty was judged complete by age 16-17 in American, Asian, and Swiss samples (Huang et al., 2004; Largo & Prader, 1983; Lee et al., 1980), based on completed skeletal growth and the presence of adult secondary sexual characteristics. The timing of these developmental markers supports Symons’ (1979) suggestion that nubility occurs 3 to 5 years after menarche. We will assess the timing of these developmental indicators in a large U.S. sample.

Little attractiveness research has focused on these features of the developing female phenotype, but Singh (1993b) and Symons (1995) separately suggested that both a low BMI and a low WHR are also indicators of nubility. If so, this developmental pattern would explain the male preference for low BMI and low WHR in populations where both measures increase after the nubile period. Available evidence suggests three ways that low WHRs, BMIs, and small waists may indicate that young women in well-nourished populations are at peak nubility and reproductive value: 1) these measures are lower in the nubile age group (where nubility is defined based on completed stature growth and attainment of Tanner stage 5) than they are in older women, 2) they show that a woman is unlikely to have been pregnant, a requirement for nubility, and 3) they indicate that resources crucial for reproduction are maximal (untapped). Published evidence relevant to these points is reviewed immediately below, and we will offer new evidence that the anthropometric values associated with attractiveness and reproductive resources are most likely to occur in the 15-20 age group.

1.3.1 Low WHR and BMI as indicators of attainment of sexual maturity
WHR may be a particularly good indicator of nubility because evidence suggests that it reaches a minimum during the nubile period. During female puberty, typically occurring between the ages of 10 and 18, there is a marked increase in the amount of adipose tissue, e.g., from 12-15% to 25-26% of body weight (Boot et al., 1997; Lim et al., 2009; Taylor et al., 2010), a percentage of body fat far greater than that seen in most other mammals, including other primates (Pond, 1998; Pitts & Bullard, 1968). Under the influence of gonadal hormones, most of this added adipose is deposited in the gluteofemoral depot (hips, buttocks, and thighs), a highly derived human trait that may have evolved to store rare fatty acids critical for the development of the large human brain (Lassek & Gaulin, 2006; 2007; 2008). This hormonally driven emphasis on gluteofemoral vs. abdominal fat stores lowers WHR, which decreases during childhood and early adolescence, reaches a minimum at ages 15 to 18, and then tends to increase (Al-Sendi et al., 2003; Bacopoulou et al., 2015; Casey et al., 1994; de Ridder et al., 1990; 1992; Fredriks et al., 2005; Gillum, 1999; Haas et al., 2011; Kelishadi et al., 2007; Martinez et al., 1994; Moreno et al., 2007; Taylor et al., 2010; Westrate et al., 1989). This developmental pattern supports the idea that a low WHR is a relatively conspicuous marker of nubility (in addition to other signs of sexual maturity which may be less readily assessable, such as menstruation, breast and pubic-hair development, and attainment of maximal stature). In well-nourished populations, BMIs are also lower in nubile adolescents than in older women. In a longitudinal study of American women that began in the 1930s, the mean BMI increased from 16.7 kg/m2 in early adolescence to 18.9 in late adolescence, 22.1 at age 30, 24.1 at age 40, and 26.1 at age 50 (Casey et al., 1994).
Cross-sectional female samples show parallel age-related weight increases (highly correlated with BMI) (Abraham et al., 1979; Burke et al., 1996; Schutz et al., 2002; Stoudt et al., 1965). Controlling for social class and parity, age was a significant predictor of BMI in a large United Kingdom sample (Gulliford et al., 1992). In a study in which males estimated the age of female figures varying in BMI and WHR (Singh, 1995), they judged the figures with the lowest BMIs (15) to be the youngest, with an estimated age of 17-19. We will explore the relationship of WHR and BMI with age in a large American sample. However, in contrast to women in well-nourished groups, in subsistence populations women’s BMIs may peak at the age of nubility and subsequently decrease with age and parity (see Lassek & Gaulin, 2006). Notably, men in such populations often prefer the higher BMIs which in these cultures indicate nubility (Sherry & Marlowe, 2007; Sugiyama, 2005; Yu & Shepard, 1998). Thus, a consistent preference for the BMIs most strongly associated with nubility could explain an apparent cross-cultural inconsistency in body-shape preferences, which is difficult to explain using the health-and-fertility hypothesis.

1.3.2 Low WHRs and BMIs and smaller waist sizes indicate a lower likelihood of previous pregnancy
An essential part of Symons’ (1979, 1995) definition of nubility (and the high reproductive value it represents) is the lack of any previous pregnancy (i.e., nulligravidity); nubile women have attained physical and sexual maturity without yet expending any reproductive potential. Prior evidence suggests that a low WHR (or small waist size) is also a strong indicator of nulliparity (Bjorkelund et al., 1996; Gunderson et al., 2004; 2008; 2009; Lanska et al., 1985; Lassek & Gaulin, 2006; Lewis et al., 1994; Luoto et al., 2011; Mansour et al., 2009; Rodrigues & Da Costa, 2001; Smith et al., 1994; Tonkelaar et al., 1989; Wells et al., 2010). Similarly, a recent study (Butovskya et al., 2017) found a strong positive relationship between WHR and parity in seven traditional societies. Like WHR, BMI also increases with parity in well-nourished populations (Abrams et al., 2013; Bastian et al., 2005; Bobrow et al., 2013; Rodrigues & Da Costa, 2001; Koch et al., 2008; Nenko et al., 2009). Some studies have suggested that BMI may be more strongly related to parity than it is to age (Koch et al., 2008; Nenko et al., 2009), although this may be less true in older women (Trikudanathan et al., 2013). We will explore the relationships of WHR and BMI to age and parity in a large American sample. In two studies of men’s perceptions, higher WHRs were judged to strongly increase the likelihood of a previous pregnancy (Andrews et al., 2017; Furnham & Reeves, 2006). Thus, anthropometric data suggest that a low WHR and BMI may indicate nulliparity as well as a young age, and psychological data suggest that men interpret these female features as carrying this information.

1.4 Smaller WHRs and waist sizes indicate greater availability of reproductive resources
Because they have reached sexual maturity but have not yet been pregnant, nubile women should have maximum supplies of reproductive resources that are depleted by pregnancy and nursing, such as the omega-3 fatty acid docosahexaenoic acid (DHA). Many studies have shown that DHA is an important resource supporting neuro-cognitive development in infants and children (Janssen & Killiaan, 2014; Joffre et al., 2014; Lassek & Gaulin, 2014; 2015), and DHA stored in adipose is depleted by successive pregnancies (Dutta-Roy, 2000; Hanebutt et al., 2008; Hornstra, 2000; Lassek & Gaulin, 2006; 2008; Min et al., 2000). Stores of DHA would likely have been an increasingly important aspect of mate value in the hominin lineage as it experienced dramatic brain expansion. Most of the DHA used for fetal and infant brain development is stored in gluteofemoral fat until it is mobilized from this depot during pregnancy and nursing (Lassek & Gaulin, 2006; Rebuffe-Scrive et al., 1985; 1987). Indeed, a low WHR is associated with higher circulating levels of DHA (Harris et al., 2012; Karlsson et al., 2006; Micallef et al., 2009; Wang et al., 2008), as are a smaller waist size and lower levels of abdominal fat (Alsahari et al., 2017; Bender et al., 2014; Howe et al., 2014; Karlsson et al., 2006; Wagner et al., 2015). Thus, young women with smaller waists and WHRs are likely to have higher levels of DHA in their stored fat and so can provide more DHA to their children during pregnancy and nursing, which may result in enhanced cognitive ability in their offspring. Consistent with the possibility that female body shape reveals stored neuro-cognitive resources, in a large sample of American mothers, those with lower WHRs had children who scored higher on cognitive tests (controlling for other relevant factors, including income and education variables) (Lassek & Gaulin, 2008).
Moreover, the children of teenage mothers, at particular risk for cognitive deficits, scored significantly better on cognitive tests when their mothers had lower WHRs. To further examine the reproductive role of the gluteofemoral depot, we will assess the relationship of the waist/thigh ratio to plasma levels of DHA.

Around 75 pct of the minimum wage increase in Hungary was paid by consumers and 25 pct by firm owners; disemployment effects were greater in industries where passing the wage costs to consumers is more difficult

Who Pays for the Minimum Wage? Péter Harasztosi, Attila Lindner. American Economic Review, forthcoming.

Abstract: This paper provides a comprehensive assessment of the margins along which firms responded to a large and persistent minimum wage increase in Hungary. We show that employment elasticities are negative but small even four years after the reform; that around 75 percent of the minimum wage increase was paid by consumers and 25 percent by firm owners; that firms responded to the minimum wage by substituting labor with capital; and that dis-employment effects were greater in industries where passing the wage costs to consumers is more difficult. We estimate a model with monopolistic competition to explain these findings.

Why "surprising"? --- The Economist: Global meat-eating is on the rise, bringing surprising benefits

All year round, The Economist behaves like the lawmaker who never speaks without detracting from human knowledge. Now, benefits of eating meat that are surprising:

The way of more flesh
Global meat-eating is on the rise, bringing surprising benefits
The Economist, May 2nd 2019| BEIJING, DAKAR AND MUMBAI
As Africans get richer, they will eat more meat and live longer, healthier lives

THINGS WERE different 28 years ago, when Zhou Xueyu and her husband moved from the coastal province of Shandong to Beijing and began selling fresh pork. The Xinfadi agricultural market where they opened their stall was then a small outpost of the capital. Only at the busiest times of year, around holidays, might the couple sell more than 100kg of meat in a day. With China’s economic boom just beginning, pork was still a luxury for most people.

Ms Zhou now sells about two tonnes of meat a day. In between expert whacks of her heavy cleaver, she explains how her business has grown. She used to rely on a few suppliers in nearby provinces. Now the meat travels along China’s excellent motorway network from as far away as Heilongjiang, in the far north-east, and Sichuan, in the south-west. The Xinfadi market has changed, too. It is 100 times larger than when it opened in 1988, and now lies within Beijing, which has sprawled around it.

Between 1961 and 2013 the average Chinese person went from eating 4kg of meat a year to 62kg. Half of the world’s pork is eaten in the country. More liberal agricultural policies have allowed farms to produce more—in 1961 China was suffering under the awful experiment in collectivisation known as the “great leap forward”. But the main reason the Chinese are eating more meat is simply that they are wealthier.


In rich countries people go vegan for January and pour oat milk over their breakfast cereal. In the world as a whole, the trend is the other way. In the decade to 2017 global meat consumption rose by an average of 1.9% a year and fresh dairy consumption by 2.1%—both about twice as fast as population growth. Almost four-fifths of all agricultural land is dedicated to feeding livestock, if you count not just pasture but also cropland used to grow animal feed. Humans have bred so many animals for food that Earth’s mammalian biomass is thought to have quadrupled since the stone age (see chart).

Barring a big leap forward in laboratory-grown meat, this is likely to continue. The Food and Agriculture Organisation (FAO), an agency of the UN, estimates that the global number of ruminant livestock (that is, cattle, buffalo, sheep and goats) will rise from 4.1bn to 5.8bn between 2015 and 2050 under a business-as-usual scenario. The population of chickens is expected to grow even faster. The chicken is already by far the most common bird in the world, with about 23bn alive at the moment compared with 500m house sparrows.


Meanwhile the geography of meat-eating is changing. The countries that drove the global rise in the consumption of animal products over the past few decades are not the ones that will do so in future. Tastes in meat are changing, too. In some countries people are moving from pork or mutton to beef, whereas in others beef is giving way to chicken. These shifts from meat to meat and from country to country are just as important as the overall pattern of growth. They are also more cheering. On a planetary scale, the rise of meat- and dairy-eating is a giant environmental problem. Locally, however, it can be a boon.

Over the past few decades no animal has bulked up faster than the Chinese pig. Annual pork production in that country has grown more than 30-fold since the early 1960s, to 55m tonnes. It is mostly to feed the legions of porkers that China imports 100m tonnes of soybeans every year—two-thirds of trade in that commodity. It is largely through eating more pork and dairy that Chinese diets have come to resemble Western ones, rich in protein and fat. And it is mostly because their diets have altered that Chinese people have changed shape. The average 12-year-old urban boy was nine centimetres taller in 2010 than in 1985, the average girl seven centimetres taller. Boys in particular have also grown fatter.

China’s pork suppliers are swelling, too. Three-fifths of pigs already come from farms that produce more than 500 a year, and Wan Hongjian, vice-president of WH Group Ltd, China’s largest pork producer, thinks the proportion will rise. Disease is one reason. African swine fever, a viral disease fatal to pigs though harmless to people, has swept China and has led to the culling of about 1m hogs. The virus is tough, and can be eradicated only if farms maintain excellent hygiene. Bigger producers are likely to prove better at that.

High on the hog

Yet China’s pork companies are grabbing larger shares of a market that appears almost to have stopped growing. The OECD, a club of mostly rich countries, estimates that pork consumption in China has been more or less flat since 2014. It predicts growth of just under 1% a year over the next decade. If a country that eats so much of the stuff is indeed approaching peak pork, it hints at a big shift in global animal populations. Pigs will become a smaller presence on the global farm.

In 2015 animal products supplied 22% of the average Chinese person’s calorie intake, according to the FAO. That is only a shade below the average in rich countries (24%). “Unlike decades ago, there are no longer large chunks of the population out there that are not yet eating meat,” says Joel Haggard of the US Meat Export Federation, an industry group. And demography is beginning to prove a drag on demand. China’s population will start falling in about ten years’ time. The country is already ageing, which suppresses food consumption because old people eat less than young people do. UN demographers project that, between 2015 and 2050, the number of Chinese in their 20s will crash from 231m to 139m.

Besides, pork has strong competitors. “All over China there are people eating beef at McDonald’s and chicken at KFC,” says Mr Wan. Another fashion—hotpot restaurants where patrons cook meat in boiling pots of broth at the table—is boosting consumption of beef and lamb. Last year China overtook Brazil to become the world’s second-biggest beef market after America, according to the United States Department of Agriculture. Australia exports so much beef to China that the Global Times, a pugnacious state-owned newspaper, has suggested crimping the trade to punish Australia for various provocations.

The shift from pork to beef in the world’s most populous country is bad news for the environment. Because pigs require no pasture, and are efficient at converting feed into flesh, pork is among the greenest of meats. Cattle are usually much less efficient, although they can be farmed in different ways. And because cows are ruminants, they belch methane, a powerful greenhouse gas. A study of American farm data in 2014 estimated that, calorie for calorie, beef production requires three times as much animal feed as pork production and produces almost five times as much greenhouse gases. Other estimates suggest it uses two and a half times as much water.

Fortunately, even as the Chinese develop the taste for beef, Americans are losing it. Consumption per head peaked in 1976; around 1990 beef was overtaken by chicken as America’s favourite meat. Academics at Kansas State University linked that to the rise of women’s paid work. Between 1982 and 2007 a 1% increase in the female employment rate was associated with a 0.6% drop in demand for beef and a similar rise in demand for chicken. Perhaps working women think beef is more trouble to cook. Beef-eating has risen a little recently, probably because Americans are feeling wealthier. But chicken remains king.

Shifts like that are probably the most that can be expected in rich countries over the next few years. Despite eager predictions of a “second nutrition transition” to diets lower in meat and higher in grains and vegetables, Western diets are so far changing only in the details. Beef is a little less popular in some countries, but chicken is more so; people are drinking less milk but eating more cheese. The EU expects only a tiny decline in meat-eating, from 69.3kg per person to 68.7kg, between 2018 and 2030. Collectively, Europeans and Americans seem to desire neither more animal proteins nor fewer.

If the West is sated, and China is getting there, where is the growth coming from? One answer is India. Although Indians still eat astonishingly little meat—just 4kg a year—they are drinking far more milk, eating more cheese and cooking with more ghee (clarified butter) than before. In the 1970s India embarked on a top-down “white revolution” to match the green one. Dairy farmers were organised into co-operatives and encouraged to bring their milk to collection centres with refrigerated tanks. Milk production shot up from 20m tonnes in 1970 to 174m tonnes in 2018, making India the world’s biggest milk producer. The OECD expects India will produce 244m tonnes of milk in 2027.

All that dairy is both a source of national pride and a problem in a country governed by Hindu nationalists. Hindus hold cows to be sacred. Through laws, hectoring and “cow protection” squads, zealots have tried to prevent all Indians from eating beef or even exporting it to other countries. When cows grow too old to produce much milk, farmers are supposed to send them to bovine retirement homes. In fact, Indian dairy farmers seem to be ditching the holy cows for water buffalo. When these stop producing milk, they are killed and their rather stringy meat is eaten or exported. Much of it goes to Vietnam, then to China (often illegally, because of fears of foot-and-mouth disease).

But neither an Indian milk co-operative nor a large Chinese pig farm really represents the future of food. Look instead to a small, scruffy chicken farm just east of Dakar, the capital of Senegal. Some 2,000 birds squeeze into a simple concrete shed with large openings in the walls, which are covered with wire mesh. Though breezes blow through the building, the chickens’ droppings emit an ammoniac reek that clings to the nostrils. A few steps outside, the ground is brown with blood. Chickens have been stuffed into a makeshift apparatus of steel cones to protect their wings, and their necks cut with a knife.

Though it looks primitive, this represents a great advance over traditional west African farming methods. The chickens in the shed hardly resemble the variegated brown birds that can be seen pecking at the ground in any number of villages. They are commercial broilers—white creatures with big appetites that grow to 2kg in weight after just 35 days. All have been vaccinated against two widespread chicken-killers—Newcastle disease and infectious bursal disease. A vet, Mamadou Diouf, checks on them regularly (and chastises the farmers for killing too close to the shed). Mr Diouf says that when he started working in the district, in 2013, many farmers refused to let him in.

Official statistics suggest that the number of chickens in Senegal has increased from 24m to 60m since 2000. As people move from villages to cities, they have less time to make traditional stews—which might involve fish, mutton or beef as well as vegetables and spices, and are delicious. Instead they eat in cafés, or buy food that they can cook quickly. By the roads into Dakar posters advertise “le poulet prêt à cuire”, wrapped in plastic. Broiler farms are so productive that supermarket chickens are not just convenient but cheap.

Economic vegetarians

Many sub-Saharan Africans still eat almost no meat, dairy or fish. The FAO estimates that just 7% of people’s dietary energy comes from animal products, one-third of the proportion in China. This is seldom the result of religious or cultural prohibitions. If animal foods were cheaper, or if people had more money, they would eat more of them. Richard Waite of the World Resources Institute, an American think-tank, points out that when Africans move to rich countries and open restaurants, they tend to write meat-heavy menus.

Yet this frugal continent is beginning to sway the global food system. The UN thinks that the population of sub-Saharan Africa will reach 2bn in the mid-2040s, up from 1.1bn today. That would lead to a huge increase in meat- and dairy-eating even if people’s diets stayed the same. But they will not. The population of Kenya has grown by 58% since 2000, while the output of beef has more than doubled.

Africa already imports more meat each year than does China, and the OECD’s forecasters expect imports to keep growing by more than 3% a year. But most of the continent’s meat will probably be home-grown. The FAO predicts that in 2050 almost two out of every five ruminant livestock animals in the world will be African. The number of chickens in Africa is projected to quadruple, to 7bn.

This will strain the environment. Although African broilers and battery hens are more or less as productive as chickens anywhere, African cattle are the world’s feeblest. Not only are they poorly fed and seldom visited by vets; in many areas they are treated more as stores of wealth than producers of food. Africa has 23% of the world’s cattle but produces 10% of the world’s beef and just 5% of its milk.

Lorenzo Bellù of the FAO points out that herders routinely encroach on national parks and private lands in east Africa. He finds it hard to imagine that the continent’s hunger for meat will be supplied entirely by making farming more efficient. Almost certainly, much forest will be cut down. Other consequences will be global. Sub-Saharan Africans currently have tiny carbon footprints because they use so little energy—excluding South Africa, the entire continent produces about as much electricity as France. The armies of cattle, goats and sheep will raise Africans’ collective contribution to global climate change, though not to near Western or Chinese levels.

The low-productivity horns of Africa

People will probably become healthier, though. Many African children are stunted (notably small for their age) partly because they do not get enough micronutrients such as Vitamin A. Iron deficiency is startlingly common. In Senegal a health survey in 2017 found that 42% of young children and 14% of women are moderately or severely anaemic. Poor nutrition stunts brains as well as bodies.

Animal products are excellent sources of essential vitamins and minerals. Studies in several developing countries have shown that giving milk to schoolchildren makes them taller. Recent research in rural western Kenya found that children who regularly ate eggs grew 5% faster than children who did not; cow’s milk had a smaller effect. But meat—or, rather, animals—can be dangerous, too. In Africa chickens are often allowed to run in and out of people’s homes. Their eggs and flesh seem to improve human health; their droppings do not. One study in Ghana found that childhood anaemia is more common in chicken-owning households, perhaps because the nippers caught more diseases.

Africans’ changing diets also create opportunities for local businesses. As cities grow, and as people in those cities demand more animal protein, national supply chains become bigger and more sophisticated. Animal breeders, hatcheries, vets and trucking companies multiply. People stop feeding kitchen scraps to animals and start using commercial feed. In Nigeria the amount of maize used for animal-feed shot up from 300,000 tonnes to 1.8m tonnes between 2003 and 2015.

You can see this on the outskirts of Dakar—indeed, the building is so big that you can hardly miss it. NMA Sanders, a feed-mill, turned out some 140,000 tonnes of chicken feed last year, up from 122,000 the year before, according to its director of quality, Cheikh Alioune Konaté. The warehouse floor is piled high with raw ingredients: maize from Morocco, Egypt and Brazil; soya cake from Mali; fishmeal from local suppliers. The mill has created many jobs, from the labourers who fill bags with pelleted feed to the technicians who run the computer system, and managers like Mr Konaté. Lorries come and go.

It is often said that sub-Saharan Africa lacks an industrial base, and this is true. Just one car in every 85 is made in Africa, according to the International Organisation of Motor Vehicle Manufacturers. But to look only for high-tech, export-oriented industries risks overlooking the continent’s increasingly sophisticated food-producers, who are responding to urban demand. Ideally, Africa would learn to fill shipping containers with clothes and gadgets. For now, there are some jobs to be had filling bellies with meat.

This article appeared in the International section of the print edition under the headline "A meaty planet"

When choosing among an overabundance of alternatives, participants express more positive feelings (i.e., higher satisfaction/confidence, lower regret & difficulty) if all the options of the choice set are associated with familiar brands

The Role of the Brand on Choice Overload. Raffaella Misuraca. Mind & Society, May 8 2019.

Abstract: Current research on choice overload has been mainly conducted with choice options not associated with specific brands. This study investigates whether the presence of brand names in the choice set affects the occurrence of choice overload. Across four studies, we find that when choosing among an overabundance of alternatives, participants express more positive feelings (i.e., higher satisfaction/confidence, lower regret and difficulty) when all the options of the choice set are associated with familiar brands, rather than unfamiliar brands or no brand at all. We also find that choice overload only appears in the absence of brand names, but disappears when all options contain brand names—either familiar or unfamiliar. Theoretical and practical implications are discussed.

Keywords: Choice overload, Brand, Consumer decisions, Decision-making

Retrofitting the 29 mn UK homes would cost £4.3 tn; if the energy bill of £2000 per year were to be halved, savings would be £29 bn/year; payback time would be 150 years

Decarbonisation and the Command Economy. Michael Kelly. GWPF, May 8 2019.

The costs of retrofitting existing domestic buildings to improve energy efficiency and reduce CO2 emissions, compared with the savings on energy bills, represent a wholly unsatisfactory return on investment from a family perspective. A command economy would be required to make any serious inroads on the challenge as proposed by the Committee on Climate Change.

In its recent (February 2019) report, ‘UK Housing: Fit for the Future?’, the Committee on Climate Change argues that the 29 million existing homes must be made low-carbon, low-energy and resilient to climate change. This note is an abbreviated update of a study[1] I prepared subsequent to a three-year appointment as Chief Scientific Adviser to the Department for Communities and Local Government during 2006–9. I also delivered an ‘amateur’ prospectus to the Council, University, Business and Entrepreneurial sectors of the City of Cambridge, with an estimated bill of £0.7–1 billion to retrofit the 49,000 houses and 5,500 other buildings within the city boundaries to halve their net CO2 emissions.

On the basis of a presentation I made to the then Science Minister, Lord Drayson, in 2008, the Government launched a pilot ‘Retrofit for the Future’ programme, with up to £150,000 per house devoted to over 100 houses in the housing association sector. This programme, and its outcomes[2], did not rate a mention in the recent CCC report. However, I have visited one of these houses, and seen a 60% reduction in CO2 emissions after the retrofit (the target was 80%): full wall insulation, underfloor insulation, use of the newest appliances, etc. At this rate of spend, the 29 million existing homes across the UK would cost £4.3 trillion to retrofit. If the typical energy bill of £2,000 per year were to be halved, the saving would be £29 billion per year and the payback time would be 150 years! Who would lend or invest on that basis?
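The back-of-envelope arithmetic above can be checked directly. This is only a sanity-check sketch using the figures stated in the note (29 million homes, £150,000 per house, a £2,000 annual bill cut in half), not a model of retrofit economics:

```python
# Sanity-check of the retrofit payback arithmetic, using the note's own figures.
homes = 29_000_000             # existing UK homes
cost_per_home = 150_000        # £ per house, at the pilot programme's rate of spend
energy_bill = 2_000            # £ per year, typical household energy bill
savings_fraction = 0.5         # bills assumed to halve after retrofit

total_cost = homes * cost_per_home                        # national retrofit bill
annual_savings = homes * energy_bill * savings_fraction   # national yearly saving
payback_years = total_cost / annual_savings               # simple payback period

print(f"Total cost:     £{total_cost / 1e12:.2f} trillion")
print(f"Annual savings: £{annual_savings / 1e9:.0f} billion")
print(f"Payback:        {payback_years:.0f} years")
```

The exact product is £4.35 trillion, which the note rounds to £4.3 trillion; the 150-year payback follows directly.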

In fact, the £150,000 limit was set to ensure that the end target of an 80% reduction in CO2 emissions could be met[3], on the understanding that economies of scale and learning by doing would reduce the cost per household by at most 3–5-fold. However, how much of a cost reduction is required before private individuals would invest in improving the energy efficiency of their homes? This would be limited by the conditions set by lenders, who want a payback of 3–4 years on most investments, stretching to perhaps 7–8 years on infrastructure investments in the home. The implied lending ceiling of about £10,000 per house goes nowhere on energy-efficiency measures and would not deliver a 50%, let alone 80%, energy reduction.
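The lending ceiling follows from the same figures: halving a £2,000 bill saves £1,000 a year, and a lender tolerating at most a 7–8 year payback will therefore advance only a few thousand pounds per house. A minimal sketch, assuming the note's £1,000/year saving and an 8-year upper bound:

```python
# Implied per-house lending ceiling, under the note's stated assumptions.
annual_saving = 1_000    # £/year, from halving a £2,000 energy bill
payback_limit = 8        # years, upper bound lenders accept for home infrastructure

lending_ceiling = annual_saving * payback_limit
print(f"Implied ceiling: about £{lending_ceiling:,} per house")
```

This gives £8,000, broadly consistent with the "£10,000 per house" ceiling cited in the text.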

Only if there is a Government direction to spend this scale of money on this issue will any significant inroads be made in energy reductions in existing houses. No political party would commit to this level of spend on a national retrofit programme until the need is pressing and urgent, not on a distant horizon. There is no ducking or diving from this conclusion.

The progress since the 2010 CCC report on housing[4] is nugatory, and a third report will no doubt be written in 10 years’ time, with similar pleas.

Michael Kelly is Prince Philip Professor of Technology (emeritus) at the University of Cambridge and a former chief scientist at the Department of Communities and Local Government.

[1] See original article at the link above

[2] Rajat Gupta, Matt Gregg, Stephen Passmore & Geoffrey Stevens, ‘Intent and outcomes from the Retrofit for the Future programme: key lessons’, Building Research & Information, 43:4, 435-451, 2015. DOI: 10.1080/09613218.2015.1024042.

[3] Note that only 3 of the 45 projects for which full data were available actually met the 80% reduction target.


Ray of hope: Hopelessness Increases Preferences for Brighter Lighting

Francis, G., & Thunell, E. (2019). Excess Success in “Ray of hope: Hopelessness Increases Preferences for Brighter Lighting”. Collabra: Psychology, 5(1), 22.

Abstract: Dong, Huang, and Zhong (2015) report five successful experiments linking brightness perception with the feeling of hopelessness. They argue that a gloomy future is psychologically represented as darkness, not just metaphorically but as an actual perceptual bias. Based on multiple results, they conclude that people who feel hopeless perceive their environment as darker and therefore prefer brighter lighting than controls. Conversely, dim lighting caused participants to feel more hopeless. However, the experiments succeed at a rate much higher than predicted by the magnitude of the reported effects. Based on the reported statistics, the estimated probability of all five experiments being fully successful, if replicated with the same sample sizes, is less than 0.016. This low rate suggests that the original findings are (perhaps unintentionally) the result of questionable research practices or publication bias. Readers should therefore be skeptical about the original results and conclusions. Finally, we discuss how to design future studies to investigate the relationship between hopelessness and brightness.

Keywords: Excess success, publication bias, brightness perception, perceptual bias, statistics

Differences in how men and women describe their traits are typically larger in highly gender egalitarian cultures; replicated in one of the largest number of cultures yet investigated—58 nations of the ISDP-2 Project

Why Sometimes a Man is more like a Woman. David P Schmitt. Chapter 12 of In Praise of An Inquisitive Mind. Anu Realo, Ed. Univ. of Tartu Press, 2019.

Among his many achievements, Jüri Allik and his colleagues were among the first to document a cross-cultural “gender paradox” in people’s self-reported personality traits. Namely, differences in how men and women describe their traits are typically larger and more conspicuous in highly gender egalitarian cultures (e.g., across Scandinavia where women and men experience more similar gender roles, sex role socialization, and sociopolitical gender equity) compared to less gender egalitarian cultures (e.g., across Africa or South/Southeast Asia). It is my honor to celebrate Jüri Allik’s sterling career with this chapter on sex differences in personality traits across one of the largest number of cultures yet investigated—58 nations of the International Sexuality Description Project-2 (ISDP-2). In this dataset, the gender paradoxical findings were replicated, with sex differences in Big Five personality traits being demonstrably larger in more gender egalitarian cultures. In our current era of most findings from classic psychological science failing to replicate, this successful replication serves as a testament to Jüri Allik’s status as among the most rigorous and prescient scientists within the field of personality psychology.

Politically incorrect paper: Given that sex-egalitarian countries tend to have the greatest sex differences in personality and occupational choices, sex-specific policies (increasing vacancies for the sex with the lower hire proportion) may not be effective:
Sex and Care: The Evolutionary Psychological Explanations for Sex Differences in Formal Care Occupations. Peter Kay Chai Tay, Yi Yuan Ting and Kok Yang Tan. Front. Psychol., April 17 2019.

2002-2016: Binge drinking decreased substantially among US adolescents across time, age, gender, and race/ethnicity; alcohol abstention increased among US adolescents over the past 15 years

Trends in binge drinking and alcohol abstention among adolescents in the US, 2002-2016. Trenette Clark Goings et al. Drug and Alcohol Dependence, May 8 2019.

•    Binge drinking decreased substantially among US adolescents across time
•    Binge drinking decreased across age, gender, and race/ethnicity
•    Alcohol abstention increased among US adolescents over the past 15 years

Background: Binge drinking accounts for several adverse health, social, legal, and academic outcomes among adolescents. Understanding trends and correlates of binge drinking and alcohol abstention has important implications for policy and programs and was the aim of this study. The current study examined trends in adolescent binge drinking and alcohol abstention by age, gender, and race/ethnicity over a 15-year period.

Methods: Respondents between the ages of 12 and 17 years who participated in the National Survey on Drug Use and Health (NSDUH) between 2002 and 2016 were included in the sample of 258,309. Measures included binge drinking, alcohol abstention, co-morbid factors (e.g., marijuana and other illicit drug use), and demographic factors.

Results: Logistic regression analyses were conducted to examine the significance of trend changes by sub-groups while controlling for co-morbid and demographic factors. Findings indicated that binge drinking decreased substantially among adolescents in the US over the last 15 years. This decrease was shown among all age, gender, and racial/ethnic groups. In 2002, Year 1 of the study, 26% of 17-year-olds reported past-month binge drinking; in 2016, past-month binge drinking dropped to 12%. Findings also indicated comparable increases in the proportion of youth reporting abstention from alcohol consumption across all subgroups. Black youth reported substantially lower levels of binge alcohol use and higher levels of abstention, although the gap between Black, Hispanic and White youth narrowed substantially between 2002 and 2016.

Conclusion: Study findings are consistent with those of other research showing declines in problem alcohol- use behavior among youth.

Voice of Authority: Professionals Lower Their Vocal Frequencies When Giving Expert Advice

Voice of Authority: Professionals Lower Their Vocal Frequencies When Giving Expert Advice. Piotr Sorokowski et al. Journal of Nonverbal Behavior, May 7 2019.

Abstract: Acoustic analysis and playback studies have greatly advanced our understanding of between-individual differences in nonverbal communication. Yet, researchers have only recently begun to investigate within-individual variation in the voice, particularly how people modulate key vocal parameters across various social contexts, with most of this research focusing on mating contexts. Here, we investigated whether men and women modulate the frequency components of their voices in a professional context, and how this voice modulation affects listeners’ assessments of the speakers’ competence and authority. Research assistants engaged scientists working as faculty members at various universities in two types of speech conditions: (1) Control speech, wherein the subjects were asked how to get to the administrative offices on that given campus; and (2) Authority speech, wherein the same subjects were asked to provide commentary for a radio program for young scholars titled, “How to become a scientist, and is it worth it?”. Our results show that male (n = 27) and female (n = 24) faculty members lowered their mean voice pitch (measured as fundamental frequency, F0) and vocal tract resonances (measured as formant position, Pf) when asked to provide their expert opinion compared to when giving directions. Notably, women lowered their mean voice pitch more than did men (by 33 Hz vs. 14 Hz) when giving expert advice. The results of a playback experiment further indicated that foreign-speaking listeners judged the voices of faculty members as relatively more competent and more authoritative based on authority speech than control speech, indicating that the observed nonverbal voice modulation effectively altered listeners’ perceptions. Our results support the prediction that people modulate their voices in social contexts in ways that are likely to elicit favorable social appraisals.

Keywords: Authority, Fundamental frequency, Voice pitch, Formant frequencies, Voice modulation

Spouses' Faces Are Similar but Do Not Become More Similar with Time

Tea-mangkornpan, Pin Pin, and Michal Kosinski. 2019. “Spouses' Faces Are Similar but Do Not Become More Similar with Time.” PsyArXiv. May 8. doi:10.31234/

Abstract: The convergence in physical appearance hypothesis posits that long-term partners’ faces become more similar with time as a function of the shared environment, diet, and synchronized facial expressions. While this hypothesis has been widely disseminated in psychological literature, it is supported by a single study of 12 married couples. Here, we examine this hypothesis using the facial images of 517 couples taken at the beginning of their marriage and 20 or more years later. Their facial similarity is estimated using two independent methods: human judgments and a facial recognition algorithm. The results show that while spouses’ faces tend to be similar at marriage, they do not converge over time. In fact, they become slightly less similar. These findings bring facial appearance in line with other personal characteristics—such as personality, intelligence, interests, attitudes, values, and well-being—through which spouses show initial similarity but no convergence over time.