Saturday, January 8, 2022

The evolution of extraordinary self-sacrifice

The evolution of extraordinary self-sacrifice. D. B. Krupp & Wes Maciejewski. Scientific Reports volume 12, Article number: 90. Jan 7 2022. https://www.nature.com/articles/s41598-021-04192-w

Abstract: From a theoretical perspective, individuals are expected to sacrifice their welfare only when the benefits outweigh the costs. In nature, however, the costs of altruism and spite can be extreme, as in cases of irreversible sterility and self-destructive weaponry. Here we show that “extraordinary” self-sacrifice—in which actors pay costs that exceed the benefits they give or the costs they impose on recipients—can evolve in structured populations, where social actions bring secondary benefits to neighboring kin. When given information about dispersal, sedentary actors evolve extraordinary altruism towards dispersing kin. Likewise, when given information about dispersal and kinship, sedentary actors evolve extraordinary spite towards sedentary nonkin. Our results can thus be summed up by a simple rule: extraordinary self-sacrifice evolves when the actor’s neighbors are close kin and the recipient’s neighbors are not.

Discussion

We find that individuals can evolve to value others’ fitness more than their own. Specifically, selection favors extraordinary altruism when sedentary actors interact with dispersing kin (Fig. 3f,l), and it favors extraordinary spite when sedentary actors interact with sedentary nonkin (Fig. 3o). Because extraordinary self-sacrifice entails C > |B|, the sum of the effects on the actor and recipient is always negative (B − C < 0), leading overall to a secondary decrease in competition that can benefit kin. Under limited dispersal, the actor’s neighboring kin benefit secondarily when the actor remains on the natal island. Likewise, the actor’s neighboring kin benefit secondarily from spite when the recipient remains on the actor’s natal island. Finally, in the case of altruism, it is nonkin that pay the price when the recipient arrives on their island. Taken together, we arrive at a simple rule: extraordinary self-sacrifice evolves when an actor’s neighbors are close kin and the recipient’s neighbors are not.

The effects of dispersal and kinship among actors, recipients, and neighbors also become apparent when we consider the conditions of our model that fail to favor the evolution of extraordinary self-sacrifice, even when dispersal is limited (d → 0) and neighborhood consanguinity is high (q → 1). First, extraordinary self-sacrifice in general does not evolve with a dispersing actor (Fig. 3b,d,e,h–k), because, by dispersing to a new island, the actor gives the secondary benefit of its sacrifice (in the form of reduced competition) to neighbors who are not kin and who therefore bear rival alleles. Second, extraordinary altruism does not evolve with a recipient that is not kin (Fig. 3i,k,m), because this provides a primary benefit to a recipient bearing a rival allele. Third, extraordinary altruism does not evolve with a sedentary recipient (Fig. 3a,c,e,g,j,k,n), because, by remaining on the natal island, the recipient imposes a secondary cost (in the form of increased competition) on neighbors who are the actor’s kin and who therefore bear copies of the focal allele. Fourth, extraordinary spite does not evolve with a recipient that is likely or known to be kin (Fig. 3a–e,g,h,j,n), because this imposes a primary cost on a recipient bearing a copy of the focal allele. Finally, extraordinary spite does not evolve with a dispersing recipient (Fig. 3d,h,i,m), for the same reason that it does not evolve with a dispersing actor: because, by dispersing to a new island, the recipient gives the secondary benefit of the spiteful action (in the form of reduced competition) to neighbors who are not the actor’s kin and who therefore bear rival alleles.

We are aware of two other models that report conditions under which extraordinary self-sacrifice can evolve. The first, by Krupp and Taylor23, was briefly discussed above. It also used an inclusive fitness approach set in an island structure, wherein actors could use a signal matching mechanism to distinguish between “native” individuals, whose parents were born on the focal island, and “migrant” individuals, whose parents were born elsewhere. Although actors had no information about dispersal status in their model, dispersal was generally assumed to be rare (d → 0), causing native actors to be close kin with their neighbors and causing both actors and recipients to be sedentary. Given the close parallels between these conditions and our own (represented in Fig. 3o), it is no surprise that they found that extraordinary spite can evolve among native actors interacting with migrant recipients. Our model extends their analysis, separating the effects of actor and recipient dispersal and making them explicit.

The second model, by McAvoy et al.33, used a game theoretic approach set in a heterogeneous social network of N individuals, each of whom plays either a “producer” strategy that pays a cost to give a benefit or a “non-producer” strategy that pays no cost and gives no benefit. (Because their approach differs significantly from our own, we have changed their notation and description to better correspond to ours.) One set of games entailed proportional benefits but fixed costs (“pf goods”), wherein a new benefit is given to each connected recipient without additional cost to the actor. Thus, if actor i is connected to n_i recipients, then in games with pf goods, i pays c_i > 0 only once to give a benefit b_i > 0 to each of the n_i recipients. McAvoy et al. found that c_i > b_i can evolve in games with pf goods when there are more connections among individuals in the network than there are individuals themselves. However, this implies that C < B, because the net benefit B = b_i n_i grows with the number of connections whereas the net cost C = c_i does not. Consequently, these results do not meet the definition of extraordinary self-sacrifice.
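The pf-goods bookkeeping can be made concrete with a small numeric sketch (the values are invented for illustration and are not taken from either paper): because the net benefit B = b_i n_i grows with the number of connections while the net cost C = c_i stays fixed, a per-recipient cost exceeding the per-recipient benefit (c_i > b_i) is still compatible with C < B.

```python
# Hypothetical illustration of "pf goods" (proportional benefits,
# fixed costs), using the notation from the text; values are invented.

def pf_goods_net(c_i, b_i, n_i):
    """Return (C, B): net cost and net benefit for an actor
    connected to n_i recipients."""
    C = c_i        # the cost is paid only once
    B = b_i * n_i  # each of the n_i recipients gets a fresh benefit b_i
    return C, B

# Per recipient, the cost exceeds the benefit (c_i = 3 > b_i = 1)...
C, B = pf_goods_net(c_i=3.0, b_i=1.0, n_i=5)
# ...but with enough connections the *net* benefit exceeds the net
# cost, so this does not meet the definition of extraordinary
# self-sacrifice.
assert C < B
```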

Another set of games in the McAvoy et al. model entailed fixed benefits and fixed costs (“ff goods”), wherein the benefit is divided equally among all connected recipients. Thus, in games with ff goods, the actor pays c_i only once to give a benefit b_i/n_i to each of the n_i recipients. McAvoy et al. found that c_i > b_i can evolve in games with ff goods within “rich-club” networks consisting of a central group of m individuals who are connected to each other as well as to a peripheral group of l individuals who are connected only to the members of the central group. Under these circumstances, the producer strategy works well for the central group but poorly for the peripheral group; nevertheless, the peripheral group evolves to play the producer strategy. We suspect, however, that this exploitative state of affairs is maintained by a peculiarity of the updating mechanisms of the model, which require individuals to imitate the strategy of better-performing connections, even if it is to their detriment. By playing the producer strategy, the central group causes the peripheral group to play the producer strategy as well: central producers have higher payoffs than peripheral non-producers, so peripheral non-producers must update their strategy to produce—despite the fact that it leaves them worse off—because they are connected strictly to better-performing central producers. As the authors show, the central group benefits greatly from this arrangement, particularly as the size of the peripheral group increases, while the peripheral group suffers losses. On the one hand, this implies that C < B for the central group, because the initial cost c_i of the producer strategy to central individuals is more than repaid by the benefits b_i l/m it receives in return for causing peripheral individuals to produce as well; that is, the net cost C = c_i − b_i l/m to a central producer is negative, meaning that it is actually a benefit.
On the other hand, this also implies that C > B for the peripheral group, because the net cost to a peripheral producer is C = c_i and the net benefit it gives is B = b_i. Thus, production at the periphery would seem to meet the definition of extraordinary self-sacrifice. We wonder, then, if selection would still favor extraordinary self-sacrifice under these conditions if individuals were not powerless to play the strategy that worked best for them, irrespective of the strategy played by their connections.
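Under the same notation, the rich-club accounting can be sketched as follows (again with invented numbers): with m central and l peripheral individuals all producing, a central producer’s net cost c_i − b_i l/m turns negative as the periphery grows, while a peripheral producer is left with C = c_i exceeding B = b_i.

```python
# Hypothetical bookkeeping for "ff goods" in a rich-club network:
# m central individuals connected to each other and to l peripheral
# individuals, who connect only to the centre. Values are invented.

def central_net_cost(c_i, b_i, l, m):
    # The centre pays c_i once; each of the l peripheral producers
    # splits its benefit b_i among its m central connections, so a
    # central producer recoups b_i * l / m in return.
    return c_i - b_i * l / m

def peripheral_net(c_i, b_i):
    # A peripheral producer pays C = c_i and gives away B = b_i in total.
    return c_i, b_i

# As the periphery grows, the central "cost" becomes a net benefit (C < 0):
assert central_net_cost(c_i=1.0, b_i=0.8, l=20, m=4) < 0
# while at the periphery C > B, the definition of extraordinary self-sacrifice:
C, B = peripheral_net(c_i=1.0, b_i=0.8)
assert C > B
```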

Hamilton4 initially proposed that limited dispersal (in the form of “viscous” populations) would foster the evolution of altruism, because it would give kin the opportunity to interact. Conversely, he suggested that spite would most likely evolve in “dwindling panmictic species”5. With the benefit of hindsight, however, we can see that both claims are in need of refinement. From an inclusive fitness perspective, a cost to the actor must be compensated by a benefit to kin, being either the recipient or a neighbor. But limited dispersal also puts these parties in competition with one another, turning altruistic benefits to the former into costs to the latter9,11,13,34. As we find here, however, the dilemma of limited dispersal is resolved if the recipient—but not the actor or neighboring kin—can be expected to disperse after the interaction, providing a primary benefit to a consanguineous recipient and imposing a secondary cost on nonkin elsewhere (much as predicted by [9]). Indeed, the high degree of kinship that limited dispersal brings to a neighborhood is essential to the evolution of extraordinary altruism.

Likewise, spite profits not from panmixia but from population structure, because the primary costs to both the actor and the recipient are returned as secondary benefits to the actor’s neighboring kin. However, since limited dispersal increases the chances that individuals interact with kin, actors cannot simply be spiteful to anyone3. Rather, actors should discriminate along genealogical lines and, although they may not be strictly necessary, kin recognition mechanisms can be helpful in this regard.

Our model makes use of learned kin recognition systems, which are widespread in nature35,36,37,38,39. To the extent that kin recognition can operate via other routes, however, our results are not limited to organisms capable of cognition. While there are known theoretical obstacles to the evolution of genetic kin recognition40,41, for example, systems such as this have been identified and characterized in several species (e.g.42,43,44). Indeed, allorecognition is common, predates multicellularity, and has independently evolved numerous times45.

Notably, our study departs from previous theoretical work on the evolution of self-sacrifice under dispersal and kinship, largely because we ask whether individuals might discriminate not only as a function of kinship but also as a function of both actor and recipient dispersal (see also [22]). Plausibly, sedentary and dispersing types can evolve different degrees of self-interest, such that sedentary individuals generally give more than their dispersing counterparts. This should occur when the actions of sedentary individuals are systematically “funnelled” towards dispersing individuals as a function of organismal physiology or species ecology. In some cases, actors may even cause recipients to develop a dispersing phenotype—for example, by influencing caste determination46.

However, our findings also suggest the possibility that, alongside mechanisms of kin recognition, species may have evolved adaptations to estimate the spatial scale of competition47,48,49, such as mechanisms of dispersal recognition. That is, organisms might identify cues of their probability of future competition with social partners or neighboring kin and discriminate accordingly. Certainly, the eusocial insects already provide ample evidence that sedentary and dispersing individuals behave differently. For example, workers of most such species act altruistically (or, in some cases, spitefully) and are sedentary whereas reproductives act selfishly and disperse to found new colonies19. Yet, it is unclear whether some mechanism of competition estimation is the cause of such differences. If so, we might expect that individuals can predict the probability of partner dispersal by cues of future dispersal status, such as by chemical signal (e.g.50), location within the colony, presence of wings, or body size.

Though this is a knottier problem than we can address here, our results also suggest that aspects of multicellular and colony evolution, such as the division of labor between cell lines and self/nonself discrimination, are a consequence of dispersal patterns and their attendant secondary effects. Sedentary cells may sacrifice themselves to assist dispersing cells in reproducing elsewhere, as can be seen in the social amoeba Dictyostelium discoideum which, when starved, aggregates with kin to form a sterile stalk and a reproductive fruiting body51. Interestingly, individuals that starve earlier are more likely than those that starve later to become spores52. This presents the possibility that signals produced by early starvers to aggregate are attended to because they predict that these same individuals will disperse and compete elsewhere—signals that may be kept honest by virtue of the fact that starvation itself imposes pressure to disperse to find new food sources.

Moreover, sedentary individuals may serve as soldiers or enforcers, ensuring the integrity of the body or colony. For instance, the ascidian Botryllus schlosseri operates under limited dispersal, fusing with kin to create a colony with a shared vasculature53. However, when individuals encounter nonkin, they produce an immune response that causes damage at the interaction site42,54 which, arguably, is a spiteful response to a foreign competitor. Likewise, clones of the polyembryonic parasitoid wasp Copidosoma floridanum develop into two distinct castes: soldiers and reproductives. Soldiers grow quickly, spitefully attacking unrelated competitors with specialized mandibles and dying in the host body, whereas reproductives grow more slowly, eventually dispersing to parasitize new hosts55,56.

Of course, extraordinary self-sacrifice may evolve more or less easily in the wild than our model suggests. For instance, beyond the assumptions that actors use information about kinship and dispersal, we also assumed diminishing returns of the actor’s behavior, which can make the evolution of extraordinary self-sacrifice more difficult than other kinds of cost-benefit relationship would. While there are many cases of social systems with diminishing returns32, it is possible that some interactions yield linear or even accelerating returns, improving the conditions for extraordinary self-sacrifice. Likewise, whether social goods entail proportional or fixed costs and benefits33 may also affect the ease with which extraordinary self-sacrifice evolves.

Even if each of the assumptions made here is met, other factors (such as sexual reproduction) may reduce consanguinity within the neighborhood, working against the evolution of extraordinary self-sacrifice. More generally, extraordinary self-sacrifice may cause significant but rare evolutionary events. This is because the conditions required to support it are themselves likely to be rare, as evidenced by the many scenarios of our model (represented in Fig. 3a–e,g–k,m,n) in which extraordinary self-sacrifice is not evolutionarily stable. Thus, while our model has been productive in demonstrating when and where extraordinary self-sacrifice might arise, further work establishing its prevalence, both theoretical and empirical, is certainly needed. In particular, complementary approaches, such as direct fitness and evolutionary game theoretic methods, may reveal further insights and applications.

Some people are just naturally better at comparing—matching—different visual patterns (faces, firearms, fingerprints) side by side

Match me if you can: Evidence for a domain-general visual comparison ability. Bethany Growns, James D. Dunn, Erwin J. A. T. Mattijssen, Adele Quigley-McBride & Alice Towler. Psychonomic Bulletin & Review, Jan 7 2022. https://link.springer.com/article/10.3758/s13423-021-02044-2

Abstract: Visual comparison—comparing visual stimuli (e.g., fingerprints) side by side and determining whether they originate from the same or different source (i.e., “match”)—is a complex discrimination task involving many cognitive and perceptual processes. Despite the real-world consequences of this task, which is often conducted by forensic scientists, little is understood about the psychological processes underpinning this ability. There are substantial individual differences in visual comparison accuracy amongst both professionals and novices. The source of this variation is unknown, but may reflect a domain-general and naturally varying perceptual ability. Here, we investigate this by comparing individual differences (N = 248 across two studies) in four visual comparison domains: faces, fingerprints, firearms, and artificial prints. Accuracy on all comparison tasks was significantly correlated and accounted for a substantial portion of variance (e.g., 42% in Exp. 1) in performance across all tasks. Importantly, this relationship cannot be attributed to participants’ intrinsic motivation or skill in other visual-perceptual tasks (visual search and visual statistical learning). This paper provides novel evidence of a reliable, domain-general visual comparison ability.

General discussion

Across two experiments, we explored whether there is a generalizable and domain-general perceptual skill underpinning the ability to compare—or “match”—different visual stimuli. Participants’ sensitivities in four different comparison tasks were all significantly correlated with each other, and a substantial portion of variance (41.99% in Experiment 1 and 34.92% in Experiment 2) across all tasks was accounted for by one shared “matching” component in both experiments. Together, these results support the conclusion that individual differences in visual comparison accuracy are explained by a shared ability that generalizes across a range of visual stimuli. Notably, intrinsic motivation (Experiment 1), visual search and visual statistical learning (Experiment 2) did not significantly correlate with sensitivity in any comparison task and loaded onto separate components that accounted for large proportions of the variance across all tasks (20.95% in Experiment 1 and 19.07% in Experiment 2). This suggests that individual differences in visual comparison cannot be attributed to individual differences in intrinsic motivation or other visual-perceptual tasks.

Importantly, our study also provides evidence of stimulus-specific individual differences. This is reflected in the moderate correlations seen between sensitivity in all comparison tasks across both experiments, and in the principal components analysis, where additional components featured loadings from just one or a subset of comparison tasks. This suggests there are also likely stimulus-specific skills, such that some people are better at comparing certain types of stimuli than others. Overall, our results are the first to suggest that visual comparison reflects an interplay between an overarching, generalizable comparison ability and individual stimulus-specific abilities.

This stimulus-specific skill may be partially attributed to stimulus familiarity and experience. Face-comparison performance—involving the most familiar stimulus class—demonstrated the highest stimulus-specific variance: face-comparison sensitivity had the lowest average correlation with all other tasks (r = .267 in Experiment 1 and .289 in Experiment 2) and accounted for the third to fourth-largest portion of variation (16.37% in Experiment 1 and 11.55% in Experiment 2) across all tasks. In contrast to faces, fingerprint, firearms and artificial-print sensitivity accounted for less variance in our data—where familiarity with these stimuli ranges from unfamiliar to entirely novel. This is consistent with research that suggests there is a shift from domain-general to domain-specific mechanisms with increased perceptual experience in a domain (Chang & Gauthier, 2020, 2021; Sunday et al., 2018; Wong et al., 2014; Wong & Gauthier, 2010, 2012), and research that links experience and visual comparison performance (Thompson & Tangen, 2014).

Our results highlight visual comparison as a natural and generalizable ability that varies in the general population—yet the precise mechanisms underpinning this skill are only beginning to be explored (see Growns & Martire, 2020b, for review). It is possible that holistic processing—or the ability to view images as a ‘whole’ rather than a collection of features (Maurer et al., 2002)—underpins visual comparison performance: both facial and fingerprint examiners show evidence of holistic processing when viewing domain-specific stimuli (Busey & Vanderkolk, 2005; Towler, White, & Kemp, 2017b; Vogelsang et al., 2017). In contrast, featural processing—or the ability to view images as separate features—is also important in visual comparison. Professional performance is improved when examiners have an opportunity to engage featural processing: both facial and fingerprint examiners demonstrate greater performance gains than novices in domain-specific visual comparison tasks (Thompson et al., 2014; Towler, White, & Kemp, 2017; White, Phillips, et al., 2015). Novices’ face-comparison performance also correlates with featural processing tasks such as the Navon and figure-matching tasks (Burton et al., 2010; McCaffery et al., 2018), and novices’ comparison performance is improved by instructing participants to rate or label features (Searston & Tangen, 2017c; Towler, White, & Kemp, 2017b). Low-performing novices also derive greater benefit from featural comparison training than high-performers—suggesting high-performers may already use such strategies (Towler, Keshwa, et al., 2021b). The role of holistic and featural processing in visual comparison performance remains an important avenue for future research.

These results have important applied implications. Whilst empirically based training for existing examiners is important to improve ongoing professional performance (Growns & Martire, 2020a), our results suggest that larger gains in performance could be achieved by selecting trainee examiners based on visual comparison ability. A similar approach has been used in applied domains: recruiting individuals with superior face recognition improves performance in real-world face identification tasks (Robertson et al., 2016; White, Dunn, et al., 2015). Professional performance in other forensic feature-comparison disciplines could likely be similarly improved by recruiting individuals with superior performance on a test battery of visual comparison tasks. Importantly, our results do not suggest that examiners would benefit from practicing outside of their primary domain of experience. Despite identifying a generalizable visual comparison ability, we also identified individual differences in stimulus-specific skills that suggest part of accurate visual comparison performance is domain specific.

As the participants in this study were untrained novices, it is unclear whether these results could generalize to practicing professionals. While investigating individual differences in the general population requires a novice sample, it is entirely plausible that a domain-general visual comparison mechanism may be diminished or negated for experts in this task as expertise is typically conceptualized as narrow and domain-specific (Charness et al., 2005; Ericsson, 2007, 2014). However, emerging evidence suggests domain-specific expertise may lend advantages to domain-general skill. For example, although facial examiners outperform fingerprint examiners in face comparison (i.e., facial examiners’ domain-specific expertise), fingerprint examiners outperform novices in the same task—despite it being outside their primary area of expertise (Phillips et al., 2018). Whether this domain-general advantage is developed alongside domain-specific expertise or is the result of preexisting individual differences in this ability will be an important avenue for future research.

This study provided the first evidence of a generalizable ability underpinning the capacity to compare or “match” different, complex visual stimuli. We demonstrated that the ability to compare stimuli such as faces, fingerprints, firearms, and artificial prints is in part due to a generalizable and domain-general ability—although subject to stimulus-specific constraints. These results have important theoretical and applied implications for both behavioural and forensic science. Importantly, test batteries of visual comparison tasks could be used to identify and recruit top-performing individuals to improve performance in forensic feature-comparison disciplines.

The net effect of traditional media on well-being is similar to that of social media: too close to zero to be perceived by users, or to have practical significance for people’s lived experience

No effect of different types of media on well-being. Niklas Johannes, Tobias Dienlin, Hasan Bakhshi & Andrew K. Przybylski. Scientific Reports volume 12, Article number: 61. Jan 6 2022. https://www.nature.com/articles/s41598-021-03218-7

Abstract: It is often assumed that traditional forms of media such as books enhance well-being, whereas new media do not. However, we lack evidence for such claims and media research is mainly focused on how much time people spend with a medium, but not whether someone used a medium or not. We explored the effect of media use during one week on well-being at the end of the week, differentiating time spent with a medium and use versus nonuse, over a wide range of different media types: music, TV, films, video games, (e-)books, (digital) magazines, and audiobooks. Results from a six-week longitudinal study representative of the UK population 16 years and older (N = 2159) showed that effects were generally small; between-person relations but rarely within-person effects; mostly for use versus nonuse and not time spent with a medium; and on affective well-being, not life satisfaction.


Discussion

New media like social networking sites allegedly exert an almost addictive effect on their users, whereas traditional media like books are considered a beneficial pastime. However, the alleged benefits of traditional media remain speculative without much evidence of their effects on well-being. We set out to deliver initial evidence of the broad, ‘net’ effect of a range of traditional media. First, we investigated media effects across a wide range of seven traditional media. Second, in a reciprocal analysis we separated within-person effects from between-person relations. Third, we treated use versus nonuse and time spent with a medium as different processes. Last, we analyzed data with a shorter time lag than most previous work, testing which facets of well-being are affected most by media use. Our findings provide little cause for alarm: Almost all differences were between users and nonusers on a stable between-person level, with small to negligible within-person effects in either direction. The few effects we found were comparable across media and largely on the (more volatile) affective well-being, rather than more stable life satisfaction.

Distinguishing use versus nonuse and time spent with a medium proved important. Most differences we observed were on the between-person level between users and nonusers. Likewise, the few small within-person effects incompatible with zero as the true effect occurred when a person went from not using a medium in one week to using a medium the next week. The time spent with a medium played a negligible role. In other words, our findings are not in line with the dominant linear dose–response model that (often implicitly) assumes that going from zero use to one minute of use has the same effect as going from one hour of use to one hour and one minute of use30,31. Instead, the decision to use a medium appears to represent a threshold; once a user crosses that threshold, the amount of time they spend with a medium is of little consequence for their well-being.
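The contrast between the two models can be sketched as two toy prediction functions (the coefficients are invented for illustration; neither function is the authors' fitted model):

```python
import math

# Toy contrast between a linear dose-response model and a threshold
# model of media use and well-being. Coefficients are invented.

def linear_dose_response(minutes, beta=-0.002):
    # Each additional minute shifts well-being by the same amount beta.
    return beta * minutes

def threshold_model(minutes, step=-0.3):
    # Only the transition from nonuse to any use matters; additional
    # time with the medium has no further effect.
    return step if minutes > 0 else 0.0

# Linear model: going from 60 to 61 minutes matters as much as 0 to 1.
assert math.isclose(linear_dose_response(61) - linear_dose_response(60),
                    linear_dose_response(1))
# Threshold model: once the threshold is crossed, extra time adds nothing.
assert threshold_model(1) == threshold_model(120)
assert threshold_model(0) == 0.0
```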

This conclusion almost exclusively applies to the between-person level: Media users (i.e., those who have crossed the threshold) in general feel slightly worse than nonusers (i.e., those who have not crossed the threshold). However, those differences were around a third of a point on an eleven-point scale. Such an effect is likely too small to have practical significance for people’s lived experience45,46,47. On the within-person level, going from nonuse to use had generally small effects across media. The effects of time spent with a medium were even smaller. Our results speak against pronounced causal effects—neither positive nor negative—of media use during the week on well-being by the end of the week. The pattern of small between-person relations but negligible within-person effects aligns with previous research on new media8,9,23.

There were no substantial differences across the seven traditional media types we studied. (E-)book and (digital) magazine readers as well as audiobook listeners did not experience less affective well-being, unlike those engaging with music, TV, films, and games. That finding applies in both directions: Those with lower well-being were more likely to engage with these media. However, those differences all but disappeared on the within-person level, with most effect sizes close to a null effect. Only TV and music use versus nonuse on the within-person level showed a small positive effect on affect. Together, the results stand in contrast to public opinion, where traditional media are valued highly1,48. It appears the broad, net effect of traditional media is similar to that of social media: too close to zero to be perceived by media users45.

Our study also addresses the choice of time lag and well-being indicator. Media effects are typically small49 and it is unlikely that media use will affect long-term evaluations of people’s lives16. If anything, media use should influence short-term affect. Our results deliver weak evidence that this distinction also applies to traditional media. The few differences we observed appeared almost exclusively on the more volatile positive affect, not stable life satisfaction. These results align well with research that shows little to no long-term effects of new media on life satisfaction9,27,28,35. We deliver evidence that traditional media are unlikely to impact life satisfaction within the intermediate time frame of one week that we studied. At the same time, the few effects on affect were small, similar to research on social media with much shorter time lags34,38,39. Either we missed the optimal time lag after which the effects disappeared40 or net effects of traditional media are indeed negligible.

What do our results mean? The straightforward answer is: The effect of traditional media on well-being is too small to matter. However, such an answer might overlook important nuance. First, throughout this manuscript, we have spoken of between-person relations, but of within-person effects. As we have noted, within-person relations can be effects under the assumption that there are no time-varying confounders. Therefore, what we call effects is causal only under that assumption20,21,50. There might well be time-varying factors that mask a true effect51. For example, spending time using media may have a negative effect on well-being which gets balanced out by an indirect positive effect via less time worrying. Similarly, a stable confounder (e.g., employment status) might drive the small negative between-person relation. Alternatively, people who do not feel well might indeed be more inclined to pick up a new medium as a mood management strategy52.

Second, we only investigated the broad, net effect of traditional media. We did not assess what content people engaged with or what their motivation for use was. Although we believe such net effects are important to investigate as a first step, they may mask important interactions between content and user motivations31,48,53. Therefore, even though within-person effects of traditional media are small, they may be meaningful under certain conditions54. Such an argument aligns with research which found noteworthy variation in the effect of social media34. Third, we looked at an intermediate time lag of one week, which might have missed the effect. Therefore, to revise the answer from above: Under our assumptions of causality, the broad, net effect of traditional media during the week on well-being at the end of the week is likely too small to matter.


Limitations

Besides the questions of causality and scope of media use, there are several limitations to our study. The self-reported estimates of time spent with a medium we relied on are almost certainly a noisy measure of true media engagement55,56. In addition to that noise, the measures also reminded participants of their response in the previous week. That reminder might have reduced variance or introduced bias. By contrast, we believe self-reports of use versus nonuse in a one-week period have lower measurement error, simply because retrieving exact estimates of a behavior is more bias-prone than a dichotomous yes/no judgment. We call for more research directly measuring media use. Similarly, although they displayed decent psychometric properties, the well-being measures in the data set were not validated. The measure of affect in particular referred to affective well-being on the previous day, not the previous week. Although it allows a sensible test of the cumulative effect of media use during the week on well-being at the end of that week, the opposite direction is less plausible: Affect at the beginning of the week might not be strong enough to influence media use during the week that follows. Most important, we did not assess social media use, which prevents us from a direct comparison of the effects of traditional media versus new media. Although our results fit into the larger picture of the literature, a direct comparison will be more informative.