Thursday, February 11, 2021

New York City/New Jersey subway average PM2.5 levels are 65 times greater than EPA standards, worse than the worst Chinese city

PM2.5 Concentration and Composition in Subway Systems in the Northeastern United States. David G. Luglio, Maria Katsigeorgis, Jade Hess, Rebecca Kim, John Adragna, Amna Raja, Colin Gordon, Jonathan Fine, George Thurston, Terry Gordon, and M.J. Ruzmyn Vilcassim. Environmental Health Perspectives, February 10 2021. https://doi.org/10.1289/EHP7202

Abstract

Objectives: The goals of this study were to assess the air quality in subway systems in the northeastern United States and estimate the health risks for transit workers and commuters.

Methods: We report real-time and gravimetric PM2.5 concentrations and particle composition from area samples collected in the subways of Philadelphia, Pennsylvania; Boston, Massachusetts; New York City, New York/New Jersey (NYC/NJ); and Washington, District of Columbia. A total of 71 stations across 12 transit lines were monitored during morning and evening rush hours.

Results: We observed variable and high PM2.5 concentrations for on-train and on-platform measurements during morning (from 0600 hours to 1000 hours) and evening (from 1500 hours to 1900 hours) rush hour across cities. Mean real-time PM2.5 concentrations in underground stations were 779±249, 548±207, 341±147, 327±136, and 112±46.7 μg/m3 for the PATH-NYC/NJ; MTA-NYC; Washington, DC; Boston; and Philadelphia transit systems, respectively. In contrast, the mean real-time ambient PM2.5 concentration taken above ground outside the subway stations of PATH-NYC/NJ; MTA-NYC; Washington, DC; Boston; and Philadelphia were 20.8±9.3, 24.1±9.3, 12.01±7.8, 10.0±2.7, and 12.6±12.6 μg/m3, respectively. Stations serviced by the PATH-NYC/NJ system had the highest mean gravimetric PM2.5 concentration, 1,020 μg/m3, ever reported for a subway system, including two 1-h gravimetric PM2.5 values of approximately 1,700 μg/m3 during rush hour at one PATH-NYC/NJ subway station. Iron and total carbon accounted for approximately 80% of the PM2.5 mass in a targeted subset of systems and stations.

Discussion: Our results document that there is an elevation in the PM2.5 concentrations across subway systems in the major urban centers of Northeastern United States during rush hours. Concentrations in some subway stations suggest that transit workers and commuters may be at increased risk according to U.S. federal environmental and occupational guidelines, depending on duration of exposure. This concern is highest for the PM2.5 concentrations encountered in the PATH-NYC/NJ transit system. Further research is urgently needed to identify the sources of PM2.5 and factors that contribute to high levels in individual stations and lines and to assess their potential health impacts on workers and/or commuters.

Discussion

Our measurements and analyses reveal variable and, in places, very high PM2.5 exposures of commuters and transit workers in the underground subway systems of northeastern U.S. cities. The most extreme exposure, identified in a subway station on the PATH system (serving NJ and NYC), was higher than the previously published values for any subway station in the world (Martins et al. 2016; Moreno et al. 2017; Qiu et al. 2017; Van Ryswyk et al. 2017; Xu and Hao 2017; Lee et al. 2018; Minguillón et al. 2018; Mohsen et al. 2018; Moreno and de Miguel 2018; Choi et al. 2019; Loxham and Nieuwenhuijsen 2019; Pan et al. 2019; Shen and Gao 2019; Velasco et al. 2019; Smith et al. 2020), with a mean gravimetric PM2.5 concentration greater than 1,000 μg/m3 (Figure 1). The MTA-serviced subway stations in Manhattan also had poor air quality, with an adjusted real-time mean±SD PM2.5 concentration of 548±207 μg/m3.

Our particle measurements were similar to those measured previously in MTA-NYC stations with high PM2.5 levels (Vilcassim et al. 2014) and much greater than aboveground ambient PM2.5 levels [it must be noted that the MTA-NYC subway stations monitored in the present study were a biased sample, chosen because of the high PM2.5 levels reported in Vilcassim et al. (2014)]. Thus, during rush hour, the underground subway stations targeted in NYC/NJ’s MTA and PATH subway systems had significantly worse air quality, in terms of PM2.5, than the targeted subway stations in Boston, Philadelphia, and Washington, DC. Philadelphia’s subway stations, for example, had better air quality, although the mean real-time PM2.5 concentration was still several fold greater than the mean ambient PM2.5 concentration measured outside the Philadelphia subway stations. We also cannot rule out spurious differences due to uncontrolled sources of variation related to sampling. However, our findings clearly indicate that PM2.5 concentrations in underground stations and measured on subway trains are much greater than aboveground ambient PM2.5 levels, at least during rush hour periods. Moreover, we measured extremely high concentrations in individual underground stations in the MTA (NYC) and PATH (NYC/NJ) subway systems that, even if they represent extreme levels for these stations, raise serious health concerns and warrant additional investigation, and underground PM2.5 concentrations were consistently higher than mean ambient PM2.5 concentrations. Thus, our findings suggest that, at least in the northeastern U.S. transit systems included in our study, commuters are exposed to poor air quality during their time spent in underground subway stations. Moreover, exposures in at least some underground stations may be high enough to increase the risk of the adverse health effects associated with PM2.5, even if they occur for relatively short periods of time.

It should be noted that most subway air pollution studies have relied on real-time data collected with light-scattering instruments (Xu and Hao 2017) that have been factory calibrated, in the traditional manner, with Arizona road dust (Curtis et al. 2008; Wang et al. 2016). Despite their many advantages (e.g., real-time data, autocorrection for temperature and RH), the output of real-time PM2.5 instruments can be affected by particle composition, shape, and water content, all physical factors that will variably affect light scattering. In the present study, we compared real-time and gravimetric PM2.5 concentrations during simultaneous 30- to 60-min sampling sessions conducted in the targeted subway systems (except MTA-NYC and the LIRR) and found, overall, that gravimetric values were 2–4 times greater than what was measured with the real-time light-scattering device. This ratio is much higher than what has been reported for other environments and dust types (Wu et al. 2005; Wang et al. 2016, 2018; Patts et al. 2019), and this difference is most likely due to the large (e.g., as high as 60% of the total PM2.5 mass) contribution of iron, a dense metal, to the airborne PM2.5 in the targeted subway systems. Therefore, we adjusted our real-time PM2.5 data with a correction factor. This real-time/gravimetric ratio issue should likewise be considered when interpreting health risks using published data from air quality studies of subway systems conducted throughout the world. Note that most of the samples collected at underground stations in the present study were selected because they had the highest estimated real-time PM2.5 concentrations in each system.
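
To make the adjustment concrete, here is a minimal sketch (not the authors' code; the paired sampling values are hypothetical placeholders) of how a gravimetric-to-real-time correction factor could be derived from co-located samples and applied to raw light-scattering readings:

```python
# Hypothetical sketch of deriving and applying a gravimetric correction factor
# to light-scattering PM2.5 readings; the paired values below are placeholders,
# not data from this study.

paired_samples = [
    # (gravimetric_ug_m3, realtime_ug_m3) from co-located 30- to 60-min sessions
    (900.0, 300.0),
    (650.0, 240.0),
    (1200.0, 400.0),
]

# System-wide correction factor: mean ratio of gravimetric to real-time mass.
correction_factor = sum(g / r for g, r in paired_samples) / len(paired_samples)

def adjust(realtime_ug_m3: float) -> float:
    """Scale a raw light-scattering reading by the gravimetric correction factor."""
    return realtime_ug_m3 * correction_factor

if __name__ == "__main__":
    raw = 300.0  # hypothetical raw real-time reading, ug/m3
    print(f"correction factor ~= {correction_factor:.2f}")
    print(f"adjusted PM2.5 ~= {adjust(raw):.0f} ug/m3")
```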

One of the highest unadjusted real-time mean subway system concentrations previously reported was 265 μg/m3 in Suzhou, China (Cao et al. 2017), whereas Seaton et al. (2005) and Smith et al. (2020) observed real-time, dust-type-calibrated PM2.5 concentrations in a few stations in London, UK, that approached what was observed in PATH stations (Table S2), with a maximum 30-min mean concentration of 480 μg/m3 at one London station. Notably, Smith et al. (2020) observed a single 1-min peak of 885 μg/m3. The high pollution levels measured in London’s subways did not reach the upper range of the PM2.5 levels in the PATH subway stations, particularly the Christopher Street Station, which had a maximum 1-h gravimetric PM2.5 concentration of 1,780 μg/m3 during rush hour. The gravimetric PM2.5 concentrations measured at Newport Station, however, were more consistent with the peak values estimated in Smith et al. (2020). Comparison of our underground and ambient PM2.5 data strongly suggests that ambient PM2.5 is not a likely source of the high PM2.5 levels observed in NYC’s underground subway stations and that other contributors, such as the continual grinding of the train wheels against the rails, the electricity-collecting shoes, and diesel soot emissions from maintenance locomotives, are important sources.

The contribution of TC to the PM2.5 mass concentration varied considerably between the two underground stations sampled in each of three transit systems (Table 3). TC constituted 6% of the particle composition in the PATH-NYC/NJ stations, whereas it composed 39% and 22% of PM2.5 in Boston and Washington, DC, respectively (Figure S4). Even within a single urban transit system, TC concentrations varied between stations, as observed in Boston’s Government Center–Blue Line (177 μg/m3) and Broadway stations (70.6±24.7 μg/m3), albeit based on one and two samples, respectively. Broadway is an older station on the Red Line, whereas Government Center is a much larger station with separate Blue and Green Line platforms and was renovated in the summer of 2016. Notably, TC, made up primarily of the estimated OC component, dominated the Government Center–Blue Line aerosol, although the significance of this is unclear and further investigation into the sources of PM2.5 and the role of the mechanical design (e.g., ventilation) of each station is needed. There was also relatively little EC (or the roughly equivalent BC2.5) present in any of the six underground subway stations, an unexpected finding given the emphasis that multiple papers (Vilcassim et al. 2014; Choi et al. 2019) have placed on inorganic carbon species. A plausible source of EC would be diesel combustion in subway systems, for example, from diesel maintenance trains that operate in the MTA system. However, these trains are typically active only at night, and therefore their contribution to the composition of subway PM2.5 is unclear.

Iron accounted for the largest fraction of PM2.5 in the targeted subway stations, and frictional forces between the train wheels and rails and between the collection shoes and the third rail may account for this finding. The relative concentration of other elements was observed to vary among subway systems, suggesting that other sources (e.g., silicon as a marker for crustal material; arsenic as a marker for rodenticides) contribute to the airborne particles encountered by transit workers and commuters in subway stations. A previous report on PM composition in MTA stations in Manhattan agrees with the present findings: Chillrud et al. (2004) found similar ratios of iron/manganese and chromium/manganese concentrations (i.e., components of different grades of steel), although some of the trace element concentrations in the present study are many times higher than those reported by Chillrud et al. Although other studies have documented low concentrations of elements other than iron and carbon in subways (Minguillón et al. 2018; Lee et al. 2018), results in Shanghai, China, found that aluminum, silicon, and calcium made up more than 30% of PM2.5 (Lu et al. 2015a), suggesting that ambient soil particles can contribute to subway PM. Similarly, in Beijing subways, the iron concentration was outweighed by aluminum, potassium, sodium, calcium, magnesium, zinc, and barium (Pan et al. 2019). Thus, significant differences in PM composition exist among underground subway systems across the globe, and it is likely that these differences result from source contributors that vary among systems.

Our results demonstrate considerable variability in the air quality that transit workers and commuters may encounter in the subway stations of major cities in the northeastern United States. Not only does the PM2.5 concentration vary among stations and cities, but so does the elemental composition of the PM2.5. Previous studies have demonstrated that underground depth (Vilcassim et al. 2014; Figueroa-Lara et al. 2019), station volume, age (Van Ryswyk et al. 2017), and ventilation (Martins et al. 2016) all affect aerosol loading. Therefore, these subway system- and station-dependent differences were not unexpected in the present study. It is interesting to note that Martins et al. (2016) showed that more recently built stations do not necessarily have better air quality: the stations established in 2002 and 2009 in Oporto, Portugal, and Athens, Greece, had higher PM2.5 aerosol concentrations than a station built in 1983 in Barcelona, Spain. Nevertheless, there is evidence that common methods of reducing airborne PM are effective (Park et al. 2019), such as cleaning stations more often (Chen et al. 2017), improving ventilation (Moreno et al. 2014, 2017), using particle removal systems, and installing shields that separate track-generated particles from boarding passengers (Guo et al. 2014).

Study Limitations

In the present study, there were relatively few interline differences in air quality among the subway lines within each city (Figure S2 and Table S1). One exception was the Blue Line, which exhibited the highest PM2.5 concentration of the three targeted Boston subway lines. Because the Blue Line is the most recently built of the three targeted Boston lines and would presumably have the best ventilation design (based on our subjective observations), this finding was unexpected. A limitation of this observation, however, is that the two Blue Line stations we sampled were high train-traffic areas. A similar intrasubway-system difference was also observed in NYC/NJ’s PATH system, where the mean PM2.5 level on the 33rd Street line was significantly greater than that on the World Trade Center line. As noted above, we did not collect information on all potential factors that might explain differences in air quality among different stations, subway lines, or transit systems.

We compared several transit systems using data collected at similar times of the day and generally within the same season. Thus, we have not cataloged the total potential variation of PM2.5 in each system. In particular, we sampled during a small number of days in Boston and Washington, DC, and our data generally represent only summer conditions. Thus, we do not know if subway air quality changes significantly by season or over time, although Van Ryswyk et al. (2017) have shown that the Toronto, Ontario, Canada, metro had higher PM2.5 concentrations than Montréal, Quebec, regardless of season. Another study limitation is that our PM2.5 and BC2.5 sample sizes for each station were relatively small (n=4 for most stations). Although this study design allowed us to compare subway systems, we lacked sufficient power to compare PM2.5 concentrations among individual station platforms within a city’s transit system. Nonetheless, certain stations were clearly more polluted than others. PM2.5 concentrations in the underground Christopher Street Station (PATH-NYC/NJ) were the highest among the 71 northeastern U.S. underground stations included in our study and, to our knowledge, were higher than any levels reported for any subway system across the globe (Martins et al. 2016; Moreno et al. 2017; Qiu et al. 2017; Van Ryswyk et al. 2017; Xu and Hao 2017; Lee et al. 2018; Minguillón et al. 2018; Mohsen et al. 2018; Moreno and de Miguel 2018; Choi et al. 2019; Loxham and Nieuwenhuijsen 2019; Pan et al. 2019; Shen and Gao 2019; Velasco et al. 2019; Smith et al. 2020).

In addition, it must be noted that the methodologies used to assess BC, OC, and EC were developed for measurement of PM composition collected under ambient outdoor conditions. The measurement of these carbon components was hampered, however, by the presence of iron compounds in amounts far larger than those found in ambient PM. As demonstrated by other investigators, the dark color of some iron compounds can interfere with the reflectance measurement of BC, and the chemistry of the iron compounds found in subway PM shifts the transition demarcation between OC and EC in the thermal ramp used by the Sunset Instrument analyzer. Therefore, we chose to present our data as TC (total carbon) concentrations to avoid the latter limitation. Regardless, the potential for underestimation of OC (i.e., caused by high levels of iron in PM2.5 collected on quartz filters in the subways; Figure S4) does not lessen the importance of OC as a major component of the PM2.5 collected in several subway stations.

Implications

The key issue with underground subway exposures is whether there is a significant increase in the risk for adverse cardiovascular and respiratory outcomes, given the well-documented association between PM2.5 and adverse health effects. With one notable exception, the PM2.5 concentrations measured in subway stations during morning and evening rush hours were generally 2 to 7 times the U.S. EPA’s 24-h ambient air standard of 35 μg/m3. The one exceptional station (Christopher Street Station) on the PATH subway line connecting NJ to lower Manhattan had a maximum 1-h PM2.5 concentration of 1,780 μg/m3, with a mean gravimetric concentration of 1,254 μg/m3 (n=4) (Table S3). If we assume that commuters are exposed to this level of PM2.5 for a typical 15-min total time (from/to home) spent on a subway platform and to 100 μg/m3 for two 20-min rides on the PATH subway trains each day, then a commuter’s 24-h mean PM2.5 exposure concentration would increase from an assumed daily mean of 7.7 μg/m3 (for the NYC metropolitan area; U.S. EPA 2020b) to 26.1 μg/m3. Given an association of a 6% increase in relative risk for each 10 μg/m3 increase in long-term (e.g., annual average) PM2.5 (Pope et al. 2004), this exposure scenario suggests that a typical commuter would be at an 11% increased risk for cardiovascular mortality. However, this calculation assumes that the toxicity of underground subway PM2.5 is similar to that of ambient PM, which is uncertain in the absence of much-needed subway–health studies. It must be emphasized that this increase in individual risk for daily commuters differs from that for transit workers, who spend considerably longer periods of time on the subway platforms (e.g., 8-h work shifts). The impact of exposure on transit workers is unclear because, although they would be exposed to significantly greater accumulated PM2.5 doses (i.e., increased exposure time and breathing rates), workers are often considered “healthy,” and the most relevant applicable occupational exposure guidelines are for larger-diameter respirable dust, defined as PM4.0 (OSHA’s Permissible Exposure Limit of 5 mg/m3 and ACGIH’s threshold limit value of 3 mg/m3). In conclusion, these findings of poor air quality in subway systems should prompt further investigation of the levels, sources, composition, and human health effects of PM2.5 pollution in subway systems. However, even in the absence of such data, the results of our study already indicate that the Precautionary Principle (Science for Environment Policy 2017) would call for mitigation efforts, such as improved ventilation, to protect the tens of thousands of subway workers and millions of daily commuters from potentially unwarranted health risks.
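
As a rough check on that arithmetic, the sketch below works through the time-weighted-average and relative-risk calculation using the durations and concentrations stated above (platform time at the 1,254 μg/m3 station mean, train rides at 100 μg/m3, ambient at 7.7 μg/m3, and 6% added risk per 10 μg/m3). It is an illustration rather than the authors' exact calculation; the result depends on which platform concentration and baseline assumptions are used, so it approximates rather than reproduces the paper's 26.1 μg/m3 figure.

```python
# Illustrative time-weighted-average (TWA) exposure and relative-risk sketch
# for the commuter scenario described above; not the authors' exact calculation.

MIN_PER_DAY = 24 * 60

# (duration_min, concentration_ug_m3) for the in-transit portions of the day
transit_segments = [
    (15, 1254.0),  # total daily platform time, assumed at the station mean
    (40, 100.0),   # two 20-min rides on PATH trains
]
ambient = 7.7      # assumed NYC-area daily mean, ug/m3 (U.S. EPA 2020b)

transit_min = sum(d for d, _ in transit_segments)
twa = (
    sum(d * c for d, c in transit_segments)
    + (MIN_PER_DAY - transit_min) * ambient
) / MIN_PER_DAY

increment = twa - ambient
risk_increase_pct = increment / 10.0 * 6.0  # 6% per 10 ug/m3 (Pope et al. 2004)

print(f"24-h time-weighted mean: ~{twa:.1f} ug/m3")
print(f"increment over ambient:  ~{increment:.1f} ug/m3")
print(f"added cardiovascular mortality risk: ~{risk_increase_pct:.0f}%")
```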

What do people “see” with retinal prostheses? Fundamentally, qualitatively different from natural vision; all recipients invoked electrical stimuli to describe the appearance of their percepts; the process never ceased to be cognitively fatiguing

Erickson-Davis C, Korzybska H (2021) What do blind people “see” with retinal prostheses? Observations and qualitative reports of epiretinal implant users. PLoS ONE 16(2): e0229189. https://doi.org/10.1371/journal.pone.0229189

Abstract

Introduction: Retinal implants have now been approved and commercially available for certain clinical populations for over 5 years, with hundreds of individuals implanted, scores of them closely followed in research trials. Despite these numbers, however, few data are available that would help us answer basic questions regarding the nature and outcomes of artificial vision: what do recipients see when the device is turned on for the first time, and how does that change over time?

Methods: Semi-structured interviews and observations were undertaken at two sites in France and the UK with 16 recipients who had received either the Argus II or IRIS II devices. Data were collected at various time points in the process that implant recipients went through in receiving and learning to use the device, including initial evaluation, implantation, initial activation and systems fitting, re-education and finally post-education. These data were supplemented with data from interviews conducted with vision rehabilitation specialists at the clinical sites and clinical researchers at the device manufacturers (Second Sight and Pixium Vision). Observational and interview data were transcribed, coded and analyzed using an approach guided by Interpretative Phenomenological Analysis (IPA).

Results: Implant recipients described the perceptual experience produced by their epiretinal implants as fundamentally, qualitatively different than natural vision. All used terms that invoked electrical stimuli to describe the appearance of their percepts, yet the characteristics used to describe the percepts varied significantly between recipients. Artificial vision for these recipients was a highly specific, learned skill-set that combined particular bodily techniques, associative learning and deductive reasoning in order to build a “lexicon of flashes”—a distinct perceptual vocabulary that they then used to decompose, recompose and interpret their surroundings. The percept did not transform over time; rather, the recipient became better at interpreting the signals they received, using cognitive techniques. The process of using the device never ceased to be cognitively fatiguing, and did not come without risk or cost to the recipient. In exchange, recipients received hope and purpose through participation, as well as a new kind of sensory signal that may not have afforded practical or functional use in daily life but, for some, provided a kind of “contemplative perception” that recipients tailored to individualized activities.

Conclusion: Attending to the qualitative reports of implant recipients regarding the experience of artificial vision provides valuable information not captured by extant clinical outcome measures.


Discussion

We undertook ethnographic research with a population of retinal prosthesis implant recipients and vision rehab specialists, documenting the process of getting, learning to use and living with these devices.

We found that the perceptual experience produced by these devices is described by recipients as fundamentally, qualitatively different than natural vision. It is a phenomenon they describe using terms that invoke electric stimuli, and one that is ambiguous and variable across, and sometimes within, recipients. Artificial vision for these recipients is a highly specific learned skillset that combines particular bodily techniques, associative learning and deductive reasoning to build a “lexicon of flashes”—a distinct perceptual vocabulary—that they then use to decompose, recompose and interpret their surroundings. The percept does not transform over time; rather, the recipient learns to better interpret the signals they receive. This process never ceases to be cognitively fatiguing and does not come without risk or cost to the recipient. In exchange, recipients can receive hope and purpose through participation, as well as a new kind of sensory signal that may not afford practical or functional use in daily life but, for some, provides a kind of “contemplative perception” that recipients tailor to individualized activities. We expand on these findings below to explore what they mean for the development and implementation of these devices, as well as for our understanding of artificial vision as a phenomenon.

What does it mean that the recipients describe artificial vision as being fundamentally, qualitatively “different” than natural vision? We believe that acknowledging that artificial vision is a unique sensory phenomenon might not only be more accurate, but may also open up new avenues of use for these devices. Artificial vision may be considered “visual” in terms of being similar to what recipients remember of the experience of certain kinds of light, as well as by offering the possibility of understanding features of the environmental surround at a distance. That being said, artificial vision was also described as both qualitatively and functionally different from the “natural” vision the recipients remember. It is in this way that the sensory experience provided by these devices could be viewed less as a restoration or replacement and more as a substitution; that is, as offering an entirely different or novel sensory tool. By shifting from the rhetoric of replacement or restoration to that of substitution, we believe researchers and rehabilitation specialists could widen the bounds in which they think and operate with regard to how these devices are designed and implemented, potentially liberating a whole new spectrum of utility through the novel sensations these devices produce. Likewise, this shift could change the expectations of individuals receiving these devices, including addressing the initial disappointment that many of our recipients expressed when they encountered just how different the signals were from what they were expecting.

Second, acknowledging artificial vision as a unique sensory phenomenon also helps us understand the importance of qualitative description. The process of learning to use the device is a cooperative process between the rehabilitation specialist and the recipient, with the specialist guiding the recipient to attend to their perceptual experience and interpret it in specific ways. This process begins with the recipient learning to recognize how the basic unit of artificial vision—the phosphene—appears for them, and then describing that to the rehab specialist. The specialist then uses this information to guide the recipient in learning how the phosphenes correspond to features of the environment. It is a continuous and iterative communicative practice between the recipient and specialist that evolves over many months, during which stimuli are encountered, the recipient responds, and the specialist gives corrective or affirming feedback (with more or less description by the recipient and guidance by the specialist depending on the dynamic and need). The process is so specific to the dynamic between recipient and specialist that it can be considered to be “co-constructed” within their interactions.

Because each recipient’s qualitative experience is so distinct (phosphenes differ significantly between recipients, so that no two recipients’ perceptual experiences are alike [60]), each process is tailored to the individual recipient by specific specialists. We found that certain vision rehabilitation specialists inquire in more depth about a recipient’s qualitative experience than others, using different methods, styles and techniques, and this can result in a different experience—and thus outcome—for the recipients. Our findings are based on reports captured either by directly asking the recipients about their experience or by observing descriptions that were part of the rehabilitation process but that were by and large not recorded by the specialists or relayed back to the companies, early-stage researchers, or the individuals being implanted. That is, we found that there is no protocol in place for capturing or sharing recipients’ qualitative reports, including within the companies (between various clinical sites). Yet these kinds of data are essential to understanding these devices as well as to learning about artificial vision more generally, and thus deserve careful consideration by researchers and clinicians who are developing and implanting these and similar devices, as well as by individuals and their families who are considering receiving them.

The better vision rehabilitation specialists are able to understand the recipient’s qualitative experience, the better they can assist them in learning to use the device. The more that early-stage researchers understand about how the parameters of the device correspond to perceptual experience, the better they are able to optimize design and implementation strategies. Finally, communicating these data to individuals and their families who are considering being implanted is essential. It would contribute to a more accurate understanding of the qualitative experience and process they are signing on for, and thus is an important part of informed consent. It would also help to address certain psychosocial difficulties we found recipients to experience. For instance, we found that the recipient’s percept does not change over time—instead, the recipient becomes better able to interpret the signal received using cognitive techniques. It is a subtle distinction, but a profound one in terms of conditioning expectations around these devices—both of the researchers and of the recipients. We found that the current rhetoric employed by researchers and vision rehab specialists regarding neuroplasticity and the ability of recipients to transform the signal with enough practice has created a situation in which failure of the recipient to significantly transform the signal over time is perceived as a failure of the recipient (behaviorally, where the recipient is deemed to have insufficiently practiced using the device, and/or physiologically, where the problem is located within the recipient’s eye or visual system). By shifting the expectation so that it is not the percept itself, but the recipient’s ability to use the percept, that is expected to improve over time, one can potentially avoid and address the psychosocial distress that we found some recipients experienced as a result.

This study had several limitations, first and foremost the number of recipients, which was constrained by small study populations and the availability of recipients. Future studies of these devices would do well to include similar qualitative reports from recipients, either as a primary focus or as a supplement to other outcome measures (i.e., as “mixed methods” studies that combine qualitative and quantitative methodologies). In addition, qualitative reports are only one type of data and are not meant to replace other forms of data being collected on these devices. Rather, we believe they deserve special attention because they have been heretofore neglected in the literature despite their potential to provide valuable information not captured by normative functional outcome measures. Qualitative data about recipients’ perceptual experience can both inform device design and rehabilitative techniques and grant a more holistic understanding of the phenomenon of artificial vision. In addition to contributing to the larger body of work on visual prostheses, this study serves as a case example of the kind of data mobilized by qualitative, ethnographic methodology—in particular phenomenological inquiry—in the study of brain-machine interface devices.

These results indicate that despite dexterity and visual constraints, pigs have the capacity to acquire a joystick-operated video-game task

Acquisition of a Joystick-Operated Video Task by Pigs (Sus scrofa). Candace C. Croney and Sarah T. Boysen. Front. Psychol., February 11 2021. https://doi.org/10.3389/fpsyg.2021.631755

Abstract: The ability of two Panepinto micro pigs and two Yorkshire pigs (Sus scrofa) to acquire a joystick-operated video-game task was investigated. Subjects were trained to manipulate a joystick that controlled movement of a cursor displayed on a computer monitor. The pigs were required to move the cursor to make contact with three-, two-, or one-walled targets randomly allocated for position on the monitor, and a reward was provided if the cursor collided with a target. The video-task acquisition required conceptual understanding of the task, as well as skilled motor performance. Terminal performance revealed that all pigs were significantly above chance on first attempts to contact one-walled targets (p < 0.05). These results indicate that despite dexterity and visual constraints, pigs have the capacity to acquire a joystick-operated video-game task. Limitations in the joystick methodology suggest that future studies of the cognitive capacities of pigs and other domestic species may benefit from the use of touchscreens or other advanced computer-interfaced technology.

Discussion

Overall, all pigs performed significantly above chance on one-walled targets, which indicates that, to some extent, all acquired the association between the joystick and cursor movement. That the pigs achieved the level of success they did on a task that was significantly outside their normal frame of reference is in itself remarkable, and indicative of their behavioral and cognitive flexibility. Their high level of social motivation to perform the task was also noteworthy. Although food rewards associated with the task were likely a motivating factor, the social contact the pigs experienced with their trainer also appeared to be very important. Occasionally, during some sessions, equipment failures resulted in non-reward following correct responses. On these occasions, the pigs continued to make correct responses when rewarded only with verbal and tactile reinforcement from the experimenter, who was also their primary caretaker. Additionally, during times when the task demands seemed most challenging for the pigs and resulted in reluctance to perform, only verbal encouragement by the experimenter was effective in resuming training. This may have been due to the strong bond the pigs developed with the experimenter during training, which would support the assertion of Boysen (1992) that the human-animal bond is a crucial element in the success of animals used in studies of comparative cognition.
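
For readers curious how an above-chance comparison like the one cited at the start of this paragraph might be run, the sketch below applies a one-sided binomial test. The paper does not specify its statistical test, and the trial counts and the 0.25 chance probability used here are purely hypothetical.

```python
# Hypothetical sketch of testing first-attempt accuracy on one-walled targets
# against chance with a one-sided binomial test. The numbers below (trials,
# hits, and the assumed 0.25 chance level) are illustrative, not study data.
from scipy.stats import binomtest

n_trials = 80    # hypothetical number of one-walled trials for one pig
n_correct = 30   # hypothetical first-attempt hits on the correct wall
chance_p = 0.25  # assumed chance of contacting the correct wall by accident

result = binomtest(n_correct, n_trials, chance_p, alternative="greater")
print(f"observed accuracy: {n_correct / n_trials:.2f}")
print(f"one-sided p-value vs. chance: {result.pvalue:.4f}")
```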

It should be noted that despite performing above chance on the SIDE task, even the pig that performed best did not approach the level attained by non-human primates that acquired the task after a comparable number of trials (see Hopkins et al., 1996). Indeed, none of the pigs was able to meet the criteria of Hopkins et al. (1996) for demonstrating motoric or conceptual acquisition of the SIDE task. There are several possible explanations for the pigs’ failure to meet these criteria. First, the criteria were established for dexterous primates (rhesus monkeys and chimpanzees), although no clear rationale was provided for their adoption. Thus, it was difficult to know how to adapt those criteria for pigs, taking into account their more limited perceptual and motor capabilities, which clearly differ from those of primates. For example, the visual demands of the task may have been particularly problematic for the pigs, since we had previously established that all four subjects were far-sighted. As sufficient visual capability is a prerequisite for successful completion of a joystick-operated video-game task, and despite attempts to position the computer monitor appropriately, it is impossible to know how well the pigs were able to see, and subsequently correctly discriminate between, targets. Furthermore, because of the positioning of the pigs’ eyes relative to their snouts, they were often forced to watch the screen prior to moving the joystick and then check their progress after cursor movement was initiated. This artifact of the pigs’ anatomy likely contributed to some of their errors because, in order to succeed, they not only needed dexterity and conceptual understanding of the task, but perhaps also short-term or working memory (which is not well understood in pigs) of the target position locations.

In addition, the pigs’ limited dexterity no doubt constrained their performance. Because the joystick-operated video-task paradigm was initially designed for use by non-human primates with great manual dexterity, modifications to the equipment were necessary so that the pigs could use their snouts to manipulate the joystick. However, the pigs’ ability for such manipulation was restricted to their normal range of head and neck movements. This limitation appeared particularly troublesome for the Yorkshire pigs whose larger size also constrained their ability to reposition themselves as needed to contact targets located in the horizontal plane. Thus, it was not surprising that the Yorkshire pigs performed better on vertical plane movements, which are more frequently seen in their normal behavioral repertoire during routine activities such as rooting. In fact, when faced with left or right targets, the Yorkshire subjects were often observed to alter their stance so that they were parallel to the computer screen. This way, they could approach horizontal targets in the same way they did for those in the vertical plane. Because of their small size, the micro pigs were better able to reorient themselves as needed to view the computer monitor and complete horizontal plane movements. This flexibility likely resulted in better performance in both planes and may have contributed to their superior performance compared to the Yorkshire subjects. Ebony and Ivory’s smaller size also enabled them to be maintained in the laboratory for a much longer period for training and testing (15 months) than the Yorkshire pigs. Thus, they were afforded the opportunity to continue training, thereby contributing to their improved performance on the SIDE task. Consequently, their terminal performance was much better than the Yorkshire pigs that were trained for only 10 weeks on the same task.

Additional problems that may have been attributable to dexterity limitations were observed when the pigs were unable to completely move the cursor toward a target wall and finish the trial, simply because of the angle at which the cursor approached the target. On these occasions, the pigs often nosed the joystick to move the cursor back out of the target wall and then altered the angle at which they approached the target. However, in doing so, they sometimes contacted an incorrect wall, resulting in reduced accuracy on their first cursor attempts. Further, when the pigs were unable to make contact with a horizontal target, they often resorted to strategies that allowed them to move the cursor upward, then down into the correct left or right wall. These responses were consistently observed, particularly for Hamlet and Omelet, who systematically responded with a series of movements that resembled an “inverted v” when faced with right or left targets. The resultant asymmetry in the pigs’ performance relative to target position is similar to that observed in rhesus monkeys (Hopkins et al., 1996). In comparing the performance of rhesus monkeys to chimpanzees on the SIDE task, Hopkins et al. (1996) observed that the monkeys had more difficulty responding to horizontal targets, suggesting that their manipulative behavior was less diverse than chimpanzees. This problem may, in part, explain the pigs’ poor performance relative to primates, as their ability to manipulate objects is significantly less dexterous and flexible.

Response biases are often inevitable when testing animals, and they emerged during testing with the pigs as well. For example, while Ebony, like all of the subjects, showed some level of side bias (left), he did correctly move the cursor to the right numerous times on all but the one-wall task. As previously noted, these trials created the smallest targets for the pigs. Side-bias training was instituted manually for all pigs upon observation of biases because, although the software titrated to an easier level of task difficulty if a subject made errors consistently, the program’s random generation of target locations did not facilitate training to overcome bias. This intervention was not successful, however. Manual side-bias training with objects, or with the joystick while the computer was turned off (necessary given the previously mentioned software limitations), did not appear to generalize to the joystick-operated task. A few explanations for this observation are plausible. First, Ebony may simply have been limited in either or both dexterity and the paw/snout/eye coordination needed to hit right-sided, one-walled targets. It is also possible that because the video-task apparatus was not centered in the pen due to constraints of the testing space, Ebony’s body positioning to complete such tasks may have further constrained his performance, given that additional training did not correct the side-bias problem with the joystick, although it was effective in correcting bias using objects (Croney, 1999). It is also possible that some degree of instinctive drift may have impacted his and the other pigs’ performance, especially as the tasks became more challenging and rewards for behaviors performed were reduced due to errors.

An alternative explanation for the difference between the pigs’ and primates’ performance that must be considered is that the pigs may have been unable to fully comprehend the concepts required to perform well on the SIDE task. Difficulties with the conceptual component of the task may have been due, in part, to the spatial discontiguity of the stimulus and response. Meyer et al. (1965) suggested that a primate’s learning efficiency might be impaired when the hand used to execute a response was placed in an area distant from the location of the discriminative stimuli. A similar rationale may have been a factor for the pigs, since the movement of their snouts was some distance from the images displayed on the monitor, and the lateral placement of their eyes may have contributed to a cognitive disconnect between their movements and the resulting changes appearing on the screen.

In addition to the difficulties posed by limited dexterity and vision, several methodological factors may also have impeded the pigs’ performance on the SIDE task. First, because a protocol for testing pigs using the joystick-operated video-game task paradigm had not previously been established, the methods used in the current experiment were exploratory. As such, some changes in procedures and equipment were necessary during the experiment to correct concerns as they emerged. For instance, early design flaws in the joystick apparatus were detected and required correction. Initially, the protective welded plastic area surrounding the joystick was too high and impeded movement of the joystick in all directions. In addition, positioning of the feed delivery tube attached to the automatic dispenser sometimes resulted in failure to deliver rewards to the pigs after correct responses early in training and required correcting. This delay in reinforcement following a correct response may have impeded the animals’ initial learning. Finally, the test pen was designed so that the joystick apparatus was positioned approximately 0.04 m away from the right side of the pen. This initial positioning proved to be significant in that it restricted the pigs’ abilities to stand or move to the right of the joystick.

Initial training procedures also proved to be problematic. One problem in the training process was that the pigs were allowed to work at their own pace, which resulted in a large set of data consisting primarily of four- and three-sided tasks. After the protocol was amended to require performance of a minimum number of two- and one-walled targets during each session, improved performance on these conditions was observed. However, the Yorkshire pigs had been terminated from testing by the time procedures were revised and thus did not benefit from the revision. Moreover, this change in training made it extremely difficult for the micro pigs to achieve the stringent criteria of Hopkins et al. (1996) for all facets of task acquisition.

Taken together, the failure of all subjects to meet the criteria for SIDE task acquisition may reflect the limitations imposed by procedural and methodological issues and by visual and motor skill limitations, rather than learning deficits. Although their performance was limited compared to the primates tested, the fact that they were able to perform as successfully as they did on one-walled targets suggests they acquired some important aspects of the task demands. However, it is impossible to determine to what extent their ability to demonstrate conceptual understanding of the SIDE task may have been constrained by their perceptual and motor capacities. Nonetheless, evaluation of their terminal test results showed that all pigs improved their performance with respect to the various target positions. This improvement was particularly noteworthy for the Yorkshire pigs (Hamlet and Omelet), who completed only a few hundred trials in their 10 weeks of training on the task. Furthermore, the high level of performance attained by one of the micro pigs (Ivory), regardless of target position or number of walls, strongly suggests some level of conceptual acquisition of the task.

In summary, the results of the present study underscore the importance of understanding the basic perceptual and motor capabilities of a species prior to developing appropriate methods of testing their cognitive abilities. While the joystick-operated video-game paradigm has proven suitable for testing several species, including monkeys, pigeons, and chimpanzees (Rumbaugh et al., 1989; Washburn et al., 1990; Spetch et al., 1992; Hopkins et al., 1996), it is not optimal for testing the cognitive abilities of pigs, as their performance was clearly hindered by dexterity limitations and visual constraints. Thorough investigations of the pig’s visual and motor capabilities are necessary before their cognitive abilities can be adequately assessed using this or any type of technology. Use of a computer touch screen may better address the problem of limited dexterity and would likely provide a more viable alternative in future computer-interfaced studies of the cognitive abilities of pigs.

We find that sharing personal experiences about a political issue—especially experiences involving harm—helps to foster respect via increased perceptions of rationality

Personal experiences bridge moral and political divides better than facts. Emily Kubin et al. Proceedings of the National Academy of Sciences, February 9, 2021 118 (6) e2008389118; https://doi.org/10.1073/pnas.2008389118

Significance: All Americans are affected by rising political polarization, whether because of a gridlocked Congress or antagonistic holiday dinners. People believe that facts are essential for earning the respect of political adversaries, but our research shows that this belief is wrong. We find that sharing personal experiences about a political issue—especially experiences involving harm—helps to foster respect via increased perceptions of rationality. This research provides a straightforward pathway for increasing moral understanding and decreasing political intolerance. These findings also raise questions about how science and society should understand the nature of truth in the era of “fake news.” In moral and political disagreements, everyday people treat subjective experiences as truer than objective facts.

Abstract: Both liberals and conservatives believe that using facts in political discussions helps to foster mutual respect, but 15 studies—across multiple methodologies and issues—show that these beliefs are mistaken. Political opponents respect moral beliefs more when they are supported by personal experiences, not facts. The respect-inducing power of personal experiences is revealed by survey studies across various political topics, a field study of conversations about guns, an analysis of YouTube comments from abortion opinion videos, and an archival analysis of 137 interview transcripts from Fox News and CNN. The personal experiences most likely to encourage respect from opponents are issue-relevant and involve harm. Mediation analyses reveal that these harm-related personal experiences increase respect by increasing perceptions of rationality: everyone can appreciate that avoiding harm is rational, even in people who hold different beliefs about guns, taxes, immigration, and the environment. Studies show that people believe in the truth of both facts and personal experiences in nonmoral disagreement; however, in moral disagreements, subjective experiences seem truer (i.e., are doubted less) than objective facts. These results provide a concrete demonstration of how to bridge moral divides while also revealing how our intuitions can lead us astray. Stretching back to the Enlightenment, philosophers and scientists have privileged objective facts over experiences in the pursuit of truth. However, furnishing perceptions of truth within moral disagreements is better accomplished by sharing subjective experiences, not by providing facts.

Keywords: morality, politics, political tolerance, moral psychology, narrative