Monday, December 30, 2019

Impacts of market integration on the development of American manufacturing, as railroads expanded through the latter half of the 19th century: Much larger aggregate economic gains than previous estimates

Railroads, Reallocation, and the Rise of American Manufacturing. Richard Hornbeck, Martin Rotemberg. NBER Working Paper No. 26594, December 2019.

Abstract: We examine impacts of market integration on the development of American manufacturing, as railroads expanded through the latter half of the 19th century. Using new county-by-industry data from the Census of Manufactures, we estimate substantial impacts on manufacturing productivity from relative increases in county market access as railroads expanded. In particular, the railroads increased economic activity in marginally productive counties. Allowing for the presence of factor misallocation generates much larger aggregate economic gains from the railroads than previous estimates. Our estimates highlight how broadly-used infrastructure or technologies can have much larger economic impacts when there are inefficiencies in the economy.

VI Interpretation

We estimate that the railroads substantially increased the scale of the United States’ economy: increasing the production and use of materials, spurring increased capital investment,
and encouraging population growth. The economic consequences of this expansion are substantially greater than previously considered because, in most counties, the value marginal
products of materials, capital, and labor were greater than their marginal costs. We do not
estimate that railroads reduced these market distortions, whether due to firm markups or
input frictions, but the railroads generated substantial economic gains by encouraging the
expansion of an economy with market distortions.
We calculate that aggregate productivity would have been 25% lower in 1890 in the
absence of the railroads, through declines in reallocative efficiency alone. We assume that
technical efficiency would have been unchanged in the counterfactuals, but this decline in
reallocative efficiency is equivalent to a 25% decline in technical efficiency (or total factor
productivity, TFP). It is challenging to estimate aggregate TFP growth, with the proper
price deflators, but estimates suggest that annual TFP growth was approximately 0.37%
from 1855 to 1890 and 1.24% from 1890 to 1927 (Abramovitz and David, 1973). That is,
the railroads effectively contributed 31 years' worth of technological innovation, by driving
increases in reallocative efficiency.
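The years-of-TFP-growth equivalence can be checked with a quick back-of-envelope calculation. A minimal sketch, assuming that "25% lower" means the counterfactual economy sits at 0.75 of actual productivity, so the railroads' contribution is a factor of 1/0.75 (the paper's 31-year figure presumably blends the two periods' growth rates):

```python
import math

# Railroads' productivity contribution: the counterfactual is 25% lower,
# so actual/counterfactual = 1 / (1 - 0.25).
gain_factor = 1 / (1 - 0.25)

def years_equivalent(annual_tfp_growth: float) -> float:
    """Years of compound TFP growth needed to match the gain factor."""
    return math.log(gain_factor) / math.log(1 + annual_tfp_growth)

# Annual TFP growth estimates from Abramovitz and David (1973).
print(f"at 0.37%/yr (1855-1890): {years_equivalent(0.0037):.1f} years")
print(f"at 1.24%/yr (1890-1927): {years_equivalent(0.0124):.1f} years")
```

At these two growth rates the conversion gives roughly 78 and 23 years respectively, bracketing the 31-year figure quoted above.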
The railroads’ 25% impact on productivity was worth 25% of GDP in 1890, or $3 billion
in 1890 dollars. As a comparison, the estimated cost of the railroad network in 1890 was $8
billion (Adams, 1895). We estimate that the railroads generated an annual private return
of 3.5% in 1890, which increases to an annual social return of 7.5% – 8.3% once we include
estimates from Fogel (1964) or Donaldson and Hornbeck (2016), and to 43% when we also
include our estimated impact on productivity. These estimates imply that the railroad
sector was capturing roughly 8% of its social return in 1890.
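The 8% capture figure follows directly from the return estimates. A back-of-envelope check, treating the 3.5% private return and 43% full social return as given:

```python
private_return = 0.035   # annual private return to the railroad sector, 1890
social_return = 0.43     # annual social return including the productivity impact

# Share of the social return captured privately by the railroad sector.
capture_share = private_return / social_return
print(f"captured share: {capture_share:.1%}")
```

This comes out to about 8.1%, matching the "roughly 8%" quoted above.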
Our estimated increases in productivity do not include the direct benefits of the railroads
from decreasing resources spent on transportation. To see this, consider that we would mechanically estimate no impact on productivity from the railroads if there were no differences
between value marginal product and marginal cost, whereas the economy would still benefit
through decreases in transportation costs. In our model, those decreases in transportation
costs are capitalized into higher land values.
Donaldson and Hornbeck (2016) estimate that agricultural land values would have fallen
by 60% without the railroad network, which, multiplying by an interest rate, generates
annual economic losses equal to 3.2% of GDP. The total loss of all agricultural land would
only generate annual economic losses equal to 5.35% of GDP, so an analysis of agricultural
land values could never find larger economic impacts.
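The arithmetic behind that upper bound can be sketched directly. A minimal check, assuming annual losses scale linearly with the share of land value destroyed, so the 60% decline worth 3.2% of GDP implies a full-loss bound of 3.2/0.6 (the small gap to the quoted 5.35% presumably reflects rounding in the inputs):

```python
decline_share = 0.60      # fall in agricultural land values without railroads
annual_loss_gdp = 0.032   # implied annual loss as a share of GDP

# Scale the annual loss up from a 60% decline to a total loss of all land.
full_loss_bound = annual_loss_gdp / decline_share
print(f"full-loss bound: {full_loss_bound:.2%} of GDP")
```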
The crucial difference in our approaches is that Donaldson and Hornbeck (2016) assume
an efficient economy, in which value marginal product is equal to marginal cost, such that all
output value is paid to factors. By contrast, our estimated increases in reallocative efficiency
reflect the creation of output value that is not paid to factors. In both of our analyses, the
railroads increase the scale of the US economy, but because we allow for value marginal
product to exceed marginal cost, this increase in economic activity generates surplus
or “profit” that is reflected in aggregate productivity growth rather than increases in land
values. Further, our estimated impacts on productivity do not include any economic gains
reflected in increased factor payments, and so our estimated impact on productivity is in
addition to impacts on land value.
A general implication for measuring economic incidence is that factor payments do not
include all economic gains when there are market distortions. More inelastically supplied
factors will bear more economic incidence, but there are additional economic gains that are
not captured by factor payments. We show that these additional economic gains can be
substantively large, particularly when new infrastructure investment or new technologies are
broadly used and encourage broad expansion of economic activity.
The additional economic gains, from decreasing resources spent on transportation, could
instead be measured directly by calculating the decreases in transportation costs using the
railroads instead of the waterways. This is precisely the social savings calculation in Fogel
(1964), which implies that our estimated impact on productivity is in addition to Fogel’s
estimate of 2.7% of GDP.
In considering why Fogel’s estimates do not include our estimated economic gains, we
highlight the importance of resource misallocation in welfare analysis more generally. Fogel
(1964) proposes a social savings calculation to bound the economic gains from the railroads.
Fogel focuses on the transportation sector, and looks to calculate the additional cost from
using waterways to transport goods instead of the railroads. This calculation is closely
related to aggregate productivity in the transportation sector, measured as revenue minus
costs: for transporting the same quantity of goods (fixing revenue), calculating the increase
in costs without the railroads.
David (1969) critiques Fogel’s calculations on several grounds, but much of this critique
is essentially calling attention to Fogel’s implicit assumption that value marginal product is
equal to marginal cost. This assumption is required for the increase in transportation costs
without the railroads to equal the value lost from decreased production in non-transportation
sectors. David (1969) proposes that this assumption would be violated by increasing returns
to scale, and Fogel (1979) responds by disputing the empirical magnitude of increasing
returns to scale.85 Fogel (1979) also makes this assumption more explicit: that in nontransportation sectors, firms’ value marginal product is equal to their marginal cost.
Our analysis relaxes this assumption, and estimates the economic consequences from
a broader range of market distortions, which restates the above critique by David (1969).
Rather than appealing to increasing returns to scale, we allow for a wide variety of distortions that can drive a wedge between the social benefit and private cost of firms expanding
production (e.g., firm markups, credit constraints, taxes and regulation, imperfect property
rights). The railroads decrease transportation costs, effectively subsidizing the expansion
of economic activities throughout the economy that have a positive social return (i.e., whose
value marginal product exceeds their marginal cost).
Fogel (1964, 1979) emphasizes that assuming an inelastic demand for transportation
provides an upper bound estimate on the railroads’ impacts, for the social savings calculation,
but the opposite is true in the presence of market distortions. A greater elasticity of demand
for transportation magnifies the economic impacts of the railroads by yielding greater changes
in activities whose value marginal product exceeds marginal cost across other sectors in the
economy. Fogel does not consider the indirect losses in other sectors due to reductions in
transported goods, because he aims to calculate the costs of maintaining the same levels of
transportation, but it is precisely because transportation would fall that there are such large
indirect losses in other sectors.
More generally, there is an analogous need to consider resource misallocation in partial
equilibrium welfare analysis. Harberger (1964) lays the foundation for much welfare analysis
in economics, using the example of calculating the economic cost of a tax, making a powerful
assumption that there are no other distortions in the economy. This assumption means that
it is not necessary to consider how a marginal tax affects other activities, which reflect
only small welfare “triangles,” and the welfare effects of the tax are largely captured by the
demand curve for the taxed activity. Harberger (1964) makes this assumption clear, and
notes that it probably has the effect of understating the true cost of a tax, but this assumption
is often overlooked in applications due to its substantial analytical convenience. In Fogel’s
application, when analyzing the impacts of a higher transportation cost (similar to a higher
tax), the demand curve for transported goods is used to capture the welfare effects, and the
mistake is to not consider impacts from resulting changes in other activities.
Our estimated impacts of the railroads are a reminder that indirect effects on other
economic activities can generate substantial economic benefits, which are missed in partial
equilibrium welfare analysis. When there is resource misallocation, such as due to firm
markups or capital constraints, and other activities are under-provided, then there are first-order
welfare gains from their encouragement. Only in a special case, when there are no
market distortions and other economic activities are efficient, can we invoke the envelope
theorem and consider only the direct economic effects.

British adolescent twins study: Contact with the justice system—through spending a night in jail/prison, being issued an anti‐social behaviour order (ASBO), or having an official record—promotes delinquency

Does contact with the justice system deter or promote future delinquency? Results from a longitudinal study of British adolescent twins. Ryan T. Motz et al. Criminology, December 29, 2019.

Abstract: What impact does formal punishment have on antisocial conduct—does it deter or promote it? The findings from a long line of research on the labeling tradition indicate formal punishments have the opposite‐of‐intended consequence of promoting future misbehavior. In another body of work, the results show support for deterrence‐based hypotheses that punishment deters future misbehavior. So, which is it? We draw on a nationally representative sample of British adolescent twins from the Environmental Risk (E‐Risk) Longitudinal Twin Study to perform a robust test of the deterrence versus labeling question. We leverage a powerful research design in which twins can serve as the counterfactual for their co‐twin, thereby ruling out many sources of confounding that have likely impacted prior studies. The pattern of findings provides support for labeling theory, showing that contact with the justice system—through spending a night in jail/prison, being issued an anti‐social behaviour order (ASBO), or having an official record—promotes delinquency. We conclude by discussing the impact these findings may have on criminologists’ and practitioners’ perspective on the role of the juvenile justice system in society.

Keywords: delinquency; family fixed effects; labeling; specific deterrence; twins


We sought to conduct a rigorous test between deterrence and labeling hypotheses. Drawing on data from a nationally representative and longitudinal birth cohort of British adolescent twins, we found that contact with the justice system—through spending a night in jail/prison, being issued an ASBO, or having an official crime record—promotes misbehavior, which supports the labeling hypothesis. With this in mind, we highlight four contributions from this study that warrant consideration. We then discuss some of the broader implications our findings might have for the justice system.

4.1 Contributions

First, we followed the call of previous research (see, e.g., Farrington, 2003; Murray et al., 2009; Piquero et al., 2011; Pogarsky, 2002) and employed one of the most rigorous nonexperimental methodological designs capable of accounting for a wide range of selection effects and confounding influences. Using the family fixed‐effects model (Kohler et al., 2011), we leveraged nationally representative twin data to take advantage of the natural experiment that twins provide by the fact that they share their family environment and their genetic endowments. Such family effects work to make twins similar to one another. By focusing on within‐twin pair differences, then, we were able to rule out the effects of these family environments and genetic influences, providing us the opportunity to glean some of the most precise estimates for the impact of justice system contact on future behavior. In doing so, we have demonstrated that twin samples and methods have utility for criminological theory testing that reaches beyond the typical strategy of estimating heritability (see Moffitt & Beckley, 2015).
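The within-pair logic of the family fixed-effects design can be illustrated with synthetic data. This is a hypothetical sketch, not the E-Risk data or the authors' estimation code: differencing outcomes within twin pairs removes any confounder the twins share, such as family environment and genetic endowment, while a naive pooled comparison stays confounded.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pairs = 2000

# Family-level confounder shared by both twins (genes plus environment).
family = rng.normal(size=n_pairs)

# Justice-system contact is more likely when the shared family factor is high,
# plus twin-specific noise -- so naive comparisons are confounded.
contact = (family[:, None] + rng.normal(size=(n_pairs, 2)) > 0.5).astype(float)

# Later delinquency: an assumed labeling effect of 1.0, plus the confounder.
beta = 1.0
delinquency = beta * contact + 2.0 * family[:, None] + rng.normal(size=(n_pairs, 2))

# Naive pooled regression slope, ignoring family structure.
x, y = contact.ravel(), delinquency.ravel()
naive = np.cov(x, y)[0, 1] / np.var(x, ddof=1)

# Family fixed effects: difference within each twin pair, which removes
# everything the twins share, leaving only the within-pair contrast.
dx = contact[:, 0] - contact[:, 1]
dy = delinquency[:, 0] - delinquency[:, 1]
within = (dx @ dy) / (dx @ dx)

print(f"naive slope: {naive:.2f}  within-pair slope: {within:.2f}")
```

With these simulated numbers the naive slope is badly inflated by the shared family factor, while the within-pair slope recovers the assumed effect of 1.0.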
A second feature of this study is that we drew on three separate measures—two that were self‐reported and one obtained from official Ministry of Justice records—of justice system contact. The pattern of findings was substantively consistent across these specifications, providing robust support for the labeling hypothesis. The findings across such forms of contact demonstrate that even sanctions that do not penetrate far into the justice system are potentially criminogenic, an outcome that has important implications for policy. Of interest to labeling theorists, receiving an ASBO was found to be a substantively strong predictor of later misbehavior. This is important because, in our opinion, the ASBO represented an archetypal label—recall that it was not intended to be punitive; rather, it was intended to be preventative by identifying those who were at risk of bad behavior. It was also intended to be a public label, and that is exactly the effect it seemed to have had. We believe the findings from the ASBO analysis are particularly revealing given this context even though ASBOs are no longer in use.
Third, we analyzed broad‐spectrum delinquency as an outcome rather than an official outcome (e.g., rearrest or reconviction) that is more commonly assessed in the deterrence and labeling literatures. The results for justice system outcomes like rearrest may be biased because individuals who experience such contact are often at an increased risk for future contact with the justice system simply because they are known by its actors, such as arresting police officers. An outcome variable such as delinquency, therefore, allowed us to observe change in behavior that is not biased by the actions of justice system actors. Furthermore, by relying on self‐reported delinquency, we can capture delinquent and illegal acts done by the participants that may not be known to the justice system, which would not be captured if we were to rely on official records. For these reasons, we believe the focus on self‐reported delinquency represents an important contribution to the labeling literature.
Fourth, we relied on a sample of individuals who are within the primary age range for engaging in antisocial behavior (i.e., 18‐year‐olds). This is important as it captures the impact of justice system contact for those who are peaking in their criminal careers. The impact of such contact for this population is notable as the increase in problem behavior may lead to a downward spiral of cumulative continuity for certain youth (e.g., Caspi, Bem, & Elder, 1989; Moffitt, 1993; Nagin & Paternoster, 1991; Sampson & Laub, 1992). Yet it should be noted that, at this time, we cannot observe how the increases in scores for delinquency will go on to affect participants’ criminal trajectories. Follow‐up analyses of this cohort with future phases of data collection will be better suited to answer that question.

4.3 Broader considerations

With the contributions of this study in mind, we now consider the broader substantive, theoretical, and ethical concerns that may stem from them. Particularly, we focus on the concerns revolving around the role of the justice system and its impact on juveniles. With evidence that the impact of contact with the justice system is a substantively negative one, an interesting question can be raised: Why would we have expected contact with the justice system to have a deterrent effect? Perhaps if justice system contact caused people to “fear their future self,” we would see deterrent effects (Paternoster & Bushway, 2009). But what we found is that justice system contact may have the opposite effect—rather than causing people to fear their future self, it may cause them to lose confidence in their future self. Therefore, the current system may work in a way that does not motivate individuals to conform to the norms of society. Instead, it leads young people to doubt their ability to get themselves out of the hole they have dug.
This makes sense when we consider the real‐life consequences of spending time in jail—the event itself is often embarrassing and shameful. Typically, it consists of (at least) an overnight stay followed by a visit with a judge the next morning. The family often has to get involved for the young person to be released back into the community, which then causes anger, hostility, and embarrassment among family members. Given that family is an important part of the desistance process, weakening those social bonds is unlikely to have a crime‐reducing effect. Furthermore, these reactions are often extended out to other interpersonal relationships in different settings, and as these relationships are ruined, prosocial connections are further attenuated, pushing the labeled adolescent further away from conventional society.
What does this mean for the justice system as it is currently constructed? We do not believe our findings show support for a shift to nonintervention. Rather, we believe it is important for the justice system and its actors to recognize the potentially negative impact it has. The public should be aware that the system is for the purpose of justice and retribution and that a utilitarian outcome such as specific deterrence is unlikely. With this in mind, our findings can be used to extend two policy recommendations.
First, although not a test of these hypotheses, we believe our findings fall in line with the arguments of the principles of effective intervention (see Andrews, 1995; Bonta & Andrews, 2016; Gendreau, 1996), which propose low‐risk offenders should not be funneled through official justice system channels. There should be diversionary programs set up for these types of offenders so that they may be able to avoid the labeling process. A metaphor might help explain: Medical doctors do not send a patient suffering from a cold to the emergency room. Even though the patient can certainly get treatment there, the visit would likely be counterproductive as the patient would be exposed to far more harmful viruses and diseases that may ultimately result in worse health. Study findings have repeatedly shown that when low‐risk offenders are brought into the justice system, the outcome is almost exclusively iatrogenic (e.g., Gatti, Tremblay, & Vitaro, 2009; Lowenkamp, Latessa, & Holsinger, 2006; Nagin, Cullen, & Jonson, 2009; Sperber, Latessa, & Makarios, 2013).
Second, our findings do not show support for fewer (or more) juvenile arrests. But they do indicate that if arrest rates are going to be maintained at their current level (or if they are to be heightened), then there should be a concerted effort toward offsetting the negative pathways that they create. If policy makers gain an understanding of these processes and pathways, they can develop and implement strategies to prevent labeling effects. Only then will the system have a chance of deterring criminal activity by way of contact with the offender.

Weak evidence that the national homicide rate spiked in 2015: The 2015 homicide rate increased above the 90% prediction interval for our model, but not more conservative intervals

Yim, Ha-Neul, Jordan R. Riddell, and Andrew P. Wheeler. 2019. “Is the Recent Increase in National Homicide Abnormal? Testing the Application of Fan Charts in Monitoring National Homicide Trends over Time.” SocArXiv. November 4. doi:10.31235/

Purpose: The goal of this study is to compare the increase in the 2015 national homicide rate to the historical data series and other violent crime rate changes. 
Methods: We use ARIMA models and a one-step ahead forecasting technique to predict national homicide, rape, robbery, and aggravated assault rates in the United States. Annual Uniform Crime Report data published by the Federal Bureau of Investigation are used in our analysis.
Results: The 2015 homicide rate increased above the 90% prediction interval for our model, but not more conservative intervals. Prediction intervals for other national level crime rates consistently produced correct coverage using our forecasting approach.
Conclusions: Our findings provide weak evidence that the national homicide rate spiked in 2015, though data for 2016 – 2018 do not show a continued anomalous increase in the U.S. homicide rate.

Data and code to replicate the findings can be downloaded from


Media outlets reported the seemingly abrupt increase in homicide in big cities in the United
States and scholars examining homicide trends also argued the change in homicide was substantial
enough to demand scrutiny and attention in the interest of public safety (Rosenfeld et al., 2017).
Although a growing number of studies have explored the significance of the 2015 increase in homicide,
their findings rely on percent change or trend analysis, techniques that are limited in their ability to
detect abnormal patterns in recent changes over time (Wheeler, 2016; Wheeler & Kovandzic, 2018).
This study builds upon the prior literature by using a forecasting method with an accompanying fan
chart to examine whether the 2015 homicide rate increase is significant or just a result of random
fluctuations. Past research using percent change techniques has suggested the 2015 increase in the
national homicide rate was significant. In conjunction with their findings, we offer a similar
interpretation of the homicide increase in 2015, although having arrived at the conclusion through the
use of ARIMA prediction models and fan charts – strengthening the overall conclusion that the national
homicide rate significantly increased in 2015. Our model estimates that the 2015 increase was near a 1
in 100 chance occurrence given the historical data, which we believe is sufficient to suggest that 2015
was an anomalous increase. Although such a bright line is ultimately arguable, individuals can set such a
threshold themselves to determine if such a chance occurrence is worth further investigation or any
subsequent responses.
Our comparative analysis indicates that the change in 2015 is pronounced only in homicides and
not in other violent crime. The analysis above for homicides finds that the 90% prediction intervals
failed to cover the 2015 homicide rise while observed annual rates for the other three crime types are
covered by the prediction bands. The overall temporal patterns for robbery, a crime that trends closely
with homicide, are similar to those of homicide during the predicted period; marked by a sudden decline
in the 1990s, slight fluctuations in the 2000s, and decreasing and seemingly spiking patterns in the 2010s.
Despite the relative similarity in the patterns between homicide and robbery, the homicide rise in 2015
was significant while the increase in robbery was not. In short, comparative analysis found that the
rising pattern is only significant in homicides, implying the rise in the 2015 homicide rate was abnormal
compared to other violent crimes. It is important to note here that while homicide, robbery, and
aggravated assault rates increased from 2014 to 2016 and decreased from 2016 to 2018, the rape rate
did not decline, instead increasing from 2013 to 2018. It is beyond the scope of our work here to explain the
reason the rape rate did not decrease after 2016 as the other violent crime rates did, but future research
can work to uncover potential causal factors.
Even though our results align with prior studies, our paper contributes to the literature on
temporal crime patterns by suggesting and demonstrating an alternative way to monitor crime trends and
annual fluctuations in crime statistics. Federal government employees have mainly used percent change
calculations in their official reports, which can result in chasing the noise and inefficient uses of
resources. As a result, policy implemented in response to such analysis may not be as effective or could
result in unintended consequences; however, the primary concern here is that percent change
calculations cause one to draw the conclusion that a policy response is necessary when it may not be
needed. For example, the government responded to the Police Executive Research Forum (PERF)
request for financial support in response to crime increases from 2004 to 2006 by funding them to hire
more police officers, increase training and provide technical assistance. However, Rosenfeld and Oliver
(2008) later found the crime increase was not substantial and just the result of external variables rather
than a lack of law enforcement officers. Similar inferences may be made based on more recent trends,
where short term fluctuations should not be viewed as so anomalous that they demand immediate
attention and investment. If practitioners or policymakers employ the suggested method in the policymaking
process, it may lead to timelier and more effective policy responses. Lawmakers and police
departments can use the forecasting method with daily, weekly, monthly, or yearly data to estimate the
number of police officers they will need to respond to a certain number of crime incidents. Academics
can do the same to produce new knowledge of crime trends without relying on a method that is biased
and volatile (percent change) or having to wait to complete a retrospective study of longer term trends
(via structural breaks).
The methods used in this study can also be directly applied by criminal justice practitioners.
Specifically, the one-step ahead forecasting technique can be applied as part of the problem-oriented
policing approach to law enforcement crime prevention strategies. State and federal prison bureaus can
use this to estimate future prison populations and ensure they have the appropriate capacity each year.
Prosecutors, defense attorneys, and courts at any level can do the same to adequately provide counsel
and other court functions for cases in their jurisdiction. Utilizing ARIMA models instead of percent
change or trend analysis may be able to functionally improve budget and personnel appropriations.
A current limitation of applying such advice is that the FBI is often a year behind in reporting
national level homicide estimates. Thus such forecasts are too old to be effectively used for allocation
decisions, at least at the federal level. However, the utility of such forecasts to assist in anomaly
detection is still relevant. It would be straightforward for the FBI to not only release direct estimates of
homicides, but also provide such error intervals to better contextualize such ups and downs. This would
preempt overinterpreting minor fluctuations, whether by the media, politicians, or criminal justice
practitioners.
We do not suppose prediction modelling to serve as the only method for exploring recent
changes in criminal justice related issues, for in our own work we recognize there are limitations
involving data and analysis. One of the limitations of this study is our reliance upon official UCR data.
While homicide is likely to be detected or reported, the other violent crimes could be underreported and
introduce bias into our estimates. Also, the use of national estimates of crime may add further error:
national crime rate estimates are the product of an imputation process applied to incompletely reported
crime and missing cases. To the extent these estimates differ from the true figures, our results are less
valid, and such errors could lead one to misinterpret the finding that the homicide increase is abnormal.
Still, the forecasting and fan chart approach offers a viable alternative method, as shown by the coverage
of the prediction intervals for not only homicides but other national level violent crime estimates here.
But its application will always have limitations. The prediction intervals are only as good as the
specified model and the data it is supplied.
Another limitation is that we rely on statistical methods that assume the underlying data contain no
errors themselves. The same idea of a fan chart could in principle be applied to the National Crime
Victimization Survey (Rennison, 2000), although the NCVS provides estimates of the prevalence of
particular acts rather than official counts. Future research may attempt to build models that not only take into account
forecast error in identifying significant shifts in crime, but also sampling error as well.
Another limitation of the work is that while we can identify if an increase is abnormal or within
the prior historical changes, we cannot directly identify a cause of those changes. So while we identify
that the 2015 homicide rate has some evidence it is anomalous compared to historical patterns, we
cannot attribute it to any particular cause, such as de-policing. We do not include any structural level
variables like past work (Gaston et al., 2019; Gross & Mann, 2017; Rosenfeld, 2016; Rosenfeld &
Wallman, 2019) that tries to tease out the effect of changing drug markets, civilian concerns of police
violence, or de-policing strategies. While incorporating different covariates may help identify different
factors that contribute to macro level homicide trends, including covariates rarely increases forecasting
accuracy (Hyndman & Athanasopoulos, 2018). Since those variables often themselves have uncertainty,
they will not necessarily improve forecasts of future crime trends. Still, this technique can be used as a
first stage test as to whether some recent shock is causing homicides to increase. If recent changes fall
within historical prediction bands, it does not offer evidence that anything abnormal is currently
occurring. One should not use minor fluctuations in homicide rates (or any crimes) to ex ante attribute
those short term fluctuations to any particular exogenous shock.
Finally, we are unable to determine if the observed abnormality in the 2015 homicide rate is a
singular outlier, or part of a structural change in the homicide trend. One possibility is that recent
increases in macro level crime trends have not been uniform, so it is not clear whether the long term crime
drop beginning in the 1990s has plateaued, or whether overall crime is starting to slowly increase in an entire
reversal from the prior crime decline. Another is that different factors over time, such as the lethality of
violence (Berg, 2019), may be causing longer term trends, but are too small to be effectively detected by
this technique. It is also the case that singular cities can disproportionately contribute to national level
homicide increases (Rosenfeld, 2016).
Even if an anomalous national-level change is identified, that alone would not say whether such a
shock is relevant for the entire U.S. or just a few select jurisdictions. Similar analysis can, however,
be conducted at the city level to identify anomalous changes (Wheeler & Kovandzic, 2018).
But city-level analyses will have more forecast error than national-level estimates, because
homicide is a quite rare event. We nonetheless believe national-level analyses are relevant, especially
given they are often used to justify particular policy responses. Given that many cities often follow
national-level trends (McDowall & Loftin, 2009), it may be better to start with national-level analysis
from a monitoring perspective and, when an anomaly is detected, drill further into the data
to uncover whether particular cities are driving that change.
Researchers, practitioners, media outlets, and politicians should continue to monitor violent
crime rates, comparing them to one another and within the data series before making a definitive
statement on the “trend” of any type of violence. Crime analysts employed by larger police departments
have direct access to calls for service, incident, and arrest data for their city. It is becoming more
common for police departments in big cities to make their crime data publicly available and to update an online database each week or month, facilitating timely access by
researchers independent of the police department. Such data availability allows studies of local crime
rates, though aggregated national studies will still lag behind actual crime occurrences due to the length
of time (up to 18 months) it takes for official UCR data to be published. Given we find evidence that
homicide trends follow a random walk pattern, as do other researchers (McDowall, 2002; McDowall &
Loftin, 2005), it would suggest the overall rate may meander up and down in the short run for any
particular period, and so two- or three-year increases are not proof that the crime declines
observed in the prior 25 years are reversing. The fan charts we illustrate here can help identify short-run
aberrations, and thus give more immediate feedback to policy makers on crime trends, without needing
to wait for years to identify long-term trends.
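The random-walk logic behind such fan charts can be sketched in a few lines: under a Gaussian random walk, the h-step-ahead forecast is the last observed value, and the forecast standard error grows with the square root of the horizon, so the prediction bands fan outward. The function names and the illustrative rate series below are my own, not taken from the paper or from actual UCR data; this is a minimal sketch of the idea, not the authors' implementation.

```python
import math
import statistics

def random_walk_bands(series, horizon, z=1.96):
    """Fan-chart-style prediction bands for a Gaussian random walk.

    The h-step forecast is the last observed value; the band half-width
    grows as z * sigma * sqrt(h), where sigma is the one-step innovation
    standard deviation estimated from first differences.
    """
    diffs = [b - a for a, b in zip(series, series[1:])]
    sigma = statistics.stdev(diffs)  # sample SD of year-over-year changes
    last = series[-1]
    return [(last - z * sigma * math.sqrt(h), last + z * sigma * math.sqrt(h))
            for h in range(1, horizon + 1)]

def is_anomalous(series, new_value, z=1.96):
    """Flag a one-step-ahead observation falling outside the band."""
    lo, hi = random_walk_bands(series, horizon=1, z=z)[0]
    return not (lo <= new_value <= hi)

# Illustrative numbers only (not actual homicide rates):
rates = [5.5, 5.2, 5.0, 4.9, 4.7, 4.5, 4.4, 4.5, 4.4]
print(random_walk_bands(rates, horizon=3))  # bands widen with horizon
print(is_anomalous(rates, 4.9))  # True: outside the one-step band
print(is_anomalous(rates, 4.4))  # False: within historical fluctuation
```

The key design point is the sqrt(h) widening: a jump that looks alarming one year ahead may sit comfortably inside the band three years ahead, which is exactly why short-run fluctuations alone are weak evidence of a trend reversal.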

False awakenings in lucid dreamers: How they relate with lucid dreams, and how lucid dreamers relate with them

False awakenings in lucid dreamers: How they relate with lucid dreams, and how lucid dreamers relate with them. Buzzi, Giorgio. Dreaming, Vol 29(4), Dec 2019, 323-338.

Abstract: In this article, some previously unreported findings from an old web survey about false awakenings (FAs) in 90 lucid dreamers are discussed. FAs have been reported to be frequent concomitants of lucid dreams, but objective data are lacking. In the present study, a positive correlation was found between the reported frequencies of FAs and lucid dreams, r = .51, p < .001, and 56 (62%) subjects reported experiencing habitual transitions from FAs to lucid dreams (and/or vice versa). These findings confirm previous anecdotal reports with objective data and suggest a similar neurophysiologic basis for the two kinds of experience. FAs appear to be characterized by a strong propensity of the experients to exercise a metacognitive judgment upon their state by means of reality checks (76% of respondents). Reality checkers reported that lucid dreams were a habitual termination of their FAs significantly more often than nonreality checkers (p < .001). This appears to be the first empirical datum in support of the frequently self-reported ability of lucid dreamers to actively turn their FAs into lucid dreams. Given the similarity between FAs and sleep paralysis in terms of possible state overlap, practice in performing reality checks could be a useful tool for managing some cases of recurrent sleep paralysis as well.

From False Awakenings in Lucid Dreamers. Michelle Carr. Psychology Today, Dec 29 2019.

Fifty-six subjects (62%) reported that they noticed anomalies or bizarre situations during False Awakenings — for example, details out of place or devices not working properly (e.g., light switches or digital clocks).
“Usually (my False Awakenings) start with me waking up in bed. I get up and go check on my children to see if they are sleeping. I may go into the living room or back into the bedroom ... then I go back to sleep and when I wake up for real I realize that some things were out of place and that I had yet another false awakening.”

Sixty-eight subjects (76%) actively tested the dream to confirm whether they were awake or asleep, and 45 claimed that they used false awakenings as a bridge to lucidity:

“...a good way of inducing lucid dreams as I often perform reality checks during False Awakenings.”
“...hold my nose and breathe through it (you can if you’re dreaming).”
“...turn something on; if it’s a dream it usually comes with mechanical failure.”