Wednesday, September 8, 2021

Systematic Bias in the Progress of Research: The authors analyze the extent to which citing practices may be driven by strategic considerations

Systematic Bias in the Progress of Research. Amir Rubin and Eran Rubin. Journal of Political Economy, Volume 129, Number 9, September 2021. https://www.journals.uchicago.edu/doi/10.1086/715021

Abstract: We analyze the extent to which citing practices may be driven by strategic considerations. The discontinuation of the Journal of Business (JB) in 2006 for extraneous reasons serves as an exogenous shock for analyzing strategic citing behavior. Using a difference-in-differences analysis, we find that articles published in JB before 2006 experienced a relative reduction in citations of approximately 20% after 2006. Since the discontinuation of JB is unrelated to the scientific contributions of its articles, the results imply that the referencing of articles is systematically affected by strategic considerations, which hinders scientific progress.

Alex Tabarrok comments (Strategic Citing, Marginal Revolution): Rubin and Rubin have a unique test of this behavior. For administrative reasons, the Journal of Business, a top journal in finance, stopped publication in 2006. Thus, after 2006, there were fewer strategic reasons to cite JB papers even though the scientific reasons to cite these papers remained constant. The authors test this by matching articles in the JB with articles in similar journals published in the same year and having the same number of citations in the two years following publication; thus they match similar articles with similar citation trajectories. What they find is that post-2006 the citation count of the JB articles falls substantially off the expected trajectory. [graph]

The finding is robust to controlling for self-citations, own-journal citations, and a variety of other possibilities. The authors also show that deceased authors get fewer citations than matched living authors. For example, living Nobel prize winners get more citations than dead ones even when they were awarded the prize jointly.

[...]
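The matching-plus-difference-in-differences design described above can be illustrated with a toy calculation on synthetic citation counts. Everything below (numbers, function name, group sizes) is illustrative and not from the paper; the core comparison is the change in citations for treated (discontinued-journal) articles minus the change for matched control articles:

```python
# Toy difference-in-differences on synthetic citation counts.
# "Treated" = articles from the discontinued journal; "control" = matched articles
# with a similar early citation trajectory.

def did_estimate(treated_pre, treated_post, control_pre, control_post):
    """(treated_post - treated_pre) - (control_post - control_pre), in means."""
    mean = lambda xs: sum(xs) / len(xs)
    return (mean(treated_post) - mean(treated_pre)) - (
        mean(control_post) - mean(control_pre)
    )

# Hypothetical per-article yearly citation counts.
treated_pre = [10, 12, 11, 9]    # discontinued journal, before the shock
treated_post = [7, 9, 8, 6]      # discontinued journal, after the shock
control_pre = [10, 11, 12, 10]   # matched controls, before the shock
control_post = [10, 12, 11, 9]   # matched controls, after the shock

effect = did_estimate(treated_pre, treated_post, control_pre, control_post)
print(effect)  # -> -2.75 (treated articles lose citations relative to controls)
```

In the paper the estimate comes from a regression with controls and fixed effects; the sketch only shows the pre/post comparison across the two groups that drives the identification.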

---

Discussion: Additional implications of the results

In this section, we discuss implications driven by the parallels between academic research and firm innovation. First, citations of patents may also be subject to strategic citations (of different sorts), which requires caution in the inferences made in innovation studies. Second, we suggest that if authors of academic studies were to include more information on the references cited (as is done in patent applications), it could benefit academic research and help reduce adverse citing practices.

The finance literature has recently seen a growth in studies devoted to innovation (Lerner and Seru, 2017). Most researchers use two types of proxies to measure a company's innovation output: the number of patents it is granted (e.g., in a given year) and the number of citations its granted patents receive following their approval.32 The disadvantage of the former proxy is that not all patents are of similar quality, so the latter is widely considered the better proxy for the scientific contribution of the firm.33 In the literature, patent citation counts are most often treated as an (exogenous) outcome determined by the innovation of the firm or its CEO. However, citation counts of patents may themselves be affected by the strategic considerations of the firms citing them.

Consider, for example, the relation between the decision to go public and the firm's future innovation (Acharya and Xu, 2017; Bernstein, 2015). Once a firm becomes public, it is more visible, has more resources, and is likely to be serviced by more competent attorneys. These facts may lead its competitors to cite the public firm's patents more often than in its pre-IPO period, because after its IPO the company is more capable of suing others for violating its intellectual property rights.
Hence, if a researcher observes a higher level of citation counts in the post-IPO period, it may be due not only to a higher level of innovation but also to a change in the citing behavior of competitors. Similarly, citing practices may change after a merger not only because of synergies (Bena and Li, 2014) but also because former rivals become cooperators, which may alter strategic citing behavior. There is also evidence that patents of firms with overconfident CEOs obtain more citations (Hirshleifer, Low, and Teoh, 2012). It would be interesting to learn to what extent these citations differ because overconfident CEOs prefer to engage in risky innovations, and to what extent competing firms change their citing behavior because they are more wary of such CEOs' aggressiveness, which may lead to litigation.

The strategic citing behavior that we uncover seems to be facilitated by the difficulty of monitoring it: more trivial, easy-to-monitor, agency-related citations, such as citations of editors' papers, do not seem to be pervasive in the data (see the appendix B analysis). As such, the monitoring of adverse citing practices in top-tier publications could benefit from the higher level of resolution in the information that currently exists in patent applications. References in patents are classified as provided either by the inventor (firm) or by the examiner of the patent. If one wants to follow the knowledge trail of the innovation process, only the inventors' citations matter, because the examiners' citations are added only ex post, after the patent was actually filed (Alcacer and Gittelman, 2006). In academic research, the situation is similar in that the cited references are not equally important for a given study. Some of the cited papers are building blocks for arguments, some yield similar conclusions, and some provide opposing interpretations.
Most importantly, some papers overturn a previous result because of a possible mistake or an overlooked fact in that previously published paper. Similar to the categorization of patent citations, it could be helpful if academic authors were required to classify their references according to the way they were used in their research. A recent paper by Catalini, Lacetera, and Oettl (2015) suggests that even a simple characterization of references, in terms of whether they are cited for their contributions or their flaws, can increase the field's understanding of the merits of research articles. If authors were to indicate their perception of their references' categories, the relevance of the cited work would become clearer, and consequently the academic research process would improve. A reference categorization process should reduce the tendency of authors to engage in agency citations, and monitoring the classification may become one of the important tasks of referees. Relatedly, it may be worthwhile to provide descriptive information about the references, such as the fraction of top-tier articles in the list (a high fraction may be indicative of adverse citing practices) and the number of cases in which a reference is the sole support for a particular point (possible evidence of neglect of other relevant work). Finally, given our finding that agency citations increase with the number of authors, it may be beneficial to identify the author responsible for the integrity of the reference list, so that it relates to the appropriate previous work. For example, the corresponding author could be designated as responsible for this issue.
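A minimal sketch of what such a reference-categorization scheme might look like. The role labels, field names, and summary statistics below are hypothetical illustrations of the proposal, not anything specified in the paper:

```python
# Hypothetical schema for classifying cited references by their role in a study,
# plus the descriptive statistics a journal could require alongside the list.
from collections import Counter
from dataclasses import dataclass

ROLES = {"building_block", "similar_result", "opposing_view", "overturned"}

@dataclass
class Reference:
    key: str          # e.g. "Bena and Li (2014)"
    role: str         # one of ROLES, declared by the citing authors
    top_tier: bool    # published in a top-tier journal?

def summarize(refs):
    """Counts per role and the share of top-tier citations in the list."""
    assert all(r.role in ROLES for r in refs)
    by_role = Counter(r.role for r in refs)
    top_tier_share = sum(r.top_tier for r in refs) / len(refs)
    return by_role, top_tier_share

refs = [
    Reference("Paper A", "building_block", True),
    Reference("Paper B", "opposing_view", False),
    Reference("Paper C", "building_block", True),
    Reference("Paper D", "overturned", False),
]
roles, share = summarize(refs)
print(dict(roles), share)  # share of top-tier citations here is 0.5
```

Referees could then audit the declared roles, and an unusually high top-tier share could flag the adverse citing practices the authors describe.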


 32 Kogan et al. (2017) provide evidence that a measure of the market reaction to patents explains the economic growth stemming from patents better than citation counts do (e.g., Moser, Ohmstedt, and Rhode, 2018; Abrams, Akcigit, and Popadak, 2013). One possibility is that strategic citations distort the citation count measure so that it does not reflect a patent's scientific value.

33 Note that in academic research, the number of publications (analogous to the number of patents) is often perceived as a poor measure of an author's contribution, and measures such as the i10-index (Google Scholar) ignore publications with few citations. This raises the question of whether the benefits of having two measures for robustness, as is common in the innovation literature, outweigh the costs of a noisy measure that can yield different results. In fact, one could use the differences between the two measures to better identify the strategic aspects of the innovation process. For example, it is known that firms may file a patent not to open a new field (which tends to attract future citations) but as a boundary of scope, to prevent others from pursuing inventions in a certain area. The difference between the two measures could potentially proxy for such a tendency.

