Sunday, April 2, 2023

I observed line performance actually dropping when lines were actively supervised, a result I began calling a reverse Hawthorne effect

Bernstein, E. 2012. The Transparency Paradox: A Role for Privacy in Organizational Learning and Operational Control. Administrative Science Quarterly, 57(2): 181–216.


Hiding in broad daylight.

During this first phase of the research, it became clear almost immediately that operators were hiding their most innovative techniques from management so as not to “bear the cost of explaining better ways of doing things to others” or alternatively “get in trouble” for doing things differently. One of the first rules in which my researchers were trained by peers was how to act whenever a customer, manager, line leader, or any other outsider came in sight of the line. First the embeds were quietly shown “better ways” of accomplishing tasks by their peers—a “ton of little tricks” that “kept production going” or enabled “faster, easier, and/or safer production.” Then they were told “whenever the [customers/managers/leaders] come around, don’t do that, because they’ll get mad.” Instead, when under observation, embeds were trained in the art of appearing to perform the task the way it was “meant” to be done according to the codified process rules posted for each task. Because many of these performances were not as productive as the “little tricks,” I observed line performance actually dropping when lines were actively supervised, a result I began calling a reverse Hawthorne effect because productivity fell rather than rose in the presence of an observer. My embeds’ privileged role as participant-observers, along with the substantial hours of research time on the lines, allowed me to collect numerous examples of these productively deviant behaviors, the performances that were used to hide them, and their antecedents, across categories of tasks. Table 1 provides some selected examples, and Online Appendix A provides even more. Meanwhile, suggestion boxes on every line remained empty.

...

...In most cases, the hidden behavior involved doing something “better” or “faster” or to “keep production going,” often by engaging in activities that operators claimed were “not hard” and had been learned by “watching [others] do it,” a form of tribal knowledge on the factory floor. What operators described as “their” [management’s] way of doing things often involved “more procedures” and was “a lot slower,” whereas the improved, more “fluid” methods were necessary to avoid complaints from management about the line “being so slow.” In an operator’s words, the deviance doesn’t “cause any [quality or safety] problems and it keeps production moving.”

Such private deviance in workplaces is common and well documented (Roy, 1952; Burawoy, 1979; Mars, 1982; Anteby, 2008). What made the deviance in this context so interesting is that so much of it appeared to be productive for line performance—and that such productive deviance existed even though the workers, who were paid a flat rate by shift and not piece rate, had no financial incentive to enhance performance. A shift’s quota was set by production managers for clusters of similar lines based on demand for the products being produced, and performance expectations (e.g., the number of defect-free devices produced per hour) were based on a combination of engineers’ pilot testing of lines during the initial ramp-up of that product’s production and an assumption of learning over time, based on previous PrecisionMobile experience with similar products and tasks. Exceeding expectations resulted in waiting time, standing at the stations, at the end of a shift, but there was little more positive incentive than that. Nor did negative incentives, such as disciplinary methods or penalties, explain the productive deviance. When lines failed to meet performance expectations, traditional Toyota Production System or TQM methods—poka-yoke, in-station quality control, jidoka, five whys, kaizen, small group activities (SGA), Ishikawa fishbone diagrams, among others—were employed to find the root cause and correct the error. Discipline of individual operators could range from simple warnings to removal, but though the embeds witnessed a few warnings, they saw nothing more significant than that. In contrast to several other large contract manufacturers in the region, PrecisionMobile had a reputation among the workers for being one of the best local places to work, and at least one operator cited fairness in discipline as part of the reason. Nothing we witnessed about the incentive structure explained the workers’ motivation to be productively deviant.
Although the factory was located in China, its management systems and approach were quite standard globally, or what the company called “best practice.” On visits to similar PrecisionMobile facilities elsewhere in the world, I found that the systems were nearly identical.


Discussion [of study 1]

Privacy and the reverse Hawthorne effect.

The participant-observers’ experiences at Precision were not consistent with prior theory that transparency enables performance. Instead, transparency appeared to keep operators from getting their best work done. The operators’ choice of the word “privacy” went to the core of my observations of these behavioral responses to transparent factory design. Mechanisms for achieving transparency improved the vision not only of the observer but also of the observed, and increased awareness of being observed in this setting had a negative impact on performance, generating a reverse Hawthorne effect.

This unanticipated outcome may, in part, be explained by Zajonc’s (1965) finding that mere exposure to others affects individual behavior by activating dominant, practiced responses over experimental, riskier, learning responses, possibly more so in an evaluative context (Cottrell, 1972); such exposure has also been found to encourage a number of other social facilitation dysfunctions (Hackman, 1976; Bond and Titus, 1983). Similarly, at the group level, increased observability can lead to less effective brainstorming (Paulus, Larey, and Ortega, 1995), blind conformity (Asch, 1951, 1956), and groupthink (Janis, 1982). But qualitative data collected at Precision suggest that the reverse Hawthorne effect went beyond passive social facilitation effects to something more intentional and strategic, thus necessitating a look at the full implications of what the operators referred to as the need for and value of “privacy” on the factory floor.

A vast interdisciplinary body of theory, located primarily outside of the management sciences, argues for the existence of an instrumental human need for transparency’s opposite, privacy—what Burgoon et al. (1989: 132) defined as “the ability to control and limit physical, interactional, psychological, and informational access to the self or to one’s group” (see also Westin, 1967; Altman, 1975; Parent, 1983; Schoeman, 1984; Solove, 2008). The need for privacy exists at both individual and group levels. Simmel (1957: 1) stated that normal human behavior involves cycles of engagement and withdrawal from others—“directly as well as symbolically, bodily as well as spiritually, we are continually separating our bonds and binding our separations.” Boundaries providing freedom from transparency, creating a state of privacy, have been found to enable the authenticity required for meaningful experimentation (Simmel, 1950), the generation of new ideas (Eysenck, 1995; Hargadon, 2003; Simonton, 2003; cf. Sutton and Kelley, 1997), the maintenance of expertise attached to professional identity (Anteby, 2008), the capacity to trust others (Scheler, 1957), and the maintenance of long-term meaningful relationships and group associations (Mill, 1859; Simmel, 1950; Schwarz, 1968; Ingham, 1978; Kanter and Khurana, 2009), all behaviors associated with effective knowledge sharing (Edmondson, 2002) and “enabling” operational control (Hackman and Wageman, 1995; Adler and Borys, 1996). In this body of literature, privacy is the solution for those who identify a panopticon-like awareness of being visible (Foucault, 1977: 201–203) to be a problem. 

While use of the term “privacy” has not diffused broadly from the jurisprudential and philosophical literatures into the management sciences, a number of pivotal studies in organizational behavior have touched on the value of privacy without using the term itself. Rich discussions of the value of boundaries in both the sociological and networks literatures (for reviews, see Lamont and Molnar, 2002; Lazer and Friedman, 2007, respectively) suggest that forming productive individual and group identity requires four components, the first of which is “a boundary separating me from you or us from them” (Tilly, 2003) or what the PrecisionMobile operators called the “privacy we need to get our work done” (emphasis added). Although a highly productive literature has emerged on the management of knowledge across such boundaries (e.g., Carlile, 2004; O’Mahony and Bechky, 2008), few management scholars have focused on the strategic placement or creation of the boundaries themselves. Yet if privacy breeds authenticity, and authenticity enables engaging in hidden yet beneficial behavior, the creation of organizational boundaries should be of great strategic importance. “Boundary objects” are required for defining different social worlds (Star and Griesemer, 1989), can take the form of “material objects, organizational forms, conceptual spaces or procedures” (Lamont and Molnar, 2002: 180), and have been found to be important for supporting coordinated action and common knowledge at the group level (Chwe, 2001). There are, of course, group and individual boundary objects that do not involve physical perimeters or “borders” (e.g., race, gender, status), but those that do rely on some degree of visibility-based privacy (e.g., a wall, fence, cubicle) to demarcate “us” from “them” (Lamont and Molnar, 2002). Where such boundaries are permitted, and how permeable they are, has profound implications for one’s feeling of privacy and, therefore, behavior. Where such boundaries are prohibited, at least in the case of Precision, the implication appears to be rampant, creative, and costly hiding of deviant behavior, precisely the kind of hiding that transparency is theorized to avoid.

