Tuesday, December 17, 2019

We overestimate our performance when anticipating that we will try to persuade others, and we bias our information search toward more positive feedback when given the opportunity; this increase in confidence has a positive effect on our persuasiveness


Strategically delusional. Alice Soldà, Changxia Ke, Lionel Page & William von Hippel. Experimental Economics, Dec 16 2019. https://link.springer.com/article/10.1007/s10683-019-09636-9

Abstract: We aim to test the hypothesis that overconfidence arises as a strategy to influence others in social interactions. To address this question, we design an experiment in which participants are incentivized either to form accurate beliefs about their performance at a test, or to convince a group of other participants that they performed well. We also vary participants’ ability to gather information about their performance. Our results show that participants are more likely to (1) overestimate their performance when they anticipate that they will try to persuade others and (2) bias their information search in a manner conducive to receiving more positive feedback, when given the chance to do so. In addition, we find suggestive evidence that this increase in confidence has a positive effect on participants’ persuasiveness.

Keywords: Overconfidence · Motivated cognition · Self-deception · Persuasion · Information sampling · Experiment

4 General discussion and conclusion

In the current research we tested the hypothesis that overconfidence emerges as a
strategy to gain an advantage in social interactions. In service of this goal, we conducted
two studies in which we manipulated participants’ anticipation of strategic
interactions as well as the type of feedback they received.
    In our design, participants undertook both a Persuasion Task and an Accuracy
Task in all treatments. By switching the order of these tasks, we could manipulate
participants’ goals (being accurate vs. persuasive). Because participants were not aware of
the nature of the second task when undertaking the first, they could not engage
in a cost-benefit analysis between the two goals. However, we
acknowledge that this design choice has its own limitations.
First, self-deception might be possible between the Accuracy Task and the Persuasion
Task in the Accuracy-first treatments. Because we did not elicit beliefs again after
the Persuasion Task in the Accuracy-first treatments, we cannot rule out this possibility
directly. However, there is empirical evidence showing that the way people
interpret information tends to be sticky. For example, Chambers and Reisberg
(1985) presented participants with the famous duck/rabbit figure, which could be
interpreted as either of these animals. They found that once participants arrived at
an initial interpretation that it was a duck, they were unable to re-interpret it as a
rabbit without seeing it again. In the same manner, our hypothesis rests
on the expectation that once an (accurate) belief is formed, it is “on record” and
therefore cannot be consciously ignored by participants (even if they have incentives
to form overconfident beliefs in the next task). Hence, without additional information, participants
would not be able to easily re-construe their beliefs in our Accuracy-first
treatment after the Accuracy Task. In contrast, if a participant does not have a prior
accurate belief “on record”, it may be easier to interpret information in a self-serving
manner. Similarly, we conjecture that once an inflated belief has been formed
through motivated reasoning in the Persuasion-first treatment, it is also hard to “debias”
it, even though the subsequent Accuracy Task required participants to form beliefs that are
as accurate as possible. There is no obvious reason to believe that participants were able to
easily inflate their beliefs in the Persuasion Task (after forming well-calibrated beliefs in
the Accuracy Task), yet unable to easily deflate the overconfident beliefs (formed
in the Persuasion Task) in the subsequent Accuracy Task. Our experimental results
can be seen as justifying our assumptions ex post, because we would not have found
any treatment difference in the belief elicitations if participants had been able to adjust their
beliefs flexibly depending on the incentives they were given in each task.
Second, the process of writing an essay in the Persuasion Task could lead participants
to form an inflated self-assessment of their performance, even in the absence of
any self-deception motives. While there is evidence showing that self-introspection
may lead to overconfident self-assessments (Wilson and LaFleur 1995), Sedikides
et al. (2007) find that written self-reflection actually decreases self-enhancement
biases and increases accuracy. If the writing task made it harder for participants
to form inflated beliefs, the treatment effect identified in the Self-Chosen Information
condition might be underestimated. On the contrary, if the writing task helped
them form inflated beliefs, the effect size measured in the Self-Chosen Information
condition might be overestimated. However, if the Persuasion Task itself inflated
self-beliefs, we should have observed a significant treatment (Persuasion-first vs.
Accuracy-first) difference in overconfidence in the No Information condition. The
fact that this is not the case can be seen as tentative evidence that, even if the Persuasion
Task itself could inflate self-beliefs, this effect is unlikely to be large enough
to undermine the main effect we have identified in the Self-Chosen Information
condition.
Our findings from both studies support the idea that self-beliefs respond to variations
in the incentives for overconfidence. In our experiments, participants were put
in situations where they could receive higher payoffs from persuading other players
that they performed well in a knowledge test. We observe that their confidence
in their performance increased in such situations. Consistent with the interpretation
that overconfidence is induced by strategic motivated reasoning, we observe that
when given the freedom to choose their feedback, participants who were motivated
to persuade chose to receive more positive information. This choice, in turn, helped
them form more confident beliefs about their performance. Participants holding
higher beliefs tend to be more successful at using their written essays to persuade the
reviewers that they did well, particularly in the laboratory study.
These results support the hypothesis that people tend to be more overconfident
when they expect that confidence might lead to interpersonal gains, which helps to
explain why overconfidence is so prevalent despite the obvious costs of having miscalibrated
beliefs. Future research should investigate whether the type of interpersonal
advantage observed in the context of this experiment can also be observed in
different strategic contexts (e.g. negotiation, competition).
