Thursday, April 28, 2022

"Rather than discriminating against women who run for office, voters on average appear to reward women"; "the average effect of being a woman (relative to a man) is a gain of approximately 2 percentage points."

What Have We Learned about Gender from Candidate Choice Experiments? A Meta-Analysis of Sixty-Seven Factorial Survey Experiments. Susanne Schwarz and Alexander Coppock. The Journal of Politics, Vol. 84, No. 2, April 2022.

Abstract: Candidate choice survey experiments in the form of conjoint or vignette experiments have become a standard part of the political science toolkit for understanding the effects of candidate characteristics on vote choice. We collect 67 such studies from all over the world and reanalyze them using a standardized approach. We find that the average effect of being a woman (relative to a man) is a gain of approximately 2 percentage points. We find some evidence of heterogeneity across contexts, candidates, and respondents. The difference is somewhat larger for white (vs. black) candidates and among survey respondents who are women (vs. men) or, in the US context, identify as Democrats or Independents (vs. Republicans). Our results add to the growing body of experimental and observational evidence that voter preferences are not a major factor explaining the persistently low rates of women in elected office.

Almost half of the participants failed the Turing test, being unable to convince the other side that they were human

Would You Pass the Turing Test? Influencing Factors of the Turing Decision. Adrienn Ujhelyi, Flora Almosdi, Alexandra Fodor. Psychological Topics, Vol. 31, No. 1, April 2022.

Abstract: We aimed to contribute to the emerging field of human-computer interaction by revealing some of the cues we use to distinguish humans from machines. Perhaps the most well-known method of inquiry in artificial intelligence is the Turing test, in which participants have to judge whether their conversation partner is a machine or a human. In two studies, we used the Turing test as an opportunity to reveal the factors influencing Turing decisions. In our first study, we created a situation similar to a Turing test: a written, online conversation. We hypothesized that if the other entity expresses a view different from ours, we might think they are a member of another group, in this case, the group of machines. We measured the attitudes of the participants (N = 100) before the conversation, then compared the attitude difference between partners to their Turing decision. Our results showed a significant relationship between the Turing decision and the attitude difference of the conversation partners: the greater the difference in attitudes, the more likely participants were to judge the other to be a machine. With our second study, we wanted to widen the range of variables and to measure their effects in a more controlled, systematic way. In this case, our participants (N = 632) were exposed to an excerpt of a manipulated Turing test transcript. The dialogues were modified based on 8 variables: humour, grammar, activity, similarity of attitude, coherence, leading the conversation, emoji use, and the appearance of the interface. Our results showed that logical answers, proper grammar, and similar attitudes predicted the Turing decisions best. We also found that more people considered mistaking a computer for a human a bigger problem than the reverse, and this choice was greatly influenced by the participants' negative attitudes towards robots. Besides contributing to our understanding of our attitudes toward machines, our study also sheds light on the consequences of dehumanization.

Keywords: Turing test, artificial intelligence, attitude, social psychology