Friday, September 22, 2017

The Ugly Truth About Ourselves and Our Robot Creations: The Problem of Bias and Social Inequity

As a preventive measure against possible future complaints about bias, I am sharing this:

The Ugly Truth About Ourselves and Our Robot Creations: The Problem of Bias and Social Inequity. Ayanna Howard and Jason Borenstein. Science and Engineering Ethics, https://doi.org/10.1007/s11948-017-9975-2

Abstract: Recently, there has been an upsurge of attention focused on bias and its impact on specialized artificial intelligence (AI) applications. Allegations of racism and sexism have permeated the conversation as stories surface about search engines delivering job postings for well-paying technical jobs to men and not women, or providing arrest mugshots when keywords such as “black teenagers” are entered. Learning algorithms are evolving; they are often created from parsing through large datasets of online information while having truth labels bestowed on them by crowd-sourced masses. These specialized AI algorithms have been liberated from the minds of researchers and startups, and released onto the public. Yet intelligent though they may be, these algorithms maintain some of the same biases that permeate society. They find patterns within datasets that reflect implicit biases and, in so doing, emphasize and reinforce these biases as global truth. This paper describes specific examples of how bias has infused itself into current AI and robotic systems, and how it may affect the future design of such systems. More specifically, we draw attention to how bias may affect the functioning of (1) a robot peacekeeper, (2) a self-driving car, and (3) a medical robot. We conclude with an overview of measures that could be taken to mitigate or halt bias from permeating robotic technology.
