Thursday, May 2, 2019

Visual Deep Neural Networks: Trained with impoverished data and lacking adaptations to capacity limits such as attentional mechanisms, visual working memory, and compressed mental representations that preserve task-relevant abstractions

Comparing the Visual Representations and Performance of Humans and Deep Neural Networks. Robert A. Jacobs, Christopher J. Bates. Current Directions in Psychological Science, November 27, 2018. https://doi.org/10.1177/0963721418801342

Abstract: Although deep neural networks (DNNs) are state-of-the-art artificial intelligence systems, it is unclear what insights, if any, they provide about human intelligence. We address this issue in the domain of visual perception. After briefly describing DNNs, we provide an overview of recent results comparing human visual representations and performance with those of DNNs. In many cases, DNNs acquire visual representations and processing strategies that are very different from those used by people. We conjecture that there are at least two factors preventing them from serving as better psychological models. First, DNNs are currently trained with impoverished data, such as data lacking important visual cues to three-dimensional structure, data lacking multisensory statistical regularities, and data in which stimuli are unconnected to an observer’s actions and goals. Second, DNNs typically lack adaptations to capacity limits, such as attentional mechanisms, visual working memory, and compressed mental representations biased toward preserving task-relevant abstractions.
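To make the second conjecture more concrete, below is a minimal NumPy sketch (not taken from the paper; all names, dimensions, and the random projection are illustrative assumptions) of two capacity-limit adaptations the authors say standard DNNs typically lack: a soft-attention stage that concentrates processing on a goal-relevant summary of the input, and a low-dimensional bottleneck that forces a compressed representation.

```python
# Illustrative sketch only -- not the authors' model.
# Shows (1) soft attention over item features and (2) a compressive bottleneck.
import numpy as np

rng = np.random.default_rng(0)

def soft_attention(features: np.ndarray, query: np.ndarray) -> np.ndarray:
    """Weight feature vectors by softmax similarity to a query (a limited 'focus')."""
    scores = features @ query                      # (n_items,)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                       # attention distribution over items
    return weights @ features                      # (feature_dim,) attended summary

def bottleneck(x: np.ndarray, proj: np.ndarray) -> np.ndarray:
    """Project onto a few dimensions, discarding most detail (compression)."""
    return proj @ x                                # (k,) compressed code, k << feature_dim

# Toy scene: 8 items, each described by a 16-dimensional feature vector.
features = rng.normal(size=(8, 16))
query = rng.normal(size=16)                        # stands in for the observer's current goal

attended = soft_attention(features, query)         # one weighted summary instead of all items
proj = rng.normal(size=(4, 16))                    # arbitrary 4-d compression (hypothetical)
code = bottleneck(attended, proj)

print(attended.shape, code.shape)                  # (16,) (4,)
```

In this toy setup, the attention weights and the 4-dimensional code play the role of the capacity limits discussed in the abstract; the paper's argument is that building such constraints into DNNs may make them better psychological models, not that this particular implementation does so.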

Keywords: perception, vision, artificial intelligence, deep neural networks
