Friday, March 24, 2023

People may not be able to tell whether they are envied by another person at a particular moment, but they do know who the notoriously envious ones are among people they have known for a longer time.

Lange, Jens, Birk Hagemeyer, Thomas Lösch, and Katrin Rentzsch. 2019. “Accuracy and Bias in the Social Perception of Envy.” OSF Preprints, June 16. doi:10.31219/osf.io/8jc7x.

Abstract: Research converges on the notion that when people feel envy, they disguise it towards others. This implies that a person’s envy in a given situation cannot be accurately perceived by peers, as envy lacks a specific display that could be used as a perceptual cue. In contrast to this reasoning, research supports that envy contributes to the regulation of status hierarchies. If envy threatens status positions, people should be highly attentive to identify enviers. The combination of the two led us to expect that (a) state envy is difficult to accurately perceive in unacquainted persons and (b) dispositional enviers can be accurately identified by acquaintances. To investigate these hypotheses, we used actor-partner interdependence models to disentangle accuracy and bias in the perception of state and trait envy. In Study 1, 436 unacquainted dyad members competed against each other and rated their own and the partner’s state envy. Perception bias was significantly positive, yet perception accuracy was non-significant. In Study 2, 502 acquainted dyad members rated their own and the partner’s dispositional benign and malicious envy as well as trait authentic and hubristic pride. Accuracy coefficients were positive for dispositional benign and malicious envy and robust when controlling for trait authentic and hubristic pride. Moreover, accuracy for dispositional benign envy increased with the depth of the relationship. We conclude that enviers might be identifiable but only after extended contact and discuss how this contributes to research on the ambiguous experience of being envied.
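
A minimal sketch of how an actor-partner style model can separate these two quantities, in my own simplified notation rather than the authors’ exact specification: for dyad members $i$ and $j$, regress each perceiver’s rating of the partner on both dyad members’ self-reports,

$\text{Perception}_{i \to j} = \beta_0 + \beta_{\text{acc}} \cdot \text{Self}_j + \beta_{\text{proj}} \cdot \text{Self}_i + \varepsilon_{ij}$

Here $\beta_{\text{acc}} > 0$ means ratings track the partner’s actual (self-reported) envy, which is the accuracy the authors test; $\beta_{\text{proj}}$ captures projection of one’s own envy onto the partner; and ratings that sit systematically above self-reports on average correspond to the positive perception bias found in Study 1.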


Whether intelligence can be achieved without any agency or intrinsic motivation is an important philosophical question; equipping LLMs with agency & intrinsic motivation is a fascinating & important direction for future work

Sparks of Artificial General Intelligence: Early experiments with GPT-4. Sébastien Bubeck et al. arXiv, March 22, 2023. https://arxiv.org/pdf/2303.12712.pdf

Abstract: Artificial intelligence (AI) researchers have been developing and refining large language models (LLMs) that exhibit remarkable capabilities across a variety of domains and tasks, challenging our understanding of learning and cognition. The latest model developed by OpenAI, GPT-4 [Ope23], was trained using an unprecedented scale of compute and data. In this paper, we report on our investigation of an early version of GPT-4, when it was still in active development by OpenAI. We contend that (this early version of) GPT-4 is part of a new cohort of LLMs (along with ChatGPT and Google’s PaLM for example) that exhibit more general intelligence than previous AI models. We discuss the rising capabilities and implications of these models. We demonstrate that, beyond its mastery of language, GPT-4 can solve novel and difficult tasks that span mathematics, coding, vision, medicine, law, psychology and more, without needing any special prompting. Moreover, in all of these tasks, GPT-4’s performance is strikingly close to human-level performance, and often vastly surpasses prior models such as ChatGPT. Given the breadth and depth of GPT-4’s capabilities, we believe that it could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system. In our exploration of GPT-4, we put special emphasis on discovering its limitations, and we discuss the challenges ahead for advancing towards deeper and more comprehensive versions of AGI, including the possible need for pursuing a new paradigm that moves beyond next-word prediction. We conclude with reflections on societal influences of the recent technological leap and future research directions.

---
For example, whether intelligence can be achieved without any agency or intrinsic motivation is an important philosophical question. Equipping LLMs with agency and intrinsic motivation is a fascinating and important direction for future work. With this direction of work, great care would have to be taken on alignment and safety per a system’s abilities to take autonomous actions in the world and to perform autonomous self-improvement via cycles of learning. We discuss a few other crucial missing components of LLMs next.
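
As a concrete illustration of what such an agency loop might look like, here is a minimal sketch in Python, assuming a hypothetical llm.generate interface and a crude curiosity-style intrinsic reward (prediction error). All names and the reward design are my own illustration, not the paper’s method:

# Hypothetical agent loop around an LLM. 'llm' and 'environment' are
# assumed objects: llm.generate(prompt) -> str, and
# environment.reset() / environment.step(action) -> str (an observation).
# Neither is an API from the paper.

def intrinsic_reward(predicted: str, observed: str) -> float:
    """Curiosity-style reward: larger when the observation surprises the agent."""
    observed_words = set(observed.split())
    overlap = len(set(predicted.split()) & observed_words)
    return 1.0 - overlap / max(len(observed_words), 1)  # 0 = fully predicted

def agent_loop(llm, environment, steps: int = 10):
    memory = []
    observation = environment.reset()
    for _ in range(steps):
        # The model first predicts the outcome, then chooses an action.
        prediction = llm.generate(f"Predict what happens next after: {observation}")
        action = llm.generate(f"Observation: {observation}\nChoose the next action:")
        observation = environment.step(action)
        # Intrinsic motivation: prefer episodes the model failed to predict.
        memory.append((action, observation, intrinsic_reward(prediction, observation)))
    # A self-improvement cycle would fine-tune on the most surprising episodes.
    return sorted(memory, key=lambda episode: -episode[2])

The point of the sketch is the structure, not the details: the model both predicts and acts, and surprise rather than an external task reward decides which episodes feed the next cycle of learning, which is exactly where the alignment and safety concerns above would bite.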