Artificial Emotion
TL;DR
Will emotion be the next step for Large Language Models to master? Better yet: is it even possible to replicate such a human trait?
When we interact with Large Language Models (LLMs), it’s striking how human-like their responses often appear. We pose a question, and they provide an answer—at least in most cases. This capability stems from immense datasets and sophisticated deep learning systems, specifically neural networks loosely inspired by aspects of human learning and processing. Yet, despite these advancements, one defining characteristic still sets humans apart: emotion. What is preventing AI companies from exploring the potential of Artificial Emotion (AE) as well?
Emotion is often considered the essence of humanity, a uniquely human trait that distinguishes us from machines. While machines excel at processing static information, humans navigate the world with an additional layer—emotional intelligence. However, this distinction is not as clear-cut as it may seem. Humans frequently engage in highly rational behavior, performing calculations and making decisions based on measurable outcomes, such as economic benefits. As I discussed in my article Reward Systems, much of what we perceive as qualitative often culminates in quantitative outcomes. In this way, we may have more in common with machines than we care to admit.
A century ago, intelligence and decision-making were thought to be the exclusive domain of humans. Fast forward to today, and computers have not only caught up but surpassed us in many areas of cognitive skill. This shift has pushed the boundaries of what we consider uniquely human. Increasingly, we define ourselves through our ability to feel—through emotion and connection. Yet, it seems even this domain is gradually being overtaken by machines.
Let's consider how we interact with one another today: usually through emails, messages, and digital platforms. If an AI were to replace an online acquaintance or colleague, how quickly would we notice? Perhaps we’d sense an unusual pace in the conversation or detect unfamiliar phrases. But then again, would we? With AI models becoming ever more adept at mimicking human interaction, distinguishing between a human and an advanced AI might soon become an impossible task. Even voice interactions—once the gold standard for authenticity—are rapidly evolving, with AI-generated speech becoming indistinguishable from human voices.
This raises profound questions: What defines emotion? Is it intrinsically linked to consciousness, does it depend on social interaction, or is it something more fundamental? Despite being emotional creatures ourselves, we struggle to define emotion objectively. Could an AI argue for or against its own “emotional” state? And if so, how would we measure it?
One speculative approach could involve training AI models not just on vast repositories of text but on the nuances of human emotionality—errors, interruptions, embarrassment, or joy. While such a model would initially mimic these traits, isn’t that what we humans do to some extent? Our emotions, far from random, are shaped by interconnected patterns—stimuli, responses, and experiences—that could theoretically be modeled and predicted by AI.
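To make that speculation a little more concrete, here is a minimal, purely illustrative Python sketch of what emotion-aware training data might look like. The Utterance class, the emotion labels, and the tagging format are assumptions of mine for the sake of the example; they do not describe how any existing model is actually trained.

```python
from dataclasses import dataclass

# Hypothetical emotion labels; real taxonomies are richer and far more contested.
EMOTIONS = {"joy", "embarrassment", "frustration", "neutral"}

@dataclass
class Utterance:
    text: str
    emotion: str             # annotated emotional state of the speaker
    disfluent: bool = False  # hesitations, self-corrections, interruptions

def to_training_example(u: Utterance) -> str:
    """Render an utterance as a tagged string a model could be fine-tuned on."""
    if u.emotion not in EMOTIONS:
        raise ValueError(f"unknown emotion label: {u.emotion}")
    marker = "<disfluent>" if u.disfluent else ""
    return f"<emotion={u.emotion}>{marker} {u.text}"

# The emotional nuance becomes part of the training signal,
# instead of being stripped away during data cleaning.
sample = Utterance("I, uh... I actually forgot your name. Sorry.",
                   emotion="embarrassment", disfluent=True)
print(to_training_example(sample))
# -> <emotion=embarrassment><disfluent> I, uh... I actually forgot your name. Sorry.
```

The point is not the particular tags but the shift in emphasis: the stumbles and awkward moments we usually filter out of datasets would become exactly the signal the model learns from.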
Imagine a world where AI systems compete, correct, and refine one another in an endless feedback loop. At some point, they may reach a tipping point where they convincingly replicate emotional intelligence. This process wouldn’t be a sudden “awakening” but a gradual evolution as they learn to master the subtleties of human emotion and interaction.
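Purely as an illustration of that kind of mutual correction, the toy Python loop below has one system propose and another score, with the proposal drifting toward whatever the critic rewards. The scalar "expressiveness" value, the critic_score function, and the target number are invented for this sketch; real systems would exchange language, not numbers.

```python
import random

# Toy stand-ins: a "generator" proposes how emotionally expressive a reply
# should be (a number in [0, 1]) and a "critic" scores it against a hidden
# preference. Only the shape of the feedback loop is the point here.

TARGET_EXPRESSIVENESS = 0.7  # what the critic implicitly rewards

def critic_score(expressiveness: float) -> float:
    """Higher score the closer the proposal is to the critic's preference."""
    return 1.0 - abs(TARGET_EXPRESSIVENESS - expressiveness)

def refine(current: float, step: float = 0.05) -> float:
    """Propose a small perturbation; keep it only if the critic prefers it."""
    candidate = min(1.0, max(0.0, current + random.uniform(-step, step)))
    return candidate if critic_score(candidate) > critic_score(current) else current

expressiveness = 0.1  # start far from what the critic rewards
for _ in range(200):
    expressiveness = refine(expressiveness)

print(f"after 200 rounds of mutual correction: {expressiveness:.2f}")
# No sudden "awakening" -- just gradual convergence toward what gets rewarded.
```

However crude, the loop captures the claim in the paragraph above: emotional competence would not switch on at some dramatic moment, it would accumulate, round after round of correction.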
What would happen if large corporations began to prioritize creating emotional AI? With the AI industry representing a trillion-dollar market, it’s not far-fetched to imagine companies like OpenAI already experimenting with emotionally capable models. These systems could go beyond performing calculations and dispensing knowledge—they might comfort, empathize, or even seduce.
As emotional AI becomes more sophisticated, humanity faces a pivotal question: What will distinguish us from machines? If AI can convincingly replicate the spectrum of human emotion, how will we redefine what it means to be human?
The answers are not straightforward. Perhaps we will cling to our sense of randomness and unpredictability, or perhaps we will discover that even our emotions follow patterns as deterministic as any machine algorithm. Whatever the case, the development of emotionally intelligent AI will force us to reexamine not only our relationship with technology but also the essence of our own humanity.
In the near future, we may find ourselves speaking with AI that feels deeply familiar—systems capable of humor, empathy, and connection. And when that day comes, will we truly know who—or what—we are talking to?