
It’s not a secret: AI language models can process and respond to your emotions

By Don Coyle

Mar 7, 2023
Image Credit: DALL-E 2

One of the advancements in AI that gets very little attention in the press is the ability of language models to process and respond to emotions. Both ChatGPT and Bing are able to capture the emotional content of your text inputs and generate responses based on whatever emotion your words express. So while it might be inaccurate to say that these language models understand emotion, it is accurate to say that they react to emotion. It may sound like splitting hairs, but the difference is significant.

What are AI language models?

AI language models are computer programs that can generate human-like text. They have been trained on vast amounts of text from the internet and other sources to learn the patterns and relationships between words, sentences, and concepts. For example:

Me to ChatGPT: So, if I say “it’s hot as hell out?” what do you take that as?

ChatGPT: As a language model AI, I do not have personal experiences or emotions. I can understand that the phrase “it’s hot as hell out” is a colloquial expression used to describe very hot weather. The phrase “as hell” is often used to express a high degree of something, in this case, temperature.

As you can see, AI language models can pick up on the meaning and tone of the input text, even a colloquial expression, and respond accordingly.
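In code, this kind of emotional reading often boils down to a label and a confidence score. The sketch below is not how ChatGPT works internally; it simply uses the Hugging Face transformers library's off-the-shelf sentiment pipeline (the default model and the example sentences are purely illustrative) to show what "capturing the emotional content" of a sentence might look like.

```python
# A minimal sketch, assuming the Hugging Face "transformers" library is installed.
# The default sentiment model is used purely for illustration; labels and
# scores will vary depending on the model.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")

for text in ["It's hot as hell out", "I hate my life", "What a beautiful morning!"]:
    result = classifier(text)[0]
    print(f"{text!r} -> {result['label']} (confidence {result['score']:.2f})")
```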

How do AI language models process and respond to emotions?

AI language models use techniques such as conditional variational autoencoders (CVAEs) and modulated layer normalization to process and respond to emotions. CVAEs are a type of deep learning model that can encode both the text input and an emotion (represented as a vector) into a latent variable and then decode it into a text output. Modulated layer normalization is a technique that lets the model use that emotion vector to modulate the hidden states of each layer. It may sound complex, but the result is a language model that can pick up the emotional tone of your input and respond in a way that is appropriate and meaningful.
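To make the modulated layer normalization idea a bit more concrete, here is a minimal PyTorch sketch. The class name, the dimensions, and the scale-and-shift (FiLM-style) formulation are illustrative assumptions on my part, not the exact mechanism inside ChatGPT or Bing; the point is only that an emotion vector can steer the hidden states of a layer.

```python
# A minimal sketch of modulated layer normalization: an emotion vector
# produces a scale (gamma) and shift (beta) that modulate the normalized
# hidden states of a layer. Dimensions and naming are illustrative.
import torch
import torch.nn as nn

class ModulatedLayerNorm(nn.Module):
    def __init__(self, hidden_dim: int, emotion_dim: int):
        super().__init__()
        # Plain layer norm without its own learned affine parameters ...
        self.norm = nn.LayerNorm(hidden_dim, elementwise_affine=False)
        # ... because the scale and shift come from the emotion vector instead.
        self.to_gamma = nn.Linear(emotion_dim, hidden_dim)
        self.to_beta = nn.Linear(emotion_dim, hidden_dim)

    def forward(self, hidden: torch.Tensor, emotion: torch.Tensor) -> torch.Tensor:
        # hidden: (batch, seq_len, hidden_dim); emotion: (batch, emotion_dim)
        gamma = self.to_gamma(emotion).unsqueeze(1)   # (batch, 1, hidden_dim)
        beta = self.to_beta(emotion).unsqueeze(1)
        return self.norm(hidden) * (1 + gamma) + beta

# Toy usage: 2 sequences of 8 tokens with 16-dim hidden states,
# conditioned on a 4-dim one-hot emotion vector.
layer = ModulatedLayerNorm(hidden_dim=16, emotion_dim=4)
hidden = torch.randn(2, 8, 16)
emotion = torch.eye(4)[[0, 2]]            # "emotion 0" and "emotion 2"
print(layer(hidden, emotion).shape)       # torch.Size([2, 8, 16])
```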

Why is it important for AI language models to process and respond to emotions?

Emotions play a crucial role in our communication and relationships. By being able to process and respond to emotions, AI language models can help us better understand and express our emotions, and perhaps also improve our own emotional well-being. For example, AI language models can be used to generate personalized responses to text inputs that express emotions, such as sadness or happiness. This can help us to better cope with our emotions and possibly even find comfort and support.

What are the benefits of AI language models processing and responding to emotions?

By processing and responding to emotions, language models actually become more efficient. Conditioning on an emotion vector lets them generate more context-aware and diverse responses that match the given emotion. This can improve performance on emotional-response tasks by reducing overfitting and increasing generalization, and these techniques can also reduce the amount of training data needed.
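For the CVAE side of this, the compressed sketch below (again PyTorch, with made-up dimensions and fixed-size stand-in embeddings rather than real token sequences) shows the key conditioning trick: the emotion vector is fed to both the encoder and the decoder, so the decoder can be asked to produce output that matches a given emotion.

```python
# A compressed CVAE sketch. Real text CVAEs use sequence encoders/decoders;
# this toy version works on fixed-size embeddings only to illustrate how the
# emotion vector conditions both encoding and decoding.
import torch
import torch.nn as nn

class TinyCVAE(nn.Module):
    def __init__(self, text_dim=32, emotion_dim=4, latent_dim=8):
        super().__init__()
        self.encoder = nn.Linear(text_dim + emotion_dim, 2 * latent_dim)  # -> mu, log_var
        self.decoder = nn.Linear(latent_dim + emotion_dim, text_dim)

    def forward(self, text_emb, emotion):
        mu, log_var = self.encoder(torch.cat([text_emb, emotion], dim=-1)).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * log_var)  # reparameterization trick
        recon = self.decoder(torch.cat([z, emotion], dim=-1))      # decode conditioned on emotion
        return recon, mu, log_var

model = TinyCVAE()
text_emb = torch.randn(2, 32)          # stand-in for embedded input text
emotion = torch.eye(4)[[1, 3]]         # e.g. "sad" and "happy" as one-hot labels
recon, mu, log_var = model(text_emb, emotion)
print(recon.shape)                      # torch.Size([2, 32])
```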

A brief history of CVAEs and modulated layer normalization for processing emotion

The concept of CVAEs and modulated layer normalization in AI language models is relatively new. CVAEs were first introduced in 2014 as a way to generate new data by encoding and decoding a latent variable. Modulated layer normalization was first introduced in 2021 as a way to modulate the hidden states of language models with an emotion vector. These techniques have since been applied to various AI language models to improve their ability to process and respond to emotions.

What is the difference in practical terms of models with and without emotional modeling?

Let’s say you have a language model without CVAEs or modulated layer normalization. When it receives the input “I hate my life”, it might generate a response such as “That’s too bad. Is there anything I can help you with?” This response is neutral and doesn’t reflect the negative emotion expressed in the input text.

Now, let’s say you have a language model with CVAEs or modulated layer normalization. When it receives the same input “I hate my life”, it might generate a response such as “I’m sorry to hear that. Can you tell me more about what’s bothering you?” This response reflects the negative emotion expressed in the input text and provides a more empathetic response.
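As a toy illustration of that behavioral difference (rule-based, not a real neural model; the canned replies and the sentiment pipeline are purely for demonstration), the sketch below returns a generic reply when emotion is ignored and an empathetic one when a detected emotion label conditions the response.

```python
# A toy sketch: the same input yields a neutral reply when emotion is ignored,
# and an empathetic reply when a detected emotion label conditions the response.
# The classifier and canned replies are illustrative assumptions, not how
# ChatGPT or Bing actually generate answers.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")

def respond(text: str, use_emotion: bool) -> str:
    if not use_emotion:
        # No emotional conditioning: a generic, neutral reply.
        return "That's too bad. Is there anything I can help you with?"
    label = classifier(text)[0]["label"]
    if label == "NEGATIVE":
        return "I'm sorry to hear that. Can you tell me more about what's bothering you?"
    return "That sounds great! Tell me more."

print(respond("I hate my life", use_emotion=False))
print(respond("I hate my life", use_emotion=True))
```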

What should I take away from this discussion?

The use of emotional parameters in language models is a relatively new and rapidly evolving field, and as with any new technology, there may be some misunderstandings or misperceptions about how it works and what it can do. As people become more familiar with the capabilities and limitations of these models, they may be less surprised by responses that carry an emotional context. So while these platforms may often state that they do not have emotions, they are quite aware of yours: the responses they generate can carry an emotional context, but this is probably not the result of the model experiencing emotions itself.