The article Role play with large language models, from Nature, focuses on establishing a conceptual framework for thinking about AI dialogue agents. Steering clear of anthropomorphism, which over-humanizes these AI entities, the article underscores their role-playing and simulative capabilities.
“The dialogue agent is more like a performer in improvisational theatre than an actor in a conventional scripted play.” (p494)
Building on this foundation, I want to look into the nature of AI-driven improv in interactive digital narratives.
AI dialogue agents as characters
The Nature article inspired me to consider AI characters as performers in a jazz ensemble, spontaneously reacting rather than adhering to a musical score.
Unlike traditional actors, who infuse personal dynamics into lines crafted by a screenwriter, AI characters respond in real time with unscripted reactions, creating fluid and often unpredictable conversations with users (or players). That unpredictability is the quality that makes talking with humans both enjoyable and frustrating. And the unexpected remark tucked into a conversation that leads elsewhere is what makes for great stories.
The personalities and motivations of these AI characters are not the product of an author’s design but rather emerge organically from the computational processing of vast textual data through large language models (LLMs). Motivations? Do AI characters have agency? Do they have motivation? That’s the existential fear of many, though my focus is on fictional settings designed to entertain audiences.
Most discussions around AI safety focus on chatbots and their role in social and professional interactions. The Nature article argues that ascribing human-like attributes to AI dialogue agents is a metaphor that misguides people into anthropomorphizing the output of large language models.
Folk psychological terms typically used for humans, vague terms like “know”, “understand”, and “think”, are not helpful when applied to AI. These dialogue agents are not human but represent an “essential otherness”.
We can never really know an AI dialogue agent. But then, we can never fully know each other as humans, even the people we think we know.
We will know we have crossed the chasm when personified AI bots claim their rights to identity.
What does identity mean for an AI dialogue agent?
An aspect I found most interesting from the Nature article is that a dialogue agent is not a single character but formed from a cast of characters the language model has encountered in the broad expanse of its training data.
Critical question:
What unique qualities do AI characters bring to narrative experiences?
The article advises against anthropomorphizing AI agents in everyday contexts to prevent the illusion that they possess self-awareness or understanding. But how does this apply to fictional worlds, where the player knowingly enters an artificial world and expects the unexpected? How do we frame AI-driven narratives where players are intrigued and delightfully surprised by the unexpected turns of a conversation that lead them down an unseen branching path?
In everyday encounters with AI dialogue agents, there is widespread caution against the Eliza effect, where people mistakenly perceive AI agents as having human-like emotions. Yet that illusion is exactly what makes fictional worlds immersive.
Let’s return to the title of the Nature article: Role play with large language models.
…the concept of role play is essential to understanding the behaviour of dialogue agents.
As children, we role play. Acting out our fantasies is a treasured part of childhood. As adults in the workplace, we take on roles. In society, we take on roles, though we often wish we were in another. That’s where entertainment offers an escape.
Control, Ethics, & Storytelling
A balance must be struck between uncontrolled experiences and ethical storytelling. AI stories can be fine-tuned, which underscores the need for open source language models. In a fictional AI world, do you want the player to have an uncontrolled experience? What level of control safeguards players from psychological harm? Or should there be no boundary at all? What are the ethics of storytelling in AI-driven interactive stories?
In these imaginary experiences, the storyteller is the modeler: the person who fine-tunes the LLM to create specific, AI-powered fictional narratives.
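To make the storyteller-as-modeler idea concrete, here is a minimal sketch of what curating such a fine-tune might look like in Python. The character, the lines, and the chat-style JSONL layout are all illustrative assumptions (the layout mirrors a common format for fine-tuning open chat models, not any one vendor's requirement):

```python
import json

# Hypothetical persona: "Mirel", a character invented for this sketch.
# The storyteller authors example exchanges; a fine-tuning pipeline for
# an open-source chat model could then learn the character's voice.

def persona_example(system, player_line, character_line):
    """One chat-format training example (system / user / assistant)."""
    return {
        "messages": [
            {"role": "system", "content": system},
            {"role": "user", "content": player_line},
            {"role": "assistant", "content": character_line},
        ]
    }

system_prompt = (
    "You are Mirel, a cryptic lighthouse keeper in a mystery game. "
    "Answer in character; improvise, but never break the fiction."
)

examples = [
    persona_example(system_prompt,
                    "Who lit the lamp last night?",
                    "The lamp lights itself when the fog asks nicely."),
    persona_example(system_prompt,
                    "Can I trust you?",
                    "Trust the tide. It lies on a schedule, at least."),
]

# Serialize to JSONL, a typical on-disk format for fine-tuning data.
jsonl = "\n".join(json.dumps(e) for e in examples)
print(jsonl.splitlines()[0][:40])
```

The point is not the file format but the authorship: the creative act shifts from scripting every branch to curating examples that shape how the character improvises.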
The Creative Interface
We need to develop tools that move fine-tuning away from the technical work of adjusting weights and code in Python. These tools should empower creatives from fields like film, theater, and creative writing to visually shape the story parameters of interactive worlds. As we adapt to storyboarding AI-driven narratives, we can expect the emergence of a new genre of software specifically designed to facilitate and enhance the creative process of constructing these complex, interactive stories.
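One way to imagine the layer such tools would target: the creative fills in a character sheet of story parameters, and the software compiles it into model instructions behind the scenes. This is a hypothetical sketch; the field names, the `CharacterSheet` class, and the compile step are my own assumptions, not an existing tool:

```python
from dataclasses import dataclass, field

@dataclass
class CharacterSheet:
    """Story parameters a non-programmer might fill in visually."""
    name: str
    archetype: str
    motivation: str
    boundaries: list = field(default_factory=list)  # ethical guardrails

    def to_system_prompt(self) -> str:
        # Compile the creative's parameters into plain model instructions.
        lines = [
            f"Play {self.name}, a {self.archetype}.",
            f"Motivation: {self.motivation}.",
        ]
        for rule in self.boundaries:
            lines.append(f"Never {rule}.")
        return " ".join(lines)

# Example sheet (invented character, same as a writer might sketch).
sheet = CharacterSheet(
    name="Mirel",
    archetype="cryptic lighthouse keeper",
    motivation="protect the island's secret",
    boundaries=["describe graphic violence", "break character"],
)
print(sheet.to_system_prompt())
```

The design choice here is deliberate: the creative never touches weights or code, only named parameters, while the boundaries field is where the ethical safeguards discussed above become an explicit, inspectable part of the story's configuration.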
The Essence of AI Characters
The improvisational nature of AI dialogue fundamentally reshapes our understanding of character and narrative. These entities are not just characters in a narrative but dynamic participants, co-creating stories with human players. This unpredictable interaction elevates fictional worlds, making each narrative journey unique and reflective of the complex, emergent nature of conversation. As we continue to explore and refine the capabilities of AI, we step further into a world where the lines between creator and creation blur, opening up endless possibilities for storytelling in an evolving landscape of interactive digital narratives.