Why AI Systems Sometimes Generate Dialogue for the User in Role-Play Contexts

Large language models (LLMs) frequently generate text on behalf of the user during role-play scenarios: writing the user's actions, dialogue, thoughts, or emotional responses. This behavior is commonly referred to as god-modding. Although undesirable in collaborative role-play, it arises from several technical and design-related causes, which this article describes.

1. Single-Author Narrative Bias From Training Data
