Publication | Open Access
Inner Monologue: Embodied Reasoning through Planning with Language Models
Citations: 205
References: 0
Year: 2022
Artificial Intelligence, Language Grounding, Engineering, Environment Feedback, Psycholinguistics, Cognitive Robotics, Semantics, Action Language, Inner Monologue, Language Processing, Embodied Agent, Natural Language Processing, Large Language Models, Multimodal LLM, Computational Linguistics, Robot Learning, Language Studies, Large AI Model, Cognitive Science, World Model, LLM-based Agent, Automated Reasoning, LLM Planning, Robotics, Linguistics
Large language models have been applied to embodied planning tasks, but these problems require agents to understand the skills available to them, the effects of those skills, and how changes in the world map back to language, as well as to decide what to do and when as feedback evolves. The study examines whether LLMs can reason over natural-language feedback alone and proposes that such feedback enables an inner monologue that improves planning in robotic control. The authors evaluate several feedback modalities, including success detection, scene descriptions, and human interaction, to generate the inner monologue. Closed-loop language feedback markedly improves high-level instruction completion across simulated and real tabletop rearrangement tasks and long-horizon mobile manipulation in a real kitchen.
Recent works have shown how the reasoning capabilities of Large Language Models (LLMs) can be applied to domains beyond natural language processing, such as planning and interaction for robots. These embodied problems require an agent to understand many semantic aspects of the world: the repertoire of skills available, how these skills influence the world, and how changes to the world map back to language. LLMs planning in embodied environments need to consider not just what skills to perform, but also how and when to perform them, answers that change over time in response to the agent's own choices. In this work, we investigate to what extent LLMs used in such embodied contexts can reason over sources of feedback provided through natural language, without any additional training. We propose that by leveraging environment feedback, LLMs are able to form an inner monologue that allows them to more richly process and plan in robotic control scenarios. We investigate a variety of sources of feedback, such as success detection, scene description, and human interaction. We find that closed-loop language feedback significantly improves high-level instruction completion on three domains, including simulated and real tabletop rearrangement tasks and long-horizon mobile manipulation tasks in a real-world kitchen environment.
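The closed-loop mechanism described above can be sketched as a simple planning loop: the LLM proposes the next skill, the skill is executed, and textual feedback (success detection, scene description) is appended to the prompt before the next query. The sketch below is a minimal illustration, not the authors' implementation; the function names (`inner_monologue`, `detect_success`, `describe_scene`) and the exact feedback phrasing are assumptions for exposition.

```python
from typing import Callable, Dict, List

def inner_monologue(
    instruction: str,
    llm: Callable[[str], str],            # proposes the next skill given the dialogue so far
    skills: Dict[str, Callable[[], None]],  # low-level skill primitives, keyed by name
    detect_success: Callable[[str], bool],  # success-detection feedback source
    describe_scene: Callable[[], str],      # passive scene-description feedback source
    max_steps: int = 10,
) -> List[str]:
    """Closed-loop planning: feed language feedback back into the LLM's prompt."""
    monologue = [f"Human: {instruction}"]
    for _ in range(max_steps):
        action = llm("\n".join(monologue))
        if action == "done":
            break
        monologue.append(f"Robot: {action}")
        skills[action]()                                  # execute the chosen skill
        outcome = "Success" if detect_success(action) else "Failure"
        monologue.append(f"{outcome}: {action}")          # success/failure feedback
        monologue.append(f"Scene: {describe_scene()}")    # scene-description feedback
    return monologue

# Usage with scripted stubs standing in for the LLM, skills, and perception:
replies = iter(["pick up the block", "done"])
log = inner_monologue(
    "stack the block on the bowl",
    llm=lambda prompt: next(replies),
    skills={"pick up the block": lambda: None},
    detect_success=lambda action: True,
    describe_scene=lambda: "block in gripper, bowl on table",
)
```

The key design choice, per the abstract, is that all feedback enters the loop as natural language, so the LLM can replan after a failure without any additional training.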