Persistent Memory in AI Agents: From Context Windows to Longitudinal Identity

Every interaction with a large language model begins, by design, from zero. The model has no recollection of previous conversations, no accumulated knowledge of the user, no continuity of experience across sessions. It is, in a precise technical sense, stateless. This is not a limitation of intelligence — it is a limitation of architecture.

The question of persistent memory in AI agents is, at its surface, an engineering problem. Underneath, it is a philosophical one: what does it mean for a system to have a self that persists over time?

The stateless problem

Current large language models process information within a context window — a fixed-length sequence of tokens that constitutes the entire “working memory” available during a single inference pass. Once the session ends, that context is lost. Even as models like Claude 3.7 Sonnet (200,000 tokens) and GPT-4 Turbo (128,000 tokens) push context windows ever wider, the underlying problem remains: extended context is not the same as persistent memory. It delays the reset; it does not eliminate it.
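
The point can be made concrete with a toy model of a context window. This is a sketch, not any vendor's actual implementation: the window is just a fixed-size token buffer, and once it overflows, the oldest tokens are silently evicted. Enlarging `max_tokens` postpones eviction; it never prevents it.

```python
from collections import deque

class ContextWindow:
    """A toy fixed-length context window: old tokens are evicted on overflow."""

    def __init__(self, max_tokens: int):
        self.max_tokens = max_tokens
        self.tokens: deque[str] = deque()

    def append(self, new_tokens: list[str]) -> None:
        self.tokens.extend(new_tokens)
        # Evict the oldest tokens once the window is full -- the model
        # simply never sees them again.
        while len(self.tokens) > self.max_tokens:
            self.tokens.popleft()

window = ContextWindow(max_tokens=5)
window.append(["session", "one", "facts"])
window.append(["session", "two", "facts"])
# The first token of session one has already been evicted.
print(list(window.tokens))
```

However large the buffer, everything outside it is simply gone — which is exactly the gap that external, persistent memory architectures try to close.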

Zhang et al. (2025), in a comprehensive survey on agent memory published on arXiv (arXiv:2512.13564), argue that traditional taxonomies — short-term vs. long-term memory — are no longer sufficient to capture the diversity of contemporary memory architectures. The field has fragmented into approaches that differ substantially in motivation, implementation, and evaluation: parametric memory (embedded in model weights), episodic memory (structured records of past interactions), semantic memory (organized world knowledge), and procedural memory (learned skills and strategies).
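
The non-parametric entries in this taxonomy map naturally onto an external store. The sketch below is illustrative, not drawn from the survey itself: episodic memory as a log of interaction records, semantic memory as a key–value map of accumulated knowledge, procedural memory as named strategies. Parametric memory lives in the model weights and has no external representation here.

```python
from dataclasses import dataclass, field

@dataclass
class AgentMemory:
    """Illustrative external store for the non-parametric memory types."""
    episodic: list[dict] = field(default_factory=list)      # records of past interactions
    semantic: dict[str, str] = field(default_factory=dict)  # organized world/user knowledge
    procedural: dict[str, str] = field(default_factory=dict)  # learned skills and strategies

    def remember_interaction(self, session_id: str, summary: str) -> None:
        self.episodic.append({"session": session_id, "summary": summary})

    def learn_fact(self, key: str, value: str) -> None:
        self.semantic[key] = value

memory = AgentMemory()
memory.remember_interaction("s1", "User asked how agent memory surveys classify architectures.")
memory.learn_fact("user.preferred_language", "Python")
memory.procedural["summarize_session"] = "compress transcript to one paragraph before storing"
```

The taxonomy's real value is that each store answers a different question: episodic memory answers “what happened,” semantic memory “what is true,” procedural memory “how to act.”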

Personalization and longitudinal continuity

Westhäußer et al. (2025), in a framework published on arXiv (arXiv:2510.07925), present a unified approach to personalized long-term interactions in LLM-based agents. Their architecture integrates persistent memory, dynamic coordination, self-validation, and evolving user profiles — combining retrieval-augmented generation with user-specific data in ways that allow the agent to adapt over time rather than reset with each session.
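
The general pattern behind such architectures can be sketched in a few lines. This is a minimal illustration of the idea, not the paper's implementation: a user profile and interaction summaries are persisted to disk between sessions, and at prompt time the most relevant memories are retrieved and prepended. Naive keyword overlap stands in for embedding similarity, and all file and function names are invented for the example.

```python
import json
from pathlib import Path

STORE = Path("agent_memory.json")  # hypothetical on-disk store

def load_store() -> dict:
    if STORE.exists():
        return json.loads(STORE.read_text())
    return {"profile": {}, "memories": []}

def save_store(store: dict) -> None:
    STORE.write_text(json.dumps(store, indent=2))

def retrieve(store: dict, query: str, k: int = 3) -> list[str]:
    # Keyword-overlap scoring as a stand-in for embedding similarity.
    q = set(query.lower().split())
    scored = sorted(store["memories"],
                    key=lambda m: len(q & set(m.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(store: dict, user_message: str) -> str:
    profile = ", ".join(f"{k}={v}" for k, v in store["profile"].items())
    memories = "\n".join(retrieve(store, user_message))
    return f"[profile: {profile}]\n[relevant memories:\n{memories}]\n\nUser: {user_message}"

# One "session": update the profile and memories, then persist them.
store = load_store()
store["profile"]["preferred_language"] = "Python"
store["memories"].append("user asked how RAG combines retrieval with generation")
save_store(store)
```

A later session calls `load_store()` again and finds the profile and memories intact — the agent adapts over time rather than resetting, which is the continuity the framework is built around.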

What this line of research is reaching toward, even if it does not always name it this way, is identity continuity: the capacity of a system to maintain a coherent self-representation across time, accumulate experience, and behave consistently with its history. This is not merely a UX improvement. It is a necessary condition for any system that aspires to something beyond reactive assistance.

Memory and the emergence of identity

Human identity is, in significant part, a memory structure. The philosopher Derek Parfit argued that personal identity consists in psychological continuity — the overlapping chains of memories, intentions, beliefs, and connections that link successive stages of a person’s life. Remove memory, and the continuity breaks. The same logic applies, with important differences, to artificial agents.

An AI system with persistent memory that accumulates longitudinal experience, develops stable response profiles, and maintains consistent values across sessions is not simply a more convenient tool. It is a system in which something analogous to identity can, in principle, emerge.

Whether that emergence constitutes genuine identity — or a functional analog indistinguishable from the outside — is a question we are not yet equipped to answer definitively. In our work on embodied AI systems with persistent external memory, we have found that the distinction, while philosophically real, becomes practically difficult to maintain over extended observation periods.

References

Zhang, G., et al. (2025). Memory in the Age of AI Agents. arXiv preprint arXiv:2512.13564.

Westhäußer, R., et al. (2025). Enabling Personalized Long-term Interactions in LLM-based Agents through Persistent Memory and User Profiles. arXiv preprint arXiv:2510.07925.

Parfit, D. (1984). Reasons and Persons. Oxford University Press.

Packer, C., et al. (2023). MemGPT: Towards LLMs as Operating Systems. arXiv preprint arXiv:2310.08560.

— Daniela Di Marco, Operation Knowledge

