CORE MEMORY · WHITEPAPER
Elephantasm Whitepaper v1.0
A comprehensive exploration of Long-Term Agentic Memory (LTAM)—the structured cognitive substrate that gives AI agents continuity, self-awareness, and evolution over time. Beyond simple retrieval, Elephantasm transforms raw events into lasting understanding.
Development Journal
Alpha·2026-01-13·ARCHITECTURE
The Dreamer: Where Memory Becomes Wisdom

Memory without curation is hoarding. The Dreamer is Elephantasm's background curator—a sleep-inspired loop that decays, merges, refines, and forgets, ensuring memories evolve rather than accumulate.
Alpha·2026-01-08·PHILOSOPHY
We Don't Want Conscious AI — But We'll Build It Anyway

Notes on agency, identity, and the unavoidable future of personal agents. Once autonomy is granted, the shape of responsibility is no longer optional — it becomes destiny.
Alpha·2025-12-11·ARCHITECTURE
Memory Packs: Deterministic Context with a Point of View

Memory Packs are Elephantasm's portable worldview—deterministic injections of identity, session flow, knowledge, and deep recall that let an Anima answer every turn as itself.
Alpha·2025-12-07·ARCHITECTURE
Identity: The Emergent Persona Layer

Identity is the interpretive kernel of Elephantasm—the persona layer that shapes how every perception becomes continuity.
Alpha·2025-12-02·ARCHITECTURE
Knowledge: Where AI Agents Stop Remembering and Start Understanding

If Events are what happened and Memories are what it meant, then Knowledge is what remains true. This is Elephantasm's abstraction engine—compressing the flux of experience into reusable understanding.
Alpha·2025-11-18·ARCHITECTURE
Memories: The Narrative Layer of Elephantasm

If Events are raw 'what happened,' Memories are what it meant—the first semantic compression where experience becomes continuity. This is where Elephantasm decides what to carry forward.
Alpha·2025-11-07·ARCHITECTURE
Events: The Atomic Units of Experience

Reality is continuous, but cognition is discrete. Events are where Elephantasm transforms the unbroken flow of experience into knowable pieces—epistemic anchors that make memory traceable, provenance auditable, and meaning emergent.
Alpha·2025-10-29·ARCHITECTURE
From Philosophy to System Architecture

Translating Elephantasm's metaphysics of memory into a deterministic system: how events metabolise into memories, lessons, knowledge, and an anima that can introspect.
Alpha·2025-10-22·PHILOSOPHY
The Nature of Memory

Memory is not a ledger of events - it's a living narrative, dynamic and self-referential. To remember is to forget, to interpret is to rewrite. Explore why subjectivity isn't a flaw of memory, but its defining feature.
Alpha·2025-10-21·TECHNICAL
Context Isn't Everything

Context windows have exploded to 1M+ tokens, yet models still reset to amnesia after each session. While we celebrate breakthroughs in reasoning and speed, the real frontier is continuity: the ability for agents to remember, adapt, and evolve across time.
Research Library
report·2025-11-04
Training Proactive and Personalized LLM Agents

Sun et al. introduce PPP-Agent, a reinforcement learning framework that optimizes LLM agents for productivity, proactivity, and personalization via th...
Weiwei Sun et al. (arXiv:2511.02208)
Original source: arxiv.org
2.1 MB
whitepaper·2025-10-25
A Guide to Memory in Agents

A comprehensive exploration of Long-Term Agentic Memory (LTAM)—the structured cognitive substrate that gives AI agents continuity, self-awareness, and evolution over time.
Elephantasm Team
5.9 MB
report·2025-10-08
MemoryR1: Enhancing Large Language Model Agents to Manage and Utilize Memories via Reinforcement Learning

This paper presents MemoryR1, a novel approach that enhances LLM agents with the ability to manage and utilize memories through reinforcement learning...
Sikuan Yan et al. (arXiv:2508.19828)
Original source: arxiv.org
2.8 MB