Call for Papers

We invite submissions on novel research results (theoretical and empirical), benchmarks, demos, visualizations, software frameworks and abstractions, and work-in-progress research. There are two tracks: full-length papers (up to 5 pages, excluding references) and tiny papers (up to 3 pages, excluding references). Submissions should be made anonymously on OpenReview. Reviews will not be shared publicly.

For examples, see the accepted posters at previous iterations of this workshop (NeurIPS 2023 and ICLR 2025).

Associative Memory (AM) has re-emerged as a unifying principle in modern artificial intelligence, linking classical energy-based models with contemporary architectures such as transformers, diffusion models, and memory-augmented agents. Rooted in early mathematical and neuroscientific formulations, AM provides a principled view of collective computation through attractor dynamics and energy landscapes. Recent advances have significantly expanded this framework, revealing associative retrieval as a form of attention, inference, and optimization within deep learning systems. Beyond classical recall, modern AM models support test-time regression, continual adaptation, and reasoning over structured domains such as graphs, manifolds, and probability distributions. These developments position associative memory not merely as a storage mechanism, but as a foundational computational paradigm for generation, reasoning, and adaptive intelligence in large-scale AI systems.

Despite this rapid progress, research on memory, reasoning, and adaptation remains fragmented across communities spanning energy-based learning, optimization theory, neuroscience-inspired computation, generative modeling, and agentic AI. This workshop is motivated by the need for community building around associative memory as a shared mathematical and conceptual foundation. Our goal is to bring together theorists, experimentalists, and practitioners from academia and industry to develop a unified perspective on memory as the core substrate of intelligent behavior. By fostering cross-disciplinary dialogue and shared evaluation paradigms, the workshop aims to catalyze a coherent research agenda for next-generation memory-augmented and agentic AI systems.

Associative memory is defined as a network that links sets of features into high-dimensional vectors, called memories. Prompted by a large enough subset of features taken from one memory, an animal or an AI network equipped with an associative memory can retrieve the remaining features belonging to that memory. Many human cognitive abilities that involve producing appropriate responses to stimulus patterns can be understood as the operation of an associative memory, with memories often being distillations and consolidations of multiple experiences rather than records of a single event.
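The retrieval behavior described above can be sketched with a classical Hopfield network: memories are stored as ±1 vectors via the Hebbian rule, and a corrupted cue is cleaned up by iterating the sign update. This is a minimal illustration only; the pattern count, dimensionality, and corruption level are arbitrary choices, not prescriptions from this call.

```python
import numpy as np

rng = np.random.default_rng(0)

# Store three random ±1 patterns (the "memories") with the Hebbian rule.
n = 64
patterns = rng.choice([-1.0, 1.0], size=(3, n))
W = patterns.T @ patterns / n
np.fill_diagonal(W, 0.0)  # no self-connections

# Corrupt one memory: flip a random subset of its features (the partial cue).
probe = patterns[0].copy()
flip = rng.choice(n, size=12, replace=False)
probe[flip] *= -1.0

# Retrieval: iterate x <- sign(W x) until a fixed point (an attractor).
x = probe
for _ in range(10):
    x_new = np.sign(W @ x)
    x_new[x_new == 0] = 1.0
    if np.array_equal(x_new, x):
        break
    x = x_new

# Overlap with the stored memory; 1.0 means perfect recall.
overlap = float(x @ patterns[0] / n)
print(overlap)
```

At this low memory load (3 patterns in 64 dimensions), the dynamics typically descend the energy landscape to the attractor corresponding to the cued memory, illustrating retrieval from a partial pattern.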

Topics of interest include, but are not limited to:

  • Hopfield networks and Dense Associative Memories
  • Energy-based models, attractor dynamics, and score-based or diffusion-based generative models
  • Associative memory as attention, inference, or retrieval in transformers and deep architectures
  • Energy Transformers
  • Capacity, stability, convergence, and memorization–generalization tradeoffs in high-dimensional associative systems
  • Optimization dynamics, test-time training, and adaptation viewed as associative processes
  • Associative memory for generative modeling in non-Euclidean domains such as manifolds, graphs, and distributions
  • Memory-augmented architectures for agentic AI, persistent reasoning, tool use, and long-horizon decision making
  • Lifelong learning, continual adaptation, and mitigation of catastrophic forgetting via associative mechanisms
  • Multimodal and structured reasoning using associative recall across language, vision, and embodied settings
  • Neuroscience- and physics-inspired perspectives on memory, energy landscapes, and transient dynamics
  • Scalable, hardware-efficient, and biologically plausible implementations of associative memory
  • Analog and digital hardware design for associative memory
  • Benchmarks, evaluation protocols, and empirical studies of memory-augmented models