Personal Tensor Memory
Ravishankar S R
Ravishankar S R, Department of Computer Science Engineering, Independent Researcher, Chennai (Tamil Nadu), India.
Manuscript received on 25 June 2025 | First Revised Manuscript received on 21 July 2025 | Second Revised Manuscript received on 01 August 2025 | Manuscript Accepted on 15 August 2025 | Manuscript published on 30 August 2025 | PP: 1-3 | Volume-5 Issue-5, August 2025 | Retrieval Number: 100.1/ijainn.E110005050825 | DOI: 10.54105/ijainn.E1100.05050825
© The Authors. Published by Lattice Science Publication (LSP). This is an open access article under the CC-BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/)
Abstract: Large language models (LLMs) excel at general knowledge but struggle to remember the preferences, profile facts, and long-term context of a specific user, especially on constrained devices. We introduce Personal Tensor Memory (PTM), a privacy-preserving add-on that assigns every user a fixed-shape matrix, which the frozen backbone can query through one additional attention head. A nightly routine (Hebbian add + decay, norm clipping, slot merge/evict, and occasional orthogonal rotation) reorganises information inside that matrix without changing its shape or touching the billions of backbone weights. On synthetic concept-drift streams and anonymised personal-assistant logs, PTM matches kNN-LM perplexity while needing only 5% of its context window, and surpasses rank-8 LoRA in the few-shot regime, all while using under 8 MB per user and less than 1 s of daily CPU time on a smartphone.
Keywords: LLM Personalisation · External Memory · Hebbian Learning · Adapter Rotation · Retrieval Augmentation.
Scope of the Article: Neural Networks
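
The abstract names the steps of the nightly consolidation routine (Hebbian add + decay, norm clipping, slot merge/evict, occasional orthogonal rotation) but gives no implementation details. The NumPy sketch below is one plausible reading of that routine over a per-user memory matrix; the slot count, dimension, decay factor, write rate, norm bound, merge threshold, winner-take-all write rule, and rotation probability are all illustrative assumptions, not values taken from the paper.

import numpy as np

# Hypothetical shapes and hyperparameters: the paper only states that each
# user owns a fixed-shape matrix; everything below is an assumed example.
N_SLOTS, DIM = 64, 512      # per-user memory M has shape (N_SLOTS, DIM)
DECAY = 0.95                # nightly forgetting factor (assumed)
ETA = 0.1                   # Hebbian write rate (assumed)
MAX_NORM = 1.0              # per-slot norm clip (assumed)
MERGE_COS = 0.9             # cosine threshold for merging duplicate slots (assumed)

def nightly_update(M, day_vectors, rng=None):
    """One consolidation pass: Hebbian add + decay, norm clipping,
    slot merge/evict, and an occasional orthogonal rotation."""
    # 1) Hebbian add + decay: fade old content, then write each of the
    #    day's vectors into the slot it activates most strongly.
    M = DECAY * M
    for v in day_vectors:                       # v: (DIM,) feature vector
        k = int(np.argmax(M @ v))               # winner-take-all write (assumed rule)
        M[k] += ETA * v
    # 2) Norm clipping keeps any single slot from dominating the extra
    #    attention head that reads the memory.
    norms = np.linalg.norm(M, axis=1, keepdims=True)
    M = np.where(norms > MAX_NORM, M * (MAX_NORM / np.maximum(norms, 1e-8)), M)
    # 3) Merge/evict: fold the most similar pair of slots into one and
    #    zero the other so it is free for new content.
    unit = M / np.maximum(np.linalg.norm(M, axis=1, keepdims=True), 1e-8)
    sims = unit @ unit.T
    np.fill_diagonal(sims, -1.0)
    i, j = np.unravel_index(np.argmax(sims), sims.shape)
    if sims[i, j] > MERGE_COS:
        M[i] = 0.5 * (M[i] + M[j])
        M[j] = 0.0                              # evicted slot, reusable tomorrow
    # 4) Occasional orthogonal rotation: re-mix the basis without changing
    #    slot norms or pairwise geometry, so the fixed shape stays well used.
    if rng is not None and rng.random() < 0.1:  # probability is an assumption
        Q, _ = np.linalg.qr(rng.standard_normal((DIM, DIM)))
        M = M @ Q
    return M

Under these assumed sizes the memory is a 64 x 512 float32 matrix (about 128 KB) and the pass is a handful of matrix operations, which is consistent in spirit with the under-8 MB and under-1 s per-day budget quoted in the abstract, though the actual PTM procedure may differ.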