Supermemory is a new tool that gives AI long-term memory, letting it recall chats, files, and emails across sessions instead of forgetting everything when a conversation ends. Built by a young founder in San Francisco, it turns a user's scattered information into one shared, searchable memory that any AI can draw on. The company just raised roughly $3 million from prominent tech leaders to accelerate development. Supermemory already powers smarter assistants, context-aware email replies, and faster research, with support for more data types and stronger privacy controls on the way.
What is Supermemory and how does it improve AI memory?
Supermemory is a universal memory API that lets AI agents retain and recall information from files, chats, emails, and app data across sessions. By building a user-owned knowledge graph, Supermemory enables persistent, context-rich memory for AI, boosting productivity, research, and intelligent assistants.
What is Supermemory? The Mission to Give AI a Memory
Large language models excel at reasoning in the moment but forget past conversations once a session ends. Supermemory is a San Francisco-based platform founded by 19-year-old Dhravya Shah to fix that gap. Its cloud service acts as a universal memory API, turning files, chat logs, emails and app data streams into a long-term, user-owned knowledge base. By stitching these fragments into a dynamic knowledge graph, Supermemory lets any AI agent recall context from days or months ago as if it just happened.
Inside the 3 Million Dollar Seed Round Backed by AI Leaders
In October 2025 the company closed a seed round worth roughly 3 million USD, co-led by Susa Ventures, Browder Capital and SF1.vc. Individual backers include Google AI chief Jeff Dean, DeepMind product manager Logan Kilpatrick and Cloudflare CTO Dane Knecht, alongside executives from Meta and OpenAI, according to the TechCrunch report. Shah received an O-1 extraordinary-ability visa shortly after the raise, underlining the strategic importance US investors see in persistent AI memory.
Quick funding snapshot
- Round: Seed
- Size: 2.6-3 million USD (figures differ slightly across filings)
- Date: October 2025
- Lead firms: Susa Ventures, Browder Capital, SF1.vc
- Strategic angels: Jeff Dean, Logan Kilpatrick, Dane Knecht
How the Universal Memory API Works
Knowledge Graph Engine
Every document or chat snippet is chunked, vectorised and linked inside an evolving graph. Relationships between people, topics and events are updated in real time so agents can reason over connections rather than plain text.
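A minimal sketch of that pipeline is shown below, assuming a toy embedding function and an in-memory graph; the function and field names are illustrative stand-ins, not Supermemory's actual internals.

```python
# Illustrative chunk -> embed -> link pipeline (not Supermemory's real code).
from dataclasses import dataclass, field

@dataclass
class Node:
    text: str
    vector: list[float]
    edges: set[str] = field(default_factory=set)  # ids of linked nodes

graph: dict[str, Node] = {}

def chunk(document: str, size: int = 400) -> list[str]:
    """Split a document into roughly fixed-size chunks."""
    return [document[i:i + size] for i in range(0, len(document), size)]

def embed(text: str) -> list[float]:
    """Placeholder embedding; a real system would call an embedding model."""
    return [float(ord(c)) for c in text[:8]]  # toy vector, for illustration only

def extract_entities(text: str) -> set[str]:
    """Naive entity pass: capitalised words stand in for people, topics and events."""
    return {w.strip(".,") for w in text.split() if w[:1].isupper()}

def ingest(doc_id: str, document: str) -> None:
    """Chunk, vectorise and link a document into the evolving graph."""
    for i, piece in enumerate(chunk(document)):
        node_id = f"{doc_id}:{i}"
        graph[node_id] = Node(text=piece, vector=embed(piece))
        entities = extract_entities(piece)
        # Link chunks that mention the same entity so agents can reason over connections.
        for other_id, other in graph.items():
            if other_id != node_id and entities & extract_entities(other.text):
                graph[node_id].edges.add(other_id)
                other.edges.add(node_id)
```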
Human-style Memory Policies
Supermemory borrows from cognitive science (a scoring sketch follows this list):
- Recency and relevance bias – fresh or frequently accessed facts surface faster.
- Smart forgetting – stale items gradually decay, reducing token noise for LLMs.
- Context rewriting – summaries are refreshed as new evidence arrives, preventing drift.
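One way to picture these policies is as a single scoring function over stored memories. The sketch below blends an exponential recency decay with an access-frequency boost; the weights and half-life are assumptions for illustration, not the company's published algorithm.

```python
import math
import time

def memory_score(similarity: float, last_access_ts: float, access_count: int,
                 half_life_days: float = 14.0) -> float:
    """Blend relevance, recency and usage frequency into one retrieval score.

    similarity      -- semantic similarity between the query and the memory (0..1)
    last_access_ts  -- unix timestamp of the most recent read or write
    access_count    -- how often this memory has been retrieved
    half_life_days  -- an untouched memory loses half its weight after this many days
    """
    age_days = (time.time() - last_access_ts) / 86_400
    recency = 0.5 ** (age_days / half_life_days)   # smart forgetting: stale items decay
    frequency = math.log1p(access_count)           # frequently accessed facts surface faster
    return similarity * (0.6 + 0.3 * recency + 0.1 * frequency)

# Memories whose score drops below a threshold can be summarised or dropped,
# which corresponds to the "context rewriting" policy in the list above.
```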
Fast Developer Tooling
SDKs for Python and TypeScript let builders add long-term memory in minutes. A Chrome extension captures web highlights while integrations with Google Drive, OneDrive and Notion sync large repositories automatically, as detailed in the Dataconomy article.
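For a rough feel of what "minutes" means in practice, here is a hypothetical usage sketch in Python; the `MemoryClient` class, its methods and the package name are illustrative placeholders rather than the documented Supermemory SDK.

```python
# Hypothetical usage sketch: the package, class and method names are stand-ins,
# not the documented Supermemory SDK surface.
from my_memory_sdk import MemoryClient   # placeholder import, not a real package

client = MemoryClient(api_key="sk-...")  # authenticate against the memory service

# Store a note captured from a meeting; the service chunks, embeds and links it.
client.add(content="Q3 roadmap: ship multimodal ingestion by November.",
           metadata={"source": "meeting-notes", "user": "dhravya"})

# Later, any agent can recall relevant context across sessions.
results = client.search(query="What did we decide about multimodal ingestion?", limit=3)
for memory in results:
    print(memory.content)
```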
Real-World Use Cases Rolling Out in 2025
- Productivity assistants – note-taking and writing apps retrieve ideas from past sessions so users never repeat themselves.
- Email clients – context-aware replies draw on years of correspondence to suggest next steps.
- Media editing – video tools locate relevant B-roll or subtitles through natural-language search across huge asset libraries.
- Healthcare – clinics summarise longitudinal patient histories while keeping data siloed per organisation.
- Research teams – scientists query distributed PDFs and lab notes with a single prompt, eliminating manual document hunts.
The Road Ahead
Shah says the fresh capital will fund deeper multimodal ingestion, adding native support for audio, CAD files and time-series logs. Engineering effort is also going into an on-device cache so privacy-sensitive enterprises can keep memory inside their own VPCs.
Supermemory is already working with design partners across productivity, healthcare and developer tooling. Builders who want early access can join the waitlist on the company’s website.
What is Supermemory and how does it solve AI’s “amnesia” problem?
Supermemory is a universal memory API that plugs into any AI application and gives it a persistent, user-owned memory layer. Instead of forgetting everything at the end of a chat, models can now recall months of emails, files, Slack threads, or Notion pages instantly. The system ingests multimodal data (PDFs, videos, chat logs, etc.), builds a living knowledge graph, and surfaces only the most relevant, recent, and cross-referenced facts when prompted. Early adopters report a ~40 % drop in repetitive questions inside customer-support bots and 3× faster asset retrieval in video-editing suites.
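The "surface only the most relevant, recent, and cross-referenced facts" step can be imagined as packing ranked memories into a prompt under a token budget, as in the sketch below; the field names, boost weight and token estimate are assumptions, not the platform's actual heuristics.

```python
# Assembling a prompt from ranked memories under a token budget (illustrative only).

def approx_tokens(text: str) -> int:
    """Rough token estimate: about four characters per token."""
    return max(1, len(text) // 4)

def build_context(ranked_memories: list[dict], budget: int = 2000) -> str:
    """Pack the best memories into the prompt without exceeding the budget.

    Each memory dict is assumed to carry 'text', 'score' and 'linked_ids' fields.
    Heavily cross-referenced memories get a small boost before packing.
    """
    boosted = sorted(ranked_memories,
                     key=lambda m: m["score"] + 0.05 * len(m.get("linked_ids", [])),
                     reverse=True)
    picked, used = [], 0
    for memory in boosted:
        cost = approx_tokens(memory["text"])
        if used + cost > budget:
            continue  # skip items that would blow the budget, keep trying smaller ones
        picked.append(memory["text"])
        used += cost
    return "\n\n".join(picked)
```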
How much funding did the 19-year-old founder raise and who backed him?
In October 2025 Dhravya Shah closed a $2.6 million seed round co-led by Susa Ventures, Browder Capital and SF1.vc, with Google AI chief Jeff Dean, Cloudflare CTO Dane Knecht, DeepMind PM Logan Kilpatrick and execs from OpenAI and Meta joining as angels. The round valued the company at ~$15 million post-money and was oversubscribed within ten days, according to TechCrunch coverage.
What real-world tasks already run on Supermemory?
- AI email clients that surface the right attachment from 50 000 messages in <500 ms
- Hospital portals that auto-summarize a patient’s multi-year record for doctors before every visit
- Video editors that fetch B-roll clips after a natural-language query such as “sunset drone shots from last summer”
- “Second-brain” notebooks that let users ask “What did I learn about ERC-4337?” and receive a concise answer drawn from Twitter bookmarks, PDFs and class notes
Developers add the memory layer with three lines of code using SDKs for OpenAI, Anthropic or Cloudflare Workers; end-user extensions for Chrome, Drive, Notion and OneDrive ship today.
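As an illustration of that kind of integration, the sketch below recalls memories, feeds them to a chat model and stores the new exchange. Only the OpenAI chat call reflects a real client library; `MemoryClient` and its methods remain the hypothetical placeholders from the earlier sketch.

```python
# Sketch of wiring recalled memories into a chat completion.
# MemoryClient is a hypothetical stand-in; the OpenAI call follows the real
# openai-python client, everything else is illustrative.
from openai import OpenAI
from my_memory_sdk import MemoryClient   # placeholder, not a real package

memory = MemoryClient(api_key="sk-mem-...")
llm = OpenAI()

def answer(user_message: str) -> str:
    # 1. Recall relevant context from past sessions.
    recalled = memory.search(query=user_message, limit=5)
    context = "\n".join(m.content for m in recalled)

    # 2. Ground the model's reply in that context.
    reply = llm.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": f"Relevant user memories:\n{context}"},
            {"role": "user", "content": user_message},
        ],
    )
    answer_text = reply.choices[0].message.content

    # 3. Store the new exchange so the next session remembers it.
    memory.add(content=f"User asked: {user_message}\nAssistant said: {answer_text}")
    return answer_text
```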
How does the platform mimic human forgetting instead of hoarding data?
Supermemory applies smart decay curves: low-relevance tokens fade after days, while frequently accessed concepts are rewritten into higher-level summaries and stored in hierarchical tiers (working, short-term, long-term). This keeps the average memory index 87 % smaller than a naive vector dump, cutting latency and cost. Users can audit, edit or purge any memory node, ensuring GDPR/HIPAA compliance without model retraining.
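A compaction pass of that sort might look like the sketch below, where low-scoring items are dropped, hot items are promoted and the middle band is rewritten into a summary; the thresholds and the `summarise` helper are assumptions, not Supermemory's actual tiering logic.

```python
# Illustrative tiering and compaction pass (thresholds and summarise() are assumptions).
from dataclasses import dataclass

@dataclass
class MemoryItem:
    text: str
    score: float           # e.g. output of a recency/relevance scorer
    tier: str = "working"  # working -> short_term -> long_term

def summarise(items: list[MemoryItem]) -> str:
    """Placeholder: a real system would call an LLM to rewrite items into a summary."""
    return " / ".join(i.text[:40] for i in items)

def compact(memories: list[MemoryItem]) -> list[MemoryItem]:
    """Drop low-relevance items, promote hot ones, fold the middle band into a summary."""
    keep: list[MemoryItem] = []
    to_fold: list[MemoryItem] = []
    for m in memories:
        if m.score < 0.2:
            continue                      # smart forgetting: low-relevance items fade
        elif m.score > 0.8:
            m.tier = "long_term"          # frequently used concepts are promoted
            keep.append(m)
        else:
            to_fold.append(m)             # middle band gets rewritten, not stored verbatim
    if to_fold:
        keep.append(MemoryItem(text=summarise(to_fold), score=0.5, tier="short_term"))
    return keep
```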
What is Dhravya Shah’s track record before this startup?
Shah sold his first hosting company at 16, then built a Twitter-to-screenshot bot that Hypefury acquired while he was still in Mumbai. After moving to Arizona State he interned at Cloudflare and ran a 40-week “build-in-public” sprint that produced the prototype of Supermemory. The project gained 4 000 GitHub stars in two weeks, convincing him to drop IIT prep, accept an O-1 extraordinary-ability visa, and incorporate in San Francisco at age 19.