---
Your AI forgets. Every new chat starts with "let me give you some context." Your critical decisions, preferences, and insights are scattered across tools that don't talk to each other. Your head doesn't scale.
**CORE is your memory agent**. Not a database. Not a search box. A digital brain that replicates how human memory actually works—organizing episodes into topics, creating associations, and surfacing exactly what you need, when you need it.
## For Developers
CORE is a memory agent that gives your AI tools persistent memory and the ability to act in the apps you use.
**How it helps Claude Code:**
- **Preferences** → Surfaces during code review (formatting, patterns, tools)
- **Decisions** → Surfaces when encountering similar choices ("why we chose X over Y")
- **Directives** → Always available (rules like "always run tests", "never skip reviews")
- **Problems** → Surfaces when debugging (issues you've hit before)
- **Goals** → Surfaces when planning (what you're working toward)
- **Knowledge** → Surfaces when explaining (your expertise level)
**Right information, right time**—not context dumping.
- Context preserved across Claude Code, Cursor and other coding agents
- Take actions in Linear, GitHub, Slack, Gmail, Google Sheets and other apps you use
- Connect once via MCP, works everywhere
- Open-source and self-hostable; your data, your control
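Connecting over MCP usually means adding one entry to your client's MCP config. For example, in a Claude Code `.mcp.json` it might look like the sketch below (the endpoint URL is a placeholder, not CORE's real address; check the CORE docs for the actual endpoint):

```json
{
  "mcpServers": {
    "core": {
      "type": "http",
      "url": "https://YOUR-CORE-INSTANCE/mcp"
    }
  }
}
```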
---
## What You Can Do
### 1. Never repeat yourself: context flows automatically
CORE becomes your persistent memory layer for coding agents. Ask any AI tool to pull relevant context—CORE's memory agent understands your intent and surfaces exactly what you need.
```txt
Search core memory for architecture decisions on the payment service
```
**What CORE does**: Classifies as Entity Query (payment service) + Aspect Query (decisions), filters by `aspect=Decision` and `entity=payment service`, returns decisions with their reasoning and timestamps.
```txt
What are my content guidelines from core for writing the blog?
```
**What CORE does**: Aspect Query for Preferences/Directives related to content, surfaces your rules and patterns for content creation.
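The classify-then-route idea behind these answers can be sketched roughly as follows. This is an illustrative toy, not CORE's actual implementation: the function names and the naive keyword matching are assumptions made for the example.

```python
# Toy sketch of intent-first query classification (hypothetical, not CORE's real code).
ASPECTS = {"Preference", "Decision", "Directive", "Problem", "Goal", "Knowledge"}


def classify_query(query: str) -> dict:
    """Naive keyword-based stand-in for CORE's query classifier:
    detect an aspect and a query type, then a router would pick the
    matching search strategy (aspect filter, temporal range, ...)."""
    q = query.lower()
    intent = {}
    for aspect in ASPECTS:
        if aspect.lower() in q:
            intent["aspect"] = aspect
    if "last week" in q or "yesterday" in q:
        intent["type"] = "Temporal"
    elif "aspect" in intent:
        intent["type"] = "Aspect Query"
    else:
        intent["type"] = "Exploratory"
    return intent


print(classify_query("Search core memory for architecture decisions on the payment service"))
# {'aspect': 'Decision', 'type': 'Aspect Query'}
```

A real classifier would use an LLM or a trained model rather than substring checks, but the control flow is the point: classification happens before any search runs.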

---
### 2. Take actions in your apps from Claude/Cursor
Connect your apps once, take actions from anywhere.
- Create/read GitHub and Linear issues
- Draft, send, and read emails, and store relevant info in CORE
- Manage your calendar, update spreadsheets

---
### 3. Pick up where you left off in Claude Code/Cursor
Switching back to a feature after a week? Get caught up instantly.
```txt
What did we discuss about the checkout flow? Summarize from memory.
```
```txt
Refer to past discussions and remind me where we left off on the API refactor
```

---
## What Makes CORE Different
1. **Temporal Context Graph**: CORE doesn't just store facts — it remembers the story. When things happened, how your thinking evolved, what led to each decision. Your preferences, goals, and past choices — all connected in a graph that understands sequence and context.
2. **Memory Agent, Not RAG**: Traditional RAG asks "what text chunks look similar?" CORE asks "what does the user want to know, and where in the organized knowledge does that live?"
- **11 Fact Aspects**: Every fact is classified (Preference, Decision, Directive, Problem, Goal, Knowledge, Identity, etc.) so CORE surfaces your coding-style preferences during code review, or past architectural decisions when you're designing a new feature.
- **5 Query Types**: CORE classifies your intent (Aspect Query, Entity Lookup, Temporal, Exploratory, Relationship) and routes to the exact search strategy. Looking for "my preferences"? It filters by aspect. "Tell me about Sarah"? Entity graph traversal. "What happened last week"? Temporal filter.
- **Intent-Driven Retrieval**: Classification first, search second. 3-4x faster than the old "search everything and rerank" approach (300-450ms vs 1200-2400ms).
3. **88.24% Recall Accuracy**: Tested on the LoCoMo benchmark. When you ask CORE something, it finds what's relevant. Not keyword matching, but true semantic understanding with multi-hop reasoning.
4. **You Control It**: Your memory, your rules. Edit what's wrong. Delete what doesn't belong. Visualize how your knowledge connects. CORE is transparent: you see exactly what it knows.
5. **Open Source**: No black boxes. No vendor lock-in. Your digital brain belongs to you.
---
## Memory Agent vs RAG: Why It Matters
**Traditional RAG** treats memory as a search problem:
- Embeds all your text
- Searches for similarity
- Returns chunks
- No understanding of _what kind of information_ you need
**CORE Memory Agent** treats memory as a knowledge problem:
- **Classifies** every fact by type (Preference, Decision, Directive, etc.)
- **Understands** your query intent (looking for preferences? past decisions? recent events?)
- **Routes** to the exact search strategy (aspect filter, entity graph, temporal range)
- **Surfaces** exactly what you need, not everything that might be relevant
**Example:**
You ask: "What are my coding preferences?"
- **RAG**: Searches all your text for "coding" and "preferences", returns 50 chunks, and hopes the relevant ones are in there
- **CORE**: Classifies as Aspect Query (Preference), filters statements by `aspect=Preference`, returns 5 precise facts: "Prefers TypeScript", "Uses pnpm", "Avoids class components", etc.
**The Paradigm Shift**: CORE doesn't improve RAG. It replaces it with structured knowledge retrieval.
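The difference can be shown in a few lines. This is a minimal sketch with made-up facts and a hypothetical `retrieve` helper, not CORE's internal schema: the point is that retrieval filters typed facts instead of ranking similar chunks.

```python
# Sketch of aspect-filtered retrieval over typed facts (hypothetical data and names).
from dataclasses import dataclass


@dataclass
class Fact:
    text: str
    aspect: str  # e.g. "Preference", "Decision", "Directive"


memory = [
    Fact("Prefers TypeScript over JavaScript", "Preference"),
    Fact("Uses pnpm as the package manager", "Preference"),
    Fact("Chose Postgres over MongoDB for the payment service", "Decision"),
    Fact("Always run tests before pushing", "Directive"),
]


def retrieve(aspect: str) -> list[str]:
    """Return only the facts of the requested aspect,
    rather than the top-k most similar text chunks."""
    return [f.text for f in memory if f.aspect == aspect]


print(retrieve("Preference"))
# ['Prefers TypeScript over JavaScript', 'Uses pnpm as the package manager']
```

Because every fact carries its aspect, an "aspect = Preference" query returns exactly the preferences and nothing else; there is no reranking step and no hoping the right chunk made the cut.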
---
## 🚀 Quick Start
Choose your path:
| | **CORE Cloud** | **Self-Host** |
| ------------ | --------------------- | -------------------------- |
| Setup time | 5 minutes | 15 minutes |
| Best for | Try quickly, no infra | Full control, your servers |
| Requirements | Just an account | Docker, 4GB RAM |
### Cloud
1. **Sign up** at [app.getcore.me](https://app.getcore.me)
2. **Connect a source** (Claude, Cursor, or any MCP-compatible tool)
3. **Start using** CORE to take actions or store information about you in memory
### Self-Host
**Quick Deploy**
[Deploy on Railway](https://railway.com/deploy/core)
**Or with Docker**
1. Clone the repository:
```bash
git clone https://github.com/RedPlanetHQ/core.git
cd core
```
2. Configure environment variables in `core/.env`:
```env
OPENAI_API_KEY=your_openai_api_key
```
3. Start the service
```bash
docker-compose up -d
```
Once deployed, you can configure your AI providers (OpenAI, Anthropic) and start building your memory graph.
👉 [View complete self-hosting guide](https://docs.getcore.me/self-hosting/docker)
> Note: We experimented with open-source models (e.g. GPT-OSS via Ollama), but fact generation quality was not good enough. We are still working out how to improve this, and will add support for OSS models once we do.
## 🛠️ Installation
### Recommended