
# MCP Local RAG

[![GitHub stars](https://img.shields.io/github/stars/shinpr/mcp-local-rag?style=social)](https://github.com/shinpr/mcp-local-rag) [![npm version](https://img.shields.io/npm/v/mcp-local-rag.svg)](https://www.npmjs.com/package/mcp-local-rag) [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT) [![TypeScript](https://img.shields.io/badge/TypeScript-5.0-blue.svg?logo=typescript&logoColor=white)](https://www.typescriptlang.org/) [![MCP Registry](https://img.shields.io/badge/MCP-Registry-green.svg)](https://registry.modelcontextprotocol.io/)

*Search below the surface.* Local RAG for developers via MCP or CLI. Semantic search with keyword boost for exact technical terms — fully private, zero setup.

## Features

- **Semantic search with keyword boost**
  Vector search first, then keyword matching boosts exact matches. Terms like `useEffect`, error codes, and class names rank higher—not just semantically guessed.
- **Smart semantic chunking**
  Chunks documents by meaning, not character count. Uses embedding similarity to find natural topic boundaries—keeping related content together and splitting where topics change.
- **Quality-first result filtering**
  Groups results by relevance gaps instead of arbitrary top-K cutoffs. Get fewer but more trustworthy chunks.
- **Runs entirely locally**
  No API keys, no cloud, no data leaving your machine. Works fully offline after the first model download.
- **Zero-friction setup**
  One `npx` command. No Docker, no Python, no servers to manage. Use via MCP, CLI, or both. Optional [Agent Skills](#agent-skills) help AI assistants form better queries and interpret results.

## Quick Start

Set `BASE_DIR` to the folder you want to search. Documents must live under it.

Add the MCP server to your AI coding tool:

**For Cursor** — Add to `~/.cursor/mcp.json`:

```json
{
  "mcpServers": {
    "local-rag": {
      "command": "npx",
      "args": ["-y", "mcp-local-rag"],
      "env": {
        "BASE_DIR": "/path/to/your/documents"
      }
    }
  }
}
```

**For Codex** — Add to `~/.codex/config.toml`:

```toml
[mcp_servers.local-rag]
command = "npx"
args = ["-y", "mcp-local-rag"]

[mcp_servers.local-rag.env]
BASE_DIR = "/path/to/your/documents"
```

**For Claude Code** — Run this command:

```bash
claude mcp add local-rag --scope user --env BASE_DIR=/path/to/your/documents -- npx -y mcp-local-rag
```

Restart your tool, then start using it:

```
You: "Ingest api-spec.pdf"
Assistant: Successfully ingested api-spec.pdf (47 chunks created)

You: "What does the API documentation say about authentication?"
Assistant: Based on the documentation, authentication uses OAuth 2.0 with JWT tokens.
The flow is described in section 3.2...
```

**Or use directly as CLI** — no MCP server needed:

```bash
npx mcp-local-rag ingest ./docs/
npx mcp-local-rag query "authentication API"
```

That's it. No Docker, no Python, no server setup.

## Why This Exists

You want AI to search your documents—technical specs, research papers, internal docs. But most solutions send your files to external APIs.

**Privacy.** Your documents might contain sensitive data. This runs entirely locally.

**Cost.** External embedding APIs charge per use. This is free after the initial model download.

**Offline.** Works without internet after setup.

**Code search.** Pure semantic search misses exact terms like `useEffect` or `ERR_CONNECTION_REFUSED`. Keyword boost catches both meaning and exact matches.

**Agent reality.** In practice, many AI environments mainly use tool calling. CLI support and Agent Skills make the same workflows available even without full MCP integration.
## Usage

mcp-local-rag provides two interfaces: an **MCP server** for AI coding tools and a **CLI** for direct use from the terminal.

### Using with MCP

The MCP server provides 6 tools: `ingest_file`, `ingest_data`, `query_documents`, `list_files`, `delete_file`, `status`.

#### Ingesting Documents

```
"Ingest the document at /Users/me/docs/api-spec.pdf"
```

Supports PDF, DOCX, TXT, and Markdown. The server extracts text, splits it into chunks, generates embeddings locally, and stores everything in a local vector database. Re-ingesting the same file replaces the old version automatically.

#### Ingesting HTML Content

Use `ingest_data` to ingest HTML content retrieved by your AI assistant (via web fetch, curl, browser tools, etc.):

```
"Fetch https://example.com/docs and ingest the HTML"
```

The server extracts main content using Readability (removes navigation, ads, etc.), converts to Markdown, and indexes it. Perfect for:

- Web documentation
- HTML retrieved by the AI assistant
- Clipboard content

HTML is automatically cleaned—you get the article content, not the boilerplate.

> **Note:** The RAG server itself doesn't fetch web content—your AI assistant retrieves it and passes the HTML to `ingest_data`. This keeps the server fully local while letting you index any content your assistant can access. Please respect website terms of service and copyright when ingesting external content.

#### Searching Documents

```
"What does the API documentation say about authentication?"
"Find information about rate limiting"
"Search for error handling best practices"
```

Search uses semantic similarity with keyword boost. This means `useEffect` finds documents containing that exact term, not just semantically similar React concepts.

Results include text content, source file, document title, and relevance score. The document title provides context for each chunk, helping identify which document a result belongs to. Adjust result count with `limit` (1-20, default 10).

#### Managing Files

```
"List all files in BASE_DIR and their ingested status"   # See what's indexed
"Delete old-spec.pdf from RAG"                            # Remove a file
"Show RAG server status"                                  # Check system health
```

### Using as CLI

All MCP tools are also available as CLI commands — no MCP server needed:

```bash
npx mcp-local-rag ingest ./docs/                  # Bulk ingest files
npx mcp-local-rag query "authentication API"      # Search documents
npx mcp-local-rag list                            # Show ingestion status
npx mcp-local-rag status                          # Database stats
npx mcp-local-rag delete ./docs/old.pdf           # Remove content
npx mcp-local-rag delete --source "https://..."   # Remove by source URL
```

`query`, `list`, `status`, and `delete` output JSON to stdout for piping (e.g., `| jq`). `ingest` outputs progress to stderr. Global options (`--db-path`, `--cache-dir`, `--model-name`) go before the subcommand. Run `npx mcp-local-rag --help` for details.

> ⚠️ The CLI does **not** read your MCP client config (`mcp.json`, `config.toml`, etc.). Configure the CLI via flags or environment variables as shown below.
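Because the read-side subcommands print JSON to stdout, you can script against the CLI from Node as well as from the shell. Here is a minimal sketch of that pattern; note that the exact shape of the JSON output is not documented in this README, so treat any field access as an assumption to verify against your own output (e.g., with `| jq`):

```typescript
// query-rag.ts — a standalone scripting sketch, not part of the package.
// Assumes only what the docs state: `query` prints JSON to stdout and
// progress goes to stderr.
import { execFile } from "node:child_process";
import { promisify } from "node:util";

const run = promisify(execFile);

async function queryRag(query: string): Promise<unknown> {
  // Global flags (e.g., --db-path) would go before the subcommand.
  const { stdout } = await run("npx", ["mcp-local-rag", "query", query]);
  return JSON.parse(stdout); // stderr (progress/logs) is ignored here
}

queryRag("authentication API").then((results) => {
  console.log(JSON.stringify(results, null, 2));
});
```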
#### Configuration

**CLI flags** — global options go before the subcommand, subcommand options go after:

```bash
npx mcp-local-rag --db-path ./my-db query "auth" --base-dir ./docs
```

**Environment variables** — set in your shell:

```bash
export DB_PATH=./my-db
export BASE_DIR=./docs
npx mcp-local-rag query "auth"
```

**Sharing config between MCP and CLI** — if your MCP client inherits shell environment variables, you can set them in your shell profile (e.g., `~/.zshrc`) so both use the same values. Otherwise, set them explicitly in your MCP config as well.

```bash
export BASE_DIR=/path/to/your/documents
export DB_PATH=/path/to/lancedb
```

Configuration is resolved in this order:

1. CLI flags (highest priority)
2. Environment variables
3. Defaults

For the full list of CLI flags, environment variables, and defaults, see [Configuration](#configuration). For CLI-only setups (no MCP server), install [Agent Skills](#agent-skills) so your AI assistant can form better queries and interpret results consistently.

> ⚠️ **`--model-name` must match your MCP server config.** Using a different embedding model against an existing database produces incompatible vectors, silently degrading search quality.

## Search Tuning

Adjust these for your use case:

| Variable | Default | Description |
|----------|---------|-------------|
| `RAG_HYBRID_WEIGHT` | `0.6` | Keyword boost factor. 0 = semantic only, higher = stronger keyword boost. |
| `RAG_GROUPING` | (not set) | `similar` for top group only, `related` for top 2 groups. |
| `RAG_MAX_DISTANCE` | (not set) | Filter out low-relevance results (e.g., `0.5`). |
| `RAG_MAX_FILES` | (not set) | Limit results to top N files (e.g., `1` for single best file). |

### Code-focused tuning

For codebases and API specs, increase keyword boost so exact identifiers (`useEffect`, `ERR_*`, class names) dominate ranking:

```json
"env": {
  "RAG_HYBRID_WEIGHT": "0.7",
  "RAG_GROUPING": "similar"
}
```

- `0.7` — balanced semantic + keyword
- `1.0` — aggressive; exact matches strongly rerank results

Keyword boost is applied *after* semantic filtering, so it improves precision without surfacing unrelated matches.

## How It Works

**TL;DR:**

- Documents are chunked by semantic similarity, not fixed character counts
- Each chunk is embedded locally using Transformers.js
- Search uses semantic similarity with keyword boost for exact matches
- Results are filtered based on relevance gaps, not raw scores

### Details

When you ingest a document, the parser extracts text based on file type (PDF via `mupdf`, DOCX via `mammoth`, text files directly).

The semantic chunker splits text into sentences, then groups them using embedding similarity. It finds natural topic boundaries where the meaning shifts—keeping related content together instead of cutting at arbitrary character limits. This produces chunks that are coherent units of meaning, typically 500-1000 characters. Markdown code blocks are kept intact—never split mid-block—preserving copy-pastable code in search results.
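As an illustration of the chunking idea (this is a simplified sketch, not the package's actual chunker), boundary detection can be pictured as: embed each sentence, then start a new chunk wherever similarity between adjacent sentences drops. The `embed` stand-in and the threshold value are assumptions made for the sketch:

```typescript
// Simplified semantic-boundary sketch — illustrative only.
// `embed` stands in for a sentence-embedding call (e.g., Transformers.js
// feature extraction with mean pooling).
type Embed = (text: string) => Promise<number[]>;

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

async function chunkBySimilarity(
  sentences: string[],
  embed: Embed,
  threshold = 0.5, // hypothetical cutoff; the real one is internal
): Promise<string[]> {
  if (sentences.length === 0) return [];
  const vectors = await Promise.all(sentences.map(embed));
  const chunks: string[] = [];
  let current: string[] = [sentences[0]];

  for (let i = 1; i < sentences.length; i++) {
    // A similarity drop between adjacent sentences marks a topic boundary.
    if (cosine(vectors[i - 1], vectors[i]) < threshold) {
      chunks.push(current.join(" "));
      current = [];
    }
    current.push(sentences[i]);
  }
  chunks.push(current.join(" "));
  return chunks;
}
```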
Each chunk goes through a Transformers.js embedding model (default: `all-MiniLM-L6-v2`, configurable via `MODEL_NAME`), converting text into vectors. Vectors are stored in LanceDB, a file-based vector database requiring no server process.

When you search:

1. Your query becomes a vector using the same model
2. Semantic (vector) search finds the most relevant chunks
3. Quality filters apply (distance threshold, grouping)
4. Keyword matches boost rankings for exact term matching

The keyword boost ensures exact terms like `useEffect` or error codes rank higher when they match.
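To make steps 3 and 4 concrete, here is a simplified sketch of gap-based grouping followed by keyword boosting. It is illustrative only: the real scoring, grouping modes, and weight semantics live inside the package, and the boost formula below is an assumption invented for the sketch.

```typescript
// Illustrative post-processing sketch — not the package's actual scoring.
// `distance` is the vector distance from step 2 (lower = more relevant).
interface Hit { text: string; distance: number; }

function postProcess(hits: Hit[], query: string, hybridWeight = 0.6): Hit[] {
  const sorted = [...hits].sort((a, b) => a.distance - b.distance);

  // Step 3 (grouping): cut at the largest relevance gap instead of a fixed
  // top-K, keeping only the leading group of closely scored results.
  let cut = sorted.length;
  let widest = 0;
  for (let i = 1; i < sorted.length; i++) {
    const gap = sorted[i].distance - sorted[i - 1].distance;
    if (gap > widest) { widest = gap; cut = i; }
  }
  const topGroup = sorted.slice(0, cut);

  // Step 4 (keyword boost): results containing exact query terms move up.
  // Here the boost simply shrinks the effective distance — a made-up formula.
  const terms = query.toLowerCase().split(/\s+/).filter(Boolean);
  return topGroup
    .map((hit) => {
      const matched = terms.some((t) => hit.text.toLowerCase().includes(t));
      const boosted = matched ? hit.distance * (1 - hybridWeight) : hit.distance;
      return { ...hit, distance: boosted };
    })
    .sort((a, b) => a.distance - b.distance);
}
```

Because the boost runs only on results that already passed the semantic filters, exact matches rerank what is relevant rather than pulling in unrelated documents.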
## Agent Skills

[Agent Skills](https://agentskills.io/) provide optimized prompts that help AI assistants use RAG tools more effectively. Install skills for better query formulation, result interpretation, and ingestion workflows:

```bash
# Claude Code (project-level)
npx mcp-local-rag skills install --claude-code

# Claude Code (user-level)
npx mcp-local-rag skills install --claude-code --global

# Codex
npx mcp-local-rag skills install --codex
```

Skills include:

- **Query optimization**: Better search query formulation
- **Result interpretation**: Score thresholds and filtering guidelines
- **HTML ingestion**: Format selection and source naming

### Ensuring Skill Activation

Skills are loaded automatically in most cases—AI assistants scan skill metadata and load relevant instructions when needed. For consistent behavior:

**Option 1: Explicit request (natural language)**

Before RAG operations, request in natural language:

- "Use the mcp-local-rag skill for this search"
- "Apply RAG best practices from skills"

**Option 2: Add to agent instruction file**

Add to your `AGENTS.md`, `CLAUDE.md`, or other agent instruction file:

```
When using query_documents, ingest_file, or ingest_data tools,
apply the mcp-local-rag skill for better query formulation and
result interpretation.
```

## Configuration

### Environment Variables and CLI Flags

Both MCP and CLI use the same environment variables. The CLI also accepts equivalent flags.

| Environment Variable | CLI Flag | Default | Description |
|---------------------|----------|---------|-------------|
| `BASE_DIR` | `--base-dir` | Current directory | Document root directory (security boundary) |
| `DB_PATH` | `--db-path` | `./lancedb/` | Vector database location |
| `CACHE_DIR` | `--cache-dir` | `./models/` | Model cache directory |
| `MODEL_NAME` | `--model-name` | `Xenova/all-MiniLM-L6-v2` | HuggingFace model ID ([available models](https://huggingface.co/models?library=transformers.js&pipeline_tag=feature-extraction)) |
| `MAX_FILE_SIZE` | `--max-file-size` | `104857600` (100MB) | Maximum file size in bytes |
| `CHUNK_MIN_LENGTH` | `--chunk-min-length` | `50` | Minimum chunk length in characters (1–10000) |

**Model choice tips:**

- Multilingual docs → e.g., `onnx-community/embeddinggemma-300m-ONNX` (100+ languages)
- Scientific papers → e.g., `sentence-transformers/allenai-specter` (citation-aware)
- Code repositories → default often suffices; keyword boost matters more (or `jinaai/jina-embeddings-v2-base-code`)

⚠️ Changing `MODEL_NAME` changes embedding dimensions. Delete `DB_PATH` and re-ingest after switching models.
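Since switching models forces a full re-ingest, it can be worth sanity-checking a candidate model first. A minimal sketch using Transformers.js directly (this assumes the `@huggingface/transformers` package and is a standalone script, not part of mcp-local-rag):

```typescript
// check-model.ts — sanity-check an embedding model before re-ingesting.
import { pipeline } from "@huggingface/transformers";

const extractor = await pipeline(
  "feature-extraction",
  "Xenova/all-MiniLM-L6-v2", // swap in the candidate MODEL_NAME here
);

// Mean-pooled, normalized sentence embedding.
const output = await extractor("OAuth 2.0 with JWT tokens", {
  pooling: "mean",
  normalize: true,
});

console.log(output.dims); // e.g., [1, 384] — note the embedding dimension
```

If the script downloads the model and prints sensible dimensions, the model ID is valid for Transformers.js feature extraction.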
### Client-Specific Setup

**Cursor** — Global: `~/.cursor/mcp.json`, Project: `.cursor/mcp.json`

```json
{
  "mcpServers": {
    "local-rag": {
      "command": "npx",
      "args": ["-y", "mcp-local-rag"],
      "env": {
        "BASE_DIR": "/path/to/your/documents"
      }
    }
  }
}
```

**Codex** — `~/.codex/config.toml` (note: must use `mcp_servers` with underscore)

```toml
[mcp_servers.local-rag]
command = "npx"
args = ["-y", "mcp-local-rag"]

[mcp_servers.local-rag.env]
BASE_DIR = "/path/to/your/documents"
```

**Claude Code**:

```bash
claude mcp add local-rag --scope user \
  --env BASE_DIR=/path/to/your/documents \
  -- npx -y mcp-local-rag
```

### First Run

The embedding model (~90MB) downloads on first use. Takes 1-2 minutes, then works offline.

### Security

- **Path restriction**: Only files within `BASE_DIR` are accessible
- **Local only**: No network requests after model download
- **Model source**: Official HuggingFace repository ([verify here](https://huggingface.co/Xenova/all-MiniLM-L6-v2))

## Performance

Tested on MacBook Pro M1 (16GB RAM), Node.js 22:

**Query Speed**: ~1.2 seconds for 10,000 chunks (p90 < 3s)

**Ingestion** (10MB PDF):

- PDF parsing: ~8s
- Chunking: ~2s
- Embedding: ~30s
- DB insertion: ~5s

**Memory**: ~200MB idle, ~800MB peak (50MB file ingestion)

**Concurrency**: Handles 5 parallel queries without degradation.
## Troubleshooting

### "No results found"

Documents must be ingested first. Run `"List all ingested files"` to verify.

### Model download failed

Check internet connection. If behind a proxy, configure network settings. The model can also be [downloaded manually](https://huggingface.co/Xenova/all-MiniLM-L6-v2).

### "File too large"

Default limit is 100MB. Split large files or increase `MAX_FILE_SIZE`.

### Slow queries

Check chunk count with `status`. Large documents with many chunks may slow queries. Consider splitting very large files.

### "Path outside BASE_DIR"

Ensure file paths are within `BASE_DIR`. Use absolute paths.

### MCP client doesn't see tools

1. Verify config file syntax
2. Restart client completely (Cmd+Q on Mac for Cursor)
3. Test directly: `npx mcp-local-rag` should run without errors
## FAQ

**Is this really private?**
Yes. After the model download, nothing leaves your machine. Verify with network monitoring.

**Can I use this offline?**
Yes, after the first model download (~90MB).

**How does this compare to cloud RAG?**
Cloud services offer better accuracy at scale but require sending data externally. This trades some accuracy for complete privacy and zero runtime cost.

**What file formats are supported?**
PDF, DOCX, TXT, Markdown, and HTML (via `ingest_data`). Not yet: Excel, PowerPoint, images.

**Can I change the embedding model?**
Yes, but you must delete your database and re-ingest all documents. Different models produce incompatible vector dimensions.

**GPU acceleration?**
Transformers.js runs on CPU. GPU support is experimental. CPU performance is adequate for most use cases.

**Multi-user support?**
No. Designed for single-user, local access. Multi-user would require authentication/access control.

**How do I back up?**
Copy the `DB_PATH` directory (default: `./lancedb/`).
## Development

### Building from Source

```bash
git clone https://github.com/shinpr/mcp-local-rag.git
cd mcp-local-rag
pnpm install
```

### Testing

```bash
pnpm test               # Run all tests
pnpm run test:watch     # Watch mode
```

### Code Quality

```bash
pnpm run type-check     # TypeScript check
pnpm run check:fix      # Lint and format
pnpm run check:deps     # Circular dependency check
pnpm run check:all      # Full quality check
```

### Project Structure

```
src/
  index.ts      # Entry point
  server/       # MCP tool handlers
  cli/          # CLI subcommands (ingest)
  parser/       # PDF, DOCX, TXT, MD parsing
  chunker/      # Text splitting
  embedder/     # Transformers.js embeddings
  vectordb/     # LanceDB operations
  __tests__/    # Test suites
```
## Contributing

Contributions welcome! See [CONTRIBUTING.md](CONTRIBUTING.md) for setup and guidelines.

## License

MIT License. Free for personal and commercial use.

## Blog Posts

- [Building a Local RAG for Agentic Coding](https://www.norsica.jp/blog/local-rag-agentic-coding) — Technical deep-dive into the semantic chunking and hybrid search design.

## Acknowledgments

Built with [Model Context Protocol](https://modelcontextprotocol.io/) by Anthropic, [LanceDB](https://lancedb.com/), and [Transformers.js](https://huggingface.co/docs/transformers.js).