A persistent, agent-agnostic AI memory system powered by Harper Fabric. Clone, configure your API keys, deploy, and give all your AI tools a shared brain.
When you use Claude, ChatGPT, or Cursor, your conversation history and learned context are siloed inside each platform. Switch tools, and your AI gets amnesia. This is context rot.
Deploy a centralized vector database on Harper Fabric and connect it to your AI agents via MCP (Model Context Protocol). All your tools read and write to the same unified memory pool.
```
Slack / GitHub / ...     ──webhook──┐
                                    ├──▶ Harper Fabric ◀──MCP──▶ Claude Desktop / Cursor / ...
CLAUDE.md / .cursor/ ... ──CLI/API──┘    (vector DB)
```

```
INGESTION SOURCES    HARPER FABRIC CLUSTER

┌──────────────┐
│ Slack        │ ──▶ ┌─────────────────────────────────────────────┐
│ Events API   │     │ Webhook Resource (classify + embed)         │
└──────────────┘     │             │                               │
┌──────────────┐     │ ┌───────────▼────────────────────────┐      │
│ GitHub / ... │ ──▶ │ │ Memory Table (HNSW vector index)   │      │
└──────────────┘     │ └────────────────────────────────────┘      │
                     │                                             │
┌──────────────┐     │ SynapseIngest (parse + classify + embed)    │
│ CLAUDE.md    │     │             │                               │
│ .cursor/rules│ ──▶ │ ┌───────────▼────────────────────────┐      │
│ .windsurf/   │     │ │ SynapseEntry Table (HNSW idx)      │      │
│ copilot-inst │     │ └───────────┬────────────────────────┘      │
└──────────────┘     │             │                               │
 (synapse CLI/API)   │ ┌───────────▼────────────────────────┐      │
                     │ │ MCP Server + Search/Emit endpoints │      │
                     │ └───────────┬────────────────────────┘      │
                     └─────────────┼───────────────────────────────┘
                                   │ MCP JSON-RPC
              ┌────────────────────┼────────────────────┐
              ▼                    ▼                    ▼
       Claude Desktop            Cursor          Any MCP Client
```
- Node.js 22+ (recommended: 24 LTS)
Create your free cluster at fabric.harper.fast:
- Create an account and verify your email
- Create an organization
- Create a cluster (free tier, no credit card required)
- Note your cluster URL, username, and password
```bash
git clone https://github.com/HarperFast/Harper-Cortex.git
cd Harper-Cortex
npm install -g harperdb   # Install the Harper runtime (one-time)
npm install               # Install project dependencies
```

Sign up for these services and grab your API keys. All have free tiers.
| Service | Sign Up | What You Need |
|---|---|---|
| Anthropic | console.anthropic.com | API key (used for message classification) |
| Voyage AI | dash.voyageai.com | API key (used for vector embeddings) |
| Slack | api.slack.com/apps | Create a Slack app. See docs/slack-app-setup.md for the full walkthrough. |
```bash
cp .env.example .env
```

Open `.env` and paste in your Harper Fabric credentials from Step 1 and the API keys from Step 3. See Environment Variables for details on each variable.
```bash
npm run dev
```

This starts Harper locally on http://localhost:9926 with the Memory table, vector index, and all endpoints ready. Test it:

```bash
curl -X POST http://localhost:9926/MemorySearch/ \
  -H "Content-Type: application/json" \
  -d '{"query": "test search"}'
```

For Slack webhook testing during local development, use a tunnel:

```bash
ngrok http 9926   # Then use the ngrok URL as your Slack Events API request URL
```

To deploy to your Harper Fabric cluster:

```bash
npm run deploy
```

Once deployed, update your Slack Events API request URL to point at your cluster: `https://your-cluster.harperfabric.com/SlackWebhook`
See docs/mcp-setup.md for configuration instructions.
| Variable | Required | Description |
|---|---|---|
| `ANTHROPIC_API_KEY` | Yes | Anthropic API key for Claude (message classification) |
| `VOYAGE_API_KEY` | Yes | Voyage AI API key (vector embedding generation) |
| `SLACK_SIGNING_SECRET` | For Slack | Slack app signing secret (webhook verification) |
| `SLACK_BOT_TOKEN` | For Slack | Slack bot user OAuth token (`xoxb-...`) |
| `CLI_TARGET` | For deploy | Harper Fabric cluster URL (e.g., `https://cluster.org.harperfabric.com`) |
| `CLI_TARGET_USERNAME` | For deploy | Harper cluster admin username |
| `CLI_TARGET_PASSWORD` | For deploy | Harper cluster admin password |
| `SYNAPSE_ENDPOINT` | For Synapse CLI | Base URL of the Harper-Cortex deployment |
| `SYNAPSE_PROJECT` | For Synapse CLI | Project ID to scope context entries |
| `SYNAPSE_AUTH` | For Synapse CLI | Authorization header value (e.g., `Basic dXNlcjpwYXNz`) |
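A minimal `.env` sketch with placeholder values (the variable names come from the table above; every value shown here is illustrative, not real):

```ini
ANTHROPIC_API_KEY=sk-ant-your-key-here
VOYAGE_API_KEY=your-voyage-key-here
SLACK_SIGNING_SECRET=your-signing-secret
SLACK_BOT_TOKEN=xoxb-your-bot-token
CLI_TARGET=https://cluster.org.harperfabric.com
CLI_TARGET_USERNAME=admin
CLI_TARGET_PASSWORD=change-me
```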
| Endpoint | Method | Description |
|---|---|---|
| `/SlackWebhook` | POST | Receives Slack Events API payloads. Classifies, embeds, and stores messages. |
| `/MemorySearch` | POST | Semantic search. Send `{ "query": "...", "limit": 10, "filters": {} }` |
| `/Memory/` | GET | List all memories (with pagination) |
| `/Memory/{id}` | GET | Get a single memory by ID |
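Client code can build the `/MemorySearch` body programmatically. A minimal sketch, assuming only the `{ query, limit, filters }` contract above (the `buildSearchRequest` helper is illustrative, not part of the repo):

```javascript
// Build a /MemorySearch request body. Omits the filters key entirely
// when no attribute filters are given, matching the documented shape.
function buildSearchRequest(query, { limit = 10, ...filters } = {}) {
  const body = { query, limit };
  if (Object.keys(filters).length > 0) body.filters = filters;
  return body;
}

// Example: find caching decisions made in a specific Slack channel.
const req = buildSearchRequest('Why did we change the caching strategy?', {
  classification: 'decision',
  source: 'slack',
  channelId: 'C0123456',
});
```

Pair this with `fetch` (or `curl`, as above) against your deployment's `/MemorySearch` endpoint.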
```json
{
  "query": "Why did we change the caching strategy?",
  "limit": 10,
  "filters": {
    "classification": "decision",
    "source": "slack",
    "channelId": "C0123456",
    "authorId": "U0123456"
  }
}
```

| Command | Description |
|---|---|
| `npm run dev` | Start Harper locally for development |
| `npm run deploy` | Deploy to Harper Fabric |
| `npm test` | Run all tests |
| `npm start` | Start Harper in production mode |
```bash
npm test
```

Tests use the Node.js built-in test runner with module mocking. No extra test dependencies are required.
```
├── config.yaml          # Harper application configuration
├── schema.graphql       # Database schema (Memory + SynapseEntry tables)
├── resources.js         # Core logic: webhook, search, Synapse resource classes
├── package.json         # Dependencies and scripts
├── .env.example         # Environment variable template
├── .nvmrc               # Node.js version (24 LTS)
├── bin/
│   └── synapse.js       # Synapse CLI (sync, emit, search, watch, status)
├── test/                # Test suite (82 tests)
│   ├── classify.test.js
│   ├── embedding.test.js
│   ├── webhook.test.js
│   ├── search.test.js
│   ├── synapse-classify.test.js
│   ├── synapse-search.test.js
│   ├── synapse-ingest.test.js
│   └── synapse-emit.test.js
└── docs/                # Guides
    ├── architecture.md
    ├── synapse-design.md
    ├── slack-app-setup.md
    └── mcp-setup.md
```
- A source sends an event via webhook (e.g. Slack message, GitHub issue, Linear task)
- Classification: Claude Haiku categorizes the content (decision, action_item, knowledge, etc.) and extracts entities (people, projects, technologies)
- Embedding: Voyage AI generates a 1024-dimensional vector embedding
- Storage: Raw text, classification, entities, and embedding are stored in the Memory table with HNSW vector indexing
- Retrieval: Any MCP-connected AI client queries the Memory table using hybrid search (vector similarity + attribute filters)
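The retrieval step above combines vector similarity with attribute filters. A plain-JavaScript sketch of that hybrid-search idea, for intuition only (the real query runs against Harper's HNSW index; `cosineSimilarity`, `hybridSearch`, and the in-memory array are illustrative):

```javascript
// Cosine similarity between two equal-length vectors.
function cosineSimilarity(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Hybrid search semantics: attribute filter first, then rank by
// vector similarity and truncate to the requested limit.
function hybridSearch(memories, queryEmbedding, filters = {}, limit = 10) {
  return memories
    .filter((m) => Object.entries(filters).every(([k, v]) => m[k] === v))
    .map((m) => ({ ...m, score: cosineSimilarity(m.embedding, queryEmbedding) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, limit);
}
```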
- Ingest: `synapse sync` reads your tool context files (`CLAUDE.md`, `.cursor/rules/`, `.windsurf/rules/`, `copilot-instructions.md`) and POSTs them to `/SynapseIngest`
- Parse: Each source format is split into discrete entries; duplicate content is deduplicated via content hash
- Classify + embed: Each entry is classified into a type (`intent`, `constraint`, `artifact`, `history`) and embedded with Voyage AI
- Storage: Entries are stored in the SynapseEntry table with HNSW vector indexing, scoped by `projectId`
- Retrieval: `synapse search` or any MCP client queries `/SynapseSearch`; `synapse emit` formats entries back into any target tool's native format
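The parse step's content-hash dedup can be sketched with Node's built-in crypto (a sketch under assumptions: the `contentHash`/`dedupeEntries` helpers and the trim-based normalization are illustrative, and the repo's actual hashing scheme may differ):

```javascript
import { createHash } from 'node:crypto';

// Hash an entry's normalized text so identical content collapses to one key.
function contentHash(text) {
  return createHash('sha256').update(text.trim()).digest('hex');
}

// Keep the first occurrence of each distinct piece of content.
function dedupeEntries(entries) {
  const seen = new Set();
  return entries.filter((e) => {
    const h = contentHash(e.text);
    if (seen.has(h)) return false;
    seen.add(h);
    return true;
  });
}
```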
This repo ships with Slack + Anthropic + Voyage AI as the default stack, but the architecture is designed to be swappable: add a new webhook resource class for any ingestion source, or change the LLM/embedding provider in resources.js.
The system ingests data via webhooks. Add new sources by creating a new Resource class following the same pattern as SlackWebhook.
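As a sketch of that pattern, a GitHub issues source might first map webhook payloads into the same record shape the Slack path produces before classification and embedding. The GitHub payload fields used here are standard Events API fields, but the output field names (`source`, `externalId`, etc.) and the `mapGitHubIssueToMemory` helper are assumptions, not the repo's actual schema:

```javascript
// Hypothetical payload mapper for a GitHubWebhook resource class.
// In the real app this logic would live alongside SlackWebhook in
// resources.js and feed the shared classify/embed/store pipeline.
function mapGitHubIssueToMemory(payload) {
  return {
    source: 'github',
    externalId: String(payload.issue.number),
    authorId: payload.issue.user.login,
    text: `${payload.issue.title}\n\n${payload.issue.body ?? ''}`,
    createdAt: payload.issue.created_at,
  };
}
```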
| Platform | Webhook Support | Good For |
|---|---|---|
| Slack | Events API | Team conversations, decisions, standups (included) |
| GitHub | Webhooks | Issues, PRs, code reviews, commit messages |
| Linear | Webhooks | Task tracking, sprint decisions, bug reports |
| Jira | Webhooks | Project management, issue tracking |
| Notion | API polling | Wiki pages, meeting notes, documentation |
| Discord | Gateway / Webhooks | Community discussions, support threads |
| Google Drive | Push notifications | Shared docs, spreadsheets, presentations |
| Email | Forwarding / SMTP webhook | Client communications, vendor threads |
| Microsoft Teams | Webhooks | Enterprise team conversations |
Swap the classification model by changing CLASSIFICATION_MODEL in resources.js and updating the SDK import.
| Provider | Recommended Model | Trade-off |
|---|---|---|
| Anthropic | Claude Haiku 3.5 | Best structured JSON output (default) |
| OpenAI | GPT-4o-mini | Cheapest, fast, good at JSON |
| Google | Gemini 2.0 Flash | Generous free tier |
| Ollama (local) | Llama 3 / Mistral | Full privacy, no API costs, requires local GPU |
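Whichever provider you choose, the classification call ultimately returns JSON that should be parsed defensively, since models sometimes wrap JSON in prose or code fences. A provider-agnostic sketch of that parsing step (the `parseClassification` helper and its fallback shape are assumptions, not the repo's actual code):

```javascript
// Extract a classification object from raw LLM output, tolerating
// surrounding prose or markdown fences; fall back to 'knowledge'.
function parseClassification(raw) {
  const match = raw.match(/\{[\s\S]*\}/); // grab the outermost JSON object
  if (!match) return { classification: 'knowledge', entities: [] };
  try {
    const parsed = JSON.parse(match[0]);
    return {
      classification: parsed.classification ?? 'knowledge',
      entities: Array.isArray(parsed.entities) ? parsed.entities : [],
    };
  } catch {
    return { classification: 'knowledge', entities: [] };
  }
}
```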
Swap the embedding provider by changing generateEmbedding() in resources.js. If you change the vector dimensions, re-embed all existing records.
| Provider | Recommended Model | Dimensions | Trade-off |
|---|---|---|---|
| Voyage AI | voyage-3 | 1024 | Anthropic-recommended (default) |
| OpenAI | text-embedding-3-small | 1536 | Most widely adopted |
| Cohere | embed-v4 | 1024 | Strong multilingual support |
| Ollama (local) | nomic-embed-text | 768 | Full privacy, zero API cost |
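Because the vector index is fixed to one dimensionality, a guard like this catches a half-migrated provider swap early. A sketch only; the constant and `assertEmbeddingDims` helper are illustrative, and the repo may handle this differently:

```javascript
// Fail fast if an embedding doesn't match the index dimensionality.
const EXPECTED_DIMENSIONS = 1024; // voyage-3 default; change when swapping providers

function assertEmbeddingDims(embedding, expected = EXPECTED_DIMENSIONS) {
  if (!Array.isArray(embedding) || embedding.length !== expected) {
    throw new Error(
      `Embedding has ${Array.isArray(embedding) ? embedding.length : 'no'} dimensions, ` +
      `expected ${expected}. If you switched embedding providers, re-embed all existing records.`
    );
  }
  return embedding;
}
```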
Any MCP-compliant AI client can connect to the Harper MCP Server and query your memory pool.
| Client | Status |
|---|---|
| Claude Desktop | Fully supported (default) |
| Cursor | Fully supported (same MCP config) |
| Windsurf | MCP-compatible |
| Claude Code (CLI) | MCP-compatible |
| Any MCP client | Open standard - works with any compliant tool |
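For orientation, a Claude Desktop entry in `claude_desktop_config.json` might look roughly like the sketch below. The `mcpServers` structure is Claude Desktop's standard config format, but the server name, the use of `mcp-remote`, and the endpoint path are placeholders; see docs/mcp-setup.md for the actual configuration.

```json
{
  "mcpServers": {
    "harper-cortex": {
      "command": "npx",
      "args": ["mcp-remote", "https://your-cluster.harperfabric.com/mcp"]
    }
  }
}
```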
Synapse extends Harper-Cortex into a Universal Context Broker — a system that bridges context across AI development tools. When you switch from Claude Code to Cursor, or add a new team member, the "Why" behind architectural decisions is normally lost. Synapse captures it.
```
INGEST (Tool → Harper)                 EMIT (Harper → Tool)

CLAUDE.md ──────┐                  ┌──▶ CLAUDE.md / SYNAPSE.md
.cursor/rules/ ─┤  ┌────────────┐  ├──▶ .cursor/rules/*.mdc
.windsurf/     ─┤─▶│  Synapse   │  ├──▶ .windsurf/rules/*.md
copilot-inst.  ─┤  │  Ingest    │  └──▶ copilot-instructions.md
Manual / Slack ─┘  └─────┬──────┘
                         │            ┌──────────────────┐
                 ┌───────▼────────┐   │   SynapseEmit    │
                 │  SynapseEntry  │◀──│  query → format  │
                 │ (HNSW indexed) │   └──────────────────┘
                 └───────┬────────┘
                         │ MCP JSON-RPC
           ┌─────────────┼─────────────┐
           ▼             ▼             ▼
    Claude Desktop     Cursor    Any MCP Client
```
| Type | Purpose | Example |
|---|---|---|
| `intent` | The "Why" | "Chose HarperDB for HNSW vector search" |
| `constraint` | Must/Must-Not rules | "Never use an ORM — raw SQL only" |
| `artifact` | References | "Architecture diagram at docs/arch.png" |
| `history` | Failed paths | "Tried Redis Streams, abandoned due to durability" |
```bash
# Install globally after cloning
npm install -g .

# Sync your context files to Harper-Cortex
SYNAPSE_PROJECT=my-app synapse sync

# Search across all context
SYNAPSE_PROJECT=my-app synapse search "why did we choose postgres"

# Emit context in Cursor's native format (writes .mdc files)
SYNAPSE_PROJECT=my-app synapse emit --target cursor --write

# Watch context files and auto-sync on change
SYNAPSE_PROJECT=my-app synapse watch

# Show entry counts by type and source
SYNAPSE_PROJECT=my-app synapse status
```

| Endpoint | Method | Description |
|---|---|---|
| `/SynapseIngest` | POST | Ingest context from any tool. Parses, classifies, embeds, and stores. |
| `/SynapseSearch` | POST | Semantic search scoped to a project. |
| `/SynapseEmit` | POST | Emit context formatted for a target tool. |
| `/SynapseEntry/` | GET | List/browse all context entries. |
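The emit path turns stored entries back into a target tool's native format. A simplified sketch of a Cursor-style formatter (`.mdc` files use YAML frontmatter over markdown; the `formatForCursor` helper, its frontmatter fields, and the grouping scheme are assumptions, not the repo's actual emitter):

```javascript
// Group Synapse entries by type and render them as a Cursor rules
// file body: YAML frontmatter, then one markdown section per type.
function formatForCursor(entries, description = 'Synapse context') {
  const lines = ['---', `description: ${description}`, 'alwaysApply: true', '---', ''];
  for (const type of ['intent', 'constraint', 'artifact', 'history']) {
    const group = entries.filter((e) => e.type === type);
    if (group.length === 0) continue;
    lines.push(`## ${type}`);
    for (const e of group) lines.push(`- ${e.text}`);
    lines.push('');
  }
  return lines.join('\n');
}
```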
See docs/synapse-design.md for full architecture details.
This project uses Harper Agent Skills — reusable AI agent instructions that guide Claude and other AI tools to follow Harper best practices when contributing code. Skills are tracked in skills-lock.json and installed into .agents/skills/ (excluded from version control).
| Skill | Description |
|---|---|
| `harper-best-practices` | Schema design, automatic APIs, authentication, custom resources, caching, vector indexing, and deployment patterns for Harper applications |
AI agents (Claude Code, Cursor, etc.) load skills automatically from .agents/skills/ and apply the relevant guidelines when making changes to the codebase.
MIT