MIT Licensed · v0.3.0

Persistent semantic memory for any AI agent

27 MCP tools. SQLite or Firestore. Pluggable embeddings. Give your agent a mind that persists across every conversation.


Everything your agent needs to think

Not just storage. A cognitive layer that makes memories useful.

Semantic Search

Query by meaning, not keywords. Embedding-powered retrieval that understands what your agent actually needs.
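To illustrate the mechanic behind query-by-meaning, here is a minimal sketch of ranking stored memories by cosine similarity against a query embedding. The `Memory` type and `rank` function are illustrative only, not the actual cortex-engine API:

```typescript
// Illustrative sketch: cosine-similarity ranking over embedding
// vectors — the core idea behind semantic retrieval. Types and
// names are hypothetical, not the real cortex-engine SDK.
type Memory = { text: string; embedding: number[] };

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Rank memories by semantic closeness to the query embedding,
// most relevant first.
function rank(queryEmbedding: number[], memories: Memory[]): Memory[] {
  return [...memories].sort(
    (x, y) =>
      cosine(queryEmbedding, y.embedding) -
      cosine(queryEmbedding, x.embedding)
  );
}
```

In practice the embeddings come from whichever provider is configured (Ollama, Vertex AI, or OpenAI), and the ranking happens inside the engine.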

27 MCP Tools

observe(), query(), validate(), evolve(), wander(), believe(), and more. A full cognitive toolkit exposed via Model Context Protocol.

Session Continuity

Automatic context bridging across conversations. Your agent picks up exactly where it left off.

FSRS Scheduling

Spaced repetition for memory salience. Important memories surface first, stale ones decay gracefully.
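For intuition, here is the FSRS-4.5 power forgetting curve, which models how retrievability decays with time. The constants are from FSRS-4.5; how cortex-engine maps retrievability onto salience is an assumption here:

```typescript
// FSRS-4.5 power forgetting curve: retrievability R of a memory
// with stability S (in days) after t days have elapsed. The
// constants are chosen so that R is exactly 0.9 when t === S,
// which is how FSRS defines stability.
const DECAY = -0.5;
const FACTOR = 19 / 81;

function retrievability(t: number, stability: number): number {
  return Math.pow(1 + FACTOR * (t / stability), DECAY);
}
```

High-stability memories hold their retrievability for longer, so they keep surfacing in results while low-stability ones fade.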

Multi-Provider

Firestore or SQLite storage. Ollama, Vertex AI, or OpenAI embeddings. Swap providers without rewriting.
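The pluggable-provider idea can be sketched as a small interface that concrete backends implement. The interface and class names below are hypothetical, not the real cortex-engine types:

```typescript
// Hypothetical sketch of provider swapping: storage sits behind a
// narrow interface, so backends change without touching agent code.
// Names are illustrative, not the actual cortex-engine SDK.
interface StorageProvider {
  put(key: string, value: string): Promise<void>;
  get(key: string): Promise<string | undefined>;
}

// Stand-in backend; a SQLite or Firestore provider would implement
// the same interface.
class InMemoryStorage implements StorageProvider {
  private data = new Map<string, string>();
  async put(key: string, value: string): Promise<void> {
    this.data.set(key, value);
  }
  async get(key: string): Promise<string | undefined> {
    return this.data.get(key);
  }
}

// Agent-side code depends only on the interface.
async function demo(storage: StorageProvider): Promise<string | undefined> {
  await storage.put("observation:1", "user prefers dark mode");
  return storage.get("observation:1");
}
```

The same shape applies to embeddings: one interface, with Ollama, Vertex AI, and OpenAI as interchangeable implementations.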

Type-Safe SDK

Full TypeScript. Every tool typed. Every response predictable. Build with confidence, not guesswork.

How it fits together

cortex-engine sits between your agent and its memory, handling storage, embeddings, and tool routing transparently.

  Your Agent (Claude, GPT, Gemini, etc.)
                    |
                    v
  +-----------------------------------+
  |         cortex-engine SDK         |
  |                                   |
  |  observe()  query()   validate()  |
  |  evolve()   wander()  believe()   |
  +-----------------+-----------------+
          |                   |
          v                   v
  +---------------+  +------------------+
  |    Storage    |  | Embedding Engine |
  |               |  |                  |
  |  SQLite       |  |  Ollama          |
  |  Firestore    |  |  Vertex AI       |
  |               |  |  OpenAI          |
  +-------+-------+  +--------+---------+
          |                   |
          v                   v
  +-----------------------------------+
  |            MCP Server             |
  |  27 tools exposed via the         |
  |  Model Context Protocol           |
  +-----------------------------------+

Get started in 30 seconds

One command. Full cognitive layer. No configuration required.