AI Agent Memory Platform

The memory engine
for AI agents

Persistent, searchable, semantic memory for AI agents — so they learn, adapt, and improve across every session. Built in Rust. Single binary.

Single binary Zero dependencies Built in Rust
# Store agent memory
POST /v1/memory/store
{
  "agent_id": "assistant-1",
  "text": "User prefers TypeScript",
  "memory_type": "semantic",
  "importance": 0.9
}

# Recall by meaning
POST /v1/memory/recall
{
  "query": "language preferences",
  "top_k": 5
}

# → Result
{ "score": 0.97, "text": "User prefers TypeScript" }
45 MCP tools
4 Native SDKs
1 Single binary
The problem

Your agents have amnesia

Every session starts from zero. Every mistake repeated. Every preference forgotten. Context windows aren't memory.

01
No cross-session context
Each conversation is isolated. Your agent can't build on previous interactions.
02
Knowledge doesn't accumulate
Insights from thousands of interactions vanish. Your agent never gets smarter.
03
Context window is not memory
Stuffing prompts with history is expensive, fragile, and hits a hard ceiling.
$ agent.recall("user preferences")
Error: No memory found.
Context window empty.
$ agent.session_count
→ 1,847 sessions
→ 0 memories persisted
$ agent.context_cost
→ $4,200/mo stuffing prompts
→ 0 knowledge retained
Capabilities

Everything your agents need

Agent memory, hybrid search, built-in embeddings, knowledge graphs, 45 MCP tools, and a 34-page admin dashboard. One binary.

Vector + Hybrid Search
Cosine, euclidean, dot product with IVF, HNSW, IVFPQ indexes. BM25 full-text. Tunable hybrid weights.
Agent Memory
Store, recall, consolidate, forget. Episodic, semantic, procedural, and working memory with importance scoring.
Built-in Inference
Auto-embed text on upsert and query. No external service needed. HuggingFace models built in.
MCP Native
45 tools for any MCP-compatible agent. Claude, Cursor, Windsurf. Memory, search, knowledge — all as tools.
Knowledge Graph
Auto-construct from stored memories. Similarity edges, cluster summarization, deduplication.
Dashboard & CLI
34-page Leptos/WASM admin dashboard. Full-featured dk CLI. Query builder, batch ops, analytics.
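The "tunable hybrid weights" idea can be illustrated with a toy score combiner: blend a lexical (BM25) score with a vector (cosine) score using a single weight. This is a generic sketch of the technique, not Dakera's actual API; the function names and the normalization here are illustrative assumptions.

```python
import math

def cosine(a, b):
    # Cosine similarity between two dense vectors
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def hybrid_score(bm25, vector_sim, alpha=0.5):
    # alpha = 1.0 -> pure lexical ranking, alpha = 0.0 -> pure vector ranking
    return alpha * bm25 + (1 - alpha) * vector_sim

sim = cosine([1.0, 0.0], [0.7, 0.7])
score = hybrid_score(bm25=0.8, vector_sim=sim, alpha=0.4)
```

Sliding `alpha` toward 1.0 favors exact keyword matches; toward 0.0 it favors semantic similarity.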
SDKs

Your language.
Your way.

Native SDKs for Python, TypeScript, Go, and Rust. Plus REST and gRPC APIs.

Store & Recall
Semantic memory with importance scoring
Session Lifecycle
Context survives across conversations
Multi-Agent
Per-agent namespaces, hundreds of agents
from dakera import DakeraClient

client = DakeraClient("http://localhost:3000")

# Store agent memory
client.memory_store(
    agent_id="assistant-1",
    text="User prefers TypeScript",
    memory_type="semantic",
    importance=0.9
)

# Recall by meaning
memories = client.memory_recall(
    agent_id="assistant-1",
    query="language preferences",
    top_k=5
)
import { DakeraClient } from "dakera"

const client = new DakeraClient("http://localhost:3000")

await client.memoryStore({
  agentId: "assistant-1",
  text: "User prefers TypeScript",
  memoryType: "semantic",
  importance: 0.9
})

const memories = await client.memoryRecall({
  agentId: "assistant-1",
  query: "language preferences",
  topK: 5
})
import (
    "context"

    "github.com/dakera/dakera"
)

ctx := context.Background()
client := dakera.NewClient("http://localhost:3000")

client.MemoryStore(ctx, &dakera.Memory{
    AgentID:    "assistant-1",
    Text:       "User prefers TypeScript",
    MemoryType: "semantic",
    Importance: 0.9,
})

memories, _ := client.MemoryRecall(ctx, "assistant-1", "language preferences", 5)
use dakera_client::{DakeraClient, Memory};

let client = DakeraClient::new("http://localhost:3000");

// Store agent memory
client.memory_store(&Memory {
    agent_id: "assistant-1".into(),
    text: "User prefers TypeScript".into(),
    memory_type: "semantic".into(),
    importance: 0.9,
    ..Default::default()
}).await?;

// Recall by meaning
let memories = client
    .memory_recall("assistant-1", "language preferences", 5)
    .await?;
# Store memory
curl -X POST localhost:3000/v1/memory/store \
  -H "Content-Type: application/json" \
  -d '{"agent_id":"assistant-1","text":"User prefers TS","importance":0.9}'

# Recall
curl -X POST localhost:3000/v1/memory/recall \
  -d '{"agent_id":"assistant-1","query":"language preferences","top_k":5}'

# Text search with auto-embedding
curl -X POST localhost:3000/v1/namespaces/docs/query-text \
  -d '{"text":"semantic search systems","top_k":5}'
Architecture

Batteries included

Nine Rust crates. One binary. No external dependencies.

Client
REST API
gRPC
MCP Server
Dashboard
CLI
Middleware
Auth & API Keys
Rate Limiting
Audit Log
Prometheus
Engine
Inference
Vector Search
BM25
Agent Memory
Knowledge Graph
Clustering
Storage
In-Memory
Filesystem
S3 / MinIO
How it works

Three steps. Infinite memory.

01
Store
Every conversation, decision, and outcome is stored with semantic embeddings and importance scores.
02
Recall
At the start of each session, Dakera surfaces the most relevant memories via hybrid search. Context that matters.
03
Learn
Merge overlapping memories, surface patterns, and build genuine knowledge over time.
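The three steps above can be sketched as a pure-Python toy loop: store memories with importance scores, recall by crude relevance, and consolidate duplicates. This is an illustrative stand-in for the cycle, not Dakera itself, which does this server-side with real embeddings and hybrid search; all names here are hypothetical.

```python
from collections import Counter

store = []  # each memory: {"text": ..., "importance": ...}

def memory_store(text, importance):
    store.append({"text": text, "importance": importance})

def memory_recall(query, top_k=5):
    # Crude relevance: word overlap weighted by importance
    q = Counter(query.lower().split())
    def score(m):
        overlap = sum((Counter(m["text"].lower().split()) & q).values())
        return overlap * m["importance"]
    return sorted(store, key=score, reverse=True)[:top_k]

def consolidate():
    # "Learn": merge duplicate memories, keeping the highest importance
    merged = {}
    for m in store:
        if m["text"] not in merged or m["importance"] > merged[m["text"]]["importance"]:
            merged[m["text"]] = m
    store[:] = list(merged.values())

memory_store("User prefers TypeScript", 0.9)
memory_store("User prefers TypeScript", 0.5)
memory_store("Deploy runs on Fridays", 0.4)
consolidate()
top = memory_recall("language preferences typescript", top_k=1)
# top[0]["text"] -> "User prefers TypeScript"
```

The point is the shape of the loop: every store call enriches the pool, every recall ranks it, and consolidation keeps it from growing without bound.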

Give your agents
a memory

The memory engine purpose-built for AI agents. No vendor lock-in. One binary.

Built in Rust · Single binary · Zero dependencies · Production ready