Stop picking databases. Stop wiring embeddings. Stop managing schema. One command and your agent has storage, semantic search, branching, sync, and auto-summarization — all production-grade, all local-first.
```ts
// zero config — auto-provisions on first run
import clawdb from '@clawdb/sdk'

const db = await clawdb()
await db.memory.remember('user prefers dark mode')
const hits = await db.memory.search('user preferences')
// ✓ 1 result · score: 0.97 · 3ms
```
```python
# zero config — auto-provisions on first run
from clawdb import clawdb

db = await clawdb()
await db.memory.remember("user prefers dark mode")
hits = await db.memory.search("user preferences")
# ✓ 1 result · score: 0.97 · 4ms
```
```rust
use clawdb::prelude::*;

let db = ClawDB::open_default().await?;
let sx = db.session(agent_id, "assistant", scopes).await?;
db.remember(&sx, "user prefers dark mode").await?;
let hits = db.search(&sx, "user preferences").await?;
// ✓ 1 result · score: 0.97 · <1ms cached
```
```sh
# Add ClawDB to Claude Desktop in one command:
npx @clawdb/mcp-adapter --install-claude
# ✓ Restart Claude Desktop — memory is live

# Or for Cursor / VS Code / Continue / Zed:
npx @clawdb/mcp-adapter --install-cursor
npx @clawdb/mcp-adapter --install-vscode
npx @clawdb/mcp-adapter --install-continue
npx @clawdb/mcp-adapter --install-zed

# Tools exposed to every AI editor:
#   clawdb_remember · clawdb_search · clawdb_recall
#   clawdb_branch_fork · clawdb_branch_merge
```
```ts
import {
  ClawDBRetriever,
  ClawDBChatMessageHistory,
  createClawDBTools
} from '@clawdb/langchain'

const retriever = new ClawDBRetriever({ client: db, topK: 10 })
const history = new ClawDBChatMessageHistory({ client: db, sessionId })
const tools = createClawDBTools(db)

// Drop into any LangChain chain / agent:
const chain = RunnableSequence.from([retriever, llm])
// ✓ Persistent memory across all sessions
```
```ts
import { ClawDBPlugin } from '@clawdb/openclaw'

// One line — agent gets a full database
const agent = new OpenClawAgent({ plugins: [ClawDBPlugin()] })

// Auto-injects memory context before each turn.
// Auto-stores user + agent messages after each turn.
// Syncs and closes cleanly on shutdown.
// ✓ Zero additional code required
```
```ts
import { createClawDBAgentTools, withClawDBMemory } from '@clawdb/openai-agents'

// Add explicit tools to any OpenAI agent
const tools = createClawDBAgentTools(db, { enableBranching: true })

// Or wrap the agent for automatic memory
const agent = withClawDBMemory(myAgent, db)
// ✓ Memory in/out on every run, zero boilerplate
```
```go
import clawdb "github.com/Claw-DB/claw-sdk/sdks/go"

db, _ := clawdb.New(clawdb.Options{
    AgentID: "my-agent",
    // Endpoint auto-detected from env
})
defer db.Close()

id, _ := db.Memory.Remember(ctx, "user prefers dark mode", nil)
hits, _ := db.Memory.Search(ctx, "user preferences", nil)
// ✓ 1 result · score: 0.97
```
Three steps. The last one is optional. Most agents only need the first two.
ClawDB detects your environment, picks the right backend, downloads the server binary if needed, and starts it — all in one command.
Import the SDK for your language. The auto-provision function reads env vars, connects to the running server, or starts one — whichever applies.
Get an API key from cloud.clawdb.dev and set CLAWDB_API_KEY. Your agent's memory syncs across devices and persists across deployments.
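The resolution order behind auto-provisioning can be sketched as below. This is a toy illustration, not the SDK's internals: `CLAWDB_URL` and `CLAWDB_API_KEY` come from the docs above, while the helper names and the default port (borrowed from the Rust example) are assumptions.

```python
def resolve_connection(env, server_running):
    """Toy sketch of the assumed auto-provision order:
    explicit env var, then an already-running local server, then spawn one."""
    if "CLAWDB_URL" in env:
        return ("connect", env["CLAWDB_URL"])          # explicit endpoint wins
    if server_running:
        return ("connect", "http://localhost:50050")    # reuse the running server
    return ("spawn", "http://localhost:50050")          # start one, then connect

def sync_enabled(env):
    # Cloud sync (step 3) is just one more env var on top.
    return "CLAWDB_API_KEY" in env

print(resolve_connection({"CLAWDB_URL": "http://db:50050"}, False))
# ('connect', 'http://db:50050')
```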
The same agent memory loop. Before and after.
```txt
# requirements.txt keeps growing
sqlalchemy
psycopg2-binary
pinecone-client
sentence-transformers
redis
celery
custom-auth-lib
your-memory-module
```

```python
# app.py — 200 lines of glue
embed = model.encode(text)
pinecone.upsert(id, embed)
db.execute("INSERT INTO memories...")
redis.set(f"cache:{id}", ...)
if not check_perms(user, resource):
    raise PermissionDenied
celery.send_task('summarize', ...)
# cross fingers and deploy
```
```txt
# requirements.txt
clawdb
```

```python
# app.py — done
from clawdb import clawdb

db = await clawdb()

# store + embed + index + cache: one call
await db.memory.remember(content)

# guard runs implicitly, every op
hits = await db.memory.search(query)

# branch, simulate, merge
b = await db.branch.fork("trial")
await db.branch.merge(b, trunk)

# auto-summarization in background
await db.reflect()
# ✓ 6 engines · 1 process · 0 config
```
Native adapters for every major AI framework and tool. One database, everywhere.
```ts
import {
  ClawDBRetriever,
  ClawDBChatMessageHistory,
  createClawDBTools
} from '@clawdb/langchain'

// Drop-in retriever for any RAG pipeline
const retriever = new ClawDBRetriever({ client: db, topK: 10 })

// Persistent chat history across sessions
const history = new ClawDBChatMessageHistory({ client: db, sessionId })

// Native tools the agent can call
const tools = createClawDBTools(db)
// remember_memory · search_memory · recall_memory
```
```ts
import { createClawDBAgentTools, withClawDBMemory } from '@clawdb/openai-agents'

// Add tools to any OpenAI agent
const tools = createClawDBAgentTools(db, { enableBranching: true })

// Or wrap the whole agent — auto memory in/out
const agent = withClawDBMemory(myAgent, db)
// Injects top memories before each run.
// Stores user + assistant turns after.
```
```ts
import { clawdbTools, clawdbMiddleware } from '@clawdb/vercel-ai'

// Vercel AI tools — generateText / streamText
const result = await generateText({ model, tools: clawdbTools(db), prompt })

// Or automatic middleware — zero agent code changes
const wrappedModel = wrapLanguageModel({
  model: openai('gpt-4o'),
  middleware: clawdbMiddleware(db)
})
// ✓ Memory injected before · stored after every call
```
```ts
import { clawdbTools, handleClawDBToolCall, withClawDBMemory } from '@clawdb/anthropic'

// Native Anthropic tool format for Claude
const response = await anthropic.messages.create({
  model: 'claude-sonnet-4-20250514',
  tools: clawdbTools(db),
  messages
})

// Handle tool use blocks in one line
const result = await handleClawDBToolCall(db, response)
// ✓ clawdb_remember · clawdb_search · clawdb_recall
```
```sh
# One-command install for every AI editor:
npx @clawdb/mcp-adapter --install-claude     # Claude Desktop
npx @clawdb/mcp-adapter --install-cursor     # Cursor
npx @clawdb/mcp-adapter --install-vscode     # VS Code + Copilot
npx @clawdb/mcp-adapter --install-continue   # Continue.dev
npx @clawdb/mcp-adapter --install-zed        # Zed

# Tools exposed to all hosts:
#   clawdb_remember · clawdb_search · clawdb_recall
#   clawdb_branch_fork · clawdb_branch_merge · clawdb_status
```
```ts
import { ClawDBPlugin, withClawDB } from '@clawdb/openclaw'

// Plugin API — one line gives the agent a full database
const agent = new OpenClawAgent({
  plugins: [ClawDBPlugin({
    autoStore: true,       // store every exchange
    autoSearch: true,      // inject memory context
    topK: 3,               // memories per turn
    syncOnShutdown: true   // persist to cloud
  })]
})

// Or HOC API:
const wrapped = withClawDB(new OpenClawAgent({...}))
// ✓ Zero boilerplate. Fully automatic.
```
```ts
import { clawdbTools, handleClawDBFunctionCall, withClawDBMemory } from '@clawdb/google-genai'

// Google GenAI FunctionDeclarations
const model = genAI.getGenerativeModel({
  model: 'gemini-2.0-flash',
  tools: [{ functionDeclarations: clawdbTools(db) }]
})

// Handle function calls
const result = await handleClawDBFunctionCall(db, call)

// Or auto-memory wrapper
const memoryModel = withClawDBMemory(generativeModel, db)
// ✓ Works with Gemini 1.5, 2.0, and future models
```
```sh
# Works in any MCP-compatible editor — auto-detects config path

# Cursor (~/.cursor/mcp.json):
npx @clawdb/mcp-adapter --install-cursor

# Zed (~/.config/zed/settings.json):
npx @clawdb/mcp-adapter --install-zed

# Print config for any host to paste manually:
npx @clawdb/mcp-adapter --print-config --host cursor
npx @clawdb/mcp-adapter --print-config --host vscode
# ✓ Your coding agent now has persistent project memory
```
```go
// go get github.com/Claw-DB/claw-sdk/sdks/go
import clawdb "github.com/Claw-DB/claw-sdk/sdks/go"

db, _ := clawdb.New(clawdb.Options{
    AgentID: "my-agent",
    // Endpoint: auto-detected from CLAWDB_URL env var
})
defer db.Close()

id, _ := db.Memory.Remember(ctx, "deploy scheduled", nil)
hits, _ := db.Memory.Search(ctx, "deployment", &clawdb.SearchOptions{TopK: 5})
// ✓ Full parity with TypeScript and Python SDKs
```
```rust
// Cargo.toml: clawdb-client = "0.1"
use clawdb_client::ClawDBClient;

let db = ClawDBClient::auto_provision().await?;
// or:
let db = ClawDBClient::builder()
    .endpoint("http://localhost:50050")
    .agent_id("my-agent")
    .build().await?;

let id = db.memory().remember("deploy scheduled").await?;
let hits = db.memory().search("deployment").top_k(5).call().await?;
// ✓ Builder pattern · tokio async · full type safety
```
Each engine is a published Rust crate. Use them standalone or as the unified ClawDB runtime.
Microsecond-latency embedded SQLite engine. Zero network dependency. The agent's working memory.
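"Embedded, zero network" is the same model as Python's stdlib `sqlite3`: the database lives in your process, so a read is a function call, not a round-trip. This is plain SQLite for illustration, not the engine's own API.

```python
import sqlite3

# In-process database: no server, no network dependency.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE memories (id INTEGER PRIMARY KEY, content TEXT)")
db.execute("INSERT INTO memories (content) VALUES (?)",
           ("user prefers dark mode",))
row = db.execute("SELECT content FROM memories").fetchone()
print(row[0])  # user prefers dark mode
```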
HNSW + flat indexing, hybrid ANN + keyword retrieval. Local or hosted embedding models.
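Hybrid retrieval blends a vector-similarity signal with exact keyword overlap. A toy scoring sketch of that idea (the weighting, and the `alpha` parameter, are illustrative assumptions, not the engine's actual ranking formula):

```python
import math

def cosine(a, b):
    # similarity between two embedding vectors
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def keyword_overlap(query, doc):
    # fraction of query terms that appear verbatim in the document
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / len(q)

def hybrid_score(q_vec, d_vec, query, doc, alpha=0.7):
    # alpha weights the ANN signal against exact keyword matches
    return alpha * cosine(q_vec, d_vec) + (1 - alpha) * keyword_overlap(query, doc)

print(keyword_overlap("user preferences", "user prefers dark mode"))  # 0.5
```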
Intent-aware access control. Decisions evaluate task, role, scope, and risk — not just identity.
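A toy model of what "not just identity" means: the decision function looks at the request's scope, role, and a risk score together. The field names and thresholds here are hypothetical; the real policy language is not shown on this page.

```python
# Toy intent-aware decision: the request carries task, role, scope,
# and a risk score, and any one of them can block the operation.
def decide(request, policy):
    if request["scope"] not in policy["allowed_scopes"]:
        return "deny"
    if request["role"] not in policy["allowed_roles"]:
        return "deny"
    if request["risk"] > policy["max_risk"]:
        return "escalate"  # high-risk ops need review even for valid roles
    return "allow"

policy = {
    "allowed_scopes": {"memory:read", "memory:write"},
    "allowed_roles": {"assistant"},
    "max_risk": 0.5,
}

print(decide({"task": "recall prefs", "role": "assistant",
              "scope": "memory:read", "risk": 0.1}, policy))  # allow
```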
Fork database state, run experiments in isolation, merge what worked. Agents go deliberate.
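The fork/merge semantics can be sketched with plain dictionaries: a fork gets its own copy of state, the trial mutates in isolation, and merge folds accepted changes back. This is a toy model; a real engine forks at the storage layer rather than by deep copy, and merge policy is more than last-writer-wins.

```python
from copy import deepcopy

def fork(trunk):
    # branch starts as an isolated copy of trunk state
    return deepcopy(trunk)

def merge(branch, trunk):
    # fold the branch's changes back (last-writer-wins for the sketch)
    trunk.update(branch)
    return trunk

trunk = {"plan": "v1"}
trial = fork(trunk)
trial["plan"] = "v2"           # experiment in isolation
assert trunk["plan"] == "v1"   # trunk untouched while the trial runs
merge(trial, trunk)
print(trunk["plan"])  # v2
```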
Scheduled distillation: summarizes sessions, extracts facts, decays stale memory. Gets sharper.
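One way to picture memory decay is a half-life curve: relevance halves every fixed interval since last access, so stale entries fade while recently used ones stay retrievable. The formula and the 30-day half-life are illustrative assumptions, and the real scheduler also summarizes sessions and extracts facts.

```python
def decayed_score(base_score, days_since_access, half_life=30.0):
    # relevance halves every `half_life` days without access
    return base_score * 0.5 ** (days_since_access / half_life)

print(round(decayed_score(1.0, 30), 3))   # 0.5   (one half-life)
print(round(decayed_score(1.0, 90), 3))   # 0.125 (three half-lives)
```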
CRDT replication with HLC ordering. Local-first, offline-capable, resumable, audit-chained.
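HLC ordering means timestamps are (logical time, counter) pairs that preserve causal order even when devices' wall clocks skew. Below is a minimal sketch of the standard HLC update rules, not the sync engine's actual implementation.

```python
class HLC:
    """Toy hybrid logical clock: state is (logical time l, counter c)."""

    def __init__(self):
        self.l, self.c = 0, 0

    def tick(self, physical_now):
        # local or send event: advance logical time to at least wall time
        l_new = max(self.l, physical_now)
        self.c = self.c + 1 if l_new == self.l else 0
        self.l = l_new
        return (self.l, self.c)

    def recv(self, remote, physical_now):
        # merge a remote timestamp so causality is preserved
        rl, rc = remote
        l_new = max(self.l, rl, physical_now)
        if l_new == self.l == rl:
            self.c = max(self.c, rc) + 1
        elif l_new == self.l:
            self.c += 1
        elif l_new == rl:
            self.c = rc + 1
        else:
            self.c = 0
        self.l = l_new
        return (self.l, self.c)

clock = HLC()
t1 = clock.tick(100)
t2 = clock.recv((100, 5), 99)  # remote event arrives; local wall clock lags
assert t2 > t1                 # ordering preserved despite clock skew
```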
TypeScript, Python, Rust, Go clients. Plus adapters for every major AI framework.
The unified aggregate runtime. All engines behind a single interface. Auto-provisions the right stack for your environment.
| Capability | Postgres | Pinecone | LangChain Mem | Mem0 | ClawDB |
|---|---|---|---|---|---|
| Local-first embedded: works offline, no network required | — | — | ± | — | ✓ |
| Native semantic search: built-in, no external vector DB | ± | ✓ | ± | ✓ | ✓ |
| Zero-config provisioning: auto-selects backend for environment | — | — | — | — | ✓ |
| Branch & simulate: fork state, experiment, merge results | — | — | — | — | ✓ |
| Auto-summarization: memory improves over time automatically | — | — | — | ± | ✓ |
| Intent-aware policy: access control that knows the agent's task | — | — | — | — | ✓ |
| Encrypted CRDT sync: multi-device, offline-first, auditable | — | — | — | — | ✓ |
| Framework adapters: LangChain, OpenAI, Vercel AI, MCP, Claude | — | ± | ✓ | ± | ✓ |
| Open source: Apache-2.0, self-hostable | ✓ | — | ✓ | ± | ✓ |
✓ full · ± partial · — not supported · based on publicly documented capabilities.
Start free. Pay only when you scale. The core engine is always open source.
Self-host the full engine. Perfect for side projects, prototypes, and solo agents.
Cloud sync for solo developers and small agents in production.
Production workloads with higher scale, team collaboration, and priority support.
Negotiated limits, dedicated infra, compliance, and premium SLAs for large organisations.
All plans include the full open-source engine. Cloud plans add managed hosting, sync, and support. View full feature comparison →
Built in public. Apache-2.0 forever. Cloud is the optional layer on top.
The fastest path from "I need memory" to "it works in production." Join early access or start building right now.
Apache-2.0 · Open source · YC S26 · No spam, unsubscribe any time