Context Lattice
By Private Memory Corp
Guide 3

Integration Guide

Connect the ChatGPT and Claude chat apps (desktop and web), plus Claude Code, Codex, and OpenClaw/ZeroClaw/IronClaw to Context Lattice once your stack is running locally.

Quick Path

Fastest route to a working integration

  1. Bring the stack up and verify /health and authenticated /status.
  2. Run the write/search smoke test to confirm end-to-end memory behavior.
  3. Set agent read-call timeout to match retrieval mode (fast: 25s, balanced: 60s, deep: 75s).
  4. Paste the human operator instruction block into your agent session.
  5. Map your client/tooling surface to /memory/write, /memory/search, and /tools/feedback_submit.
  6. Add HTTP/messaging app interfacing (claw-ready) via /integrations/messaging/* endpoints.
  7. For easy monitoring during integration, run gmake monitor-open (or gmake monitor-check).
Version clarity

Integration target for launch

  • Use this for production/public integration: v3.2 lane on http://127.0.0.1:8075.
  • Current public retrieval baseline: staged fast-return with async slow-source continuation and icm_spike memory-bank default.
  • v4 status: private tuning lane; do not treat v4-only experiments as public contract until promoted.
Prerequisite

Bring stack up first

Required

Use the launch mode you need, then validate core health before integrating any client.

BOOTSTRAP=1 scripts/first_run.sh
ORCH_KEY="$(awk -F= '/^CONTEXTLATTICE_ORCHESTRATOR_API_KEY=/{print substr($0,index($0,"=")+1)}' .env)"

# choose mode if needed
gmake mem-up
# gmake mem-up-lite
# gmake mem-up-full

curl -fsS http://127.0.0.1:8075/health | jq
curl -fsS -H "x-api-key: ${ORCH_KEY}" http://127.0.0.1:8075/status | jq
  • Orchestrator API: http://127.0.0.1:8075
  • MCP hub memory endpoint: http://127.0.0.1:53130/memorymcp/mcp
  • MCP hub qdrant endpoint: http://127.0.0.1:53130/qdrant/mcp
Quick wiring

5-minute integration smoke test

Required

Run this once to confirm your app can write and retrieve memory through Context Lattice before wiring UI-specific settings.

ORCH_KEY="$(awk -F= '/^CONTEXTLATTICE_ORCHESTRATOR_API_KEY=/{print substr($0,index($0,"=")+1)}' .env)"

curl -fsS -H "content-type: application/json" -H "x-api-key: ${ORCH_KEY}" \
  -d '{"projectName":"_global","fileName":"smoke/integration_check.md","content":"integration smoke ok"}' \
  http://127.0.0.1:8075/memory/write | jq

curl -fsS -H "content-type: application/json" -H "x-api-key: ${ORCH_KEY}" \
  -d '{"query":"integration smoke ok","limit":3}' \
  http://127.0.0.1:8075/memory/search | jq
  • If both calls return ok: true, your app can safely integrate.
  • If you get a 401, verify CONTEXTLATTICE_ORCHESTRATOR_API_KEY and restart the caller process.
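The same smoke test can run from Python if your client is not shell-based. This sketch mirrors the curl payloads above exactly; the `post_json` helper is a generic wrapper, not part of Context Lattice:

```python
import json
import urllib.request

ORCH = "http://127.0.0.1:8075"

def post_json(path: str, body: dict, api_key: str) -> dict:
    """POST a JSON body to the orchestrator and decode the response."""
    req = urllib.request.Request(
        ORCH + path,
        data=json.dumps(body).encode("utf-8"),
        headers={"content-type": "application/json", "x-api-key": api_key},
    )
    with urllib.request.urlopen(req, timeout=60) as resp:
        return json.load(resp)

def smoke_bodies() -> tuple[dict, dict]:
    """The same write/search payloads as the curl smoke test above."""
    write = {
        "projectName": "_global",
        "fileName": "smoke/integration_check.md",
        "content": "integration smoke ok",
    }
    search = {"query": "integration smoke ok", "limit": 3}
    return write, search

# Usage (requires the stack running locally):
# write_body, search_body = smoke_bodies()
# assert post_json("/memory/write", write_body, ORCH_KEY).get("ok")
# assert post_json("/memory/search", search_body, ORCH_KEY).get("ok")
```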
Human Operator Guide

Paste this instruction block into your agent chat

Required

Use this as your first message when starting a new ChatGPT/Claude/Codex/Claude Code session so the agent reliably uses Context Lattice.

You must use Context Lattice as the memory/context layer.

Runtime:
- Orchestrator: http://127.0.0.1:8075
- API key: CONTEXTLATTICE_ORCHESTRATOR_API_KEY from my local .env

Required behavior:
1) Before planning, call POST /memory/search with a compact query and project/topic filters.
2) During long tasks, checkpoint major decisions and outcomes with POST /memory/write.
3) Submit quality feedback with POST /tools/feedback_submit (use idempotencyKey).
4) Before final answer, run one more POST /memory/search for recency.
5) Keep writes compact (summaries, decisions, diffs), never dump full transcripts.
6) If /memory/* fails, continue task and report degraded memory mode explicitly.
  • Best practice: include projectName, fileName, and topic path on every write.
  • Timeout guidance: fast reads 25s, balanced reads 60s, deep/slow-source reads 75s.
  • Cache behavior: first deep read may be slow; retrying the same query often returns faster after staged fetch + cache warm.
  • Quality loop: ask user for a short “context quality” rating after key outputs, then write that feedback.
  • Goal: resolve an agentic issue once and persist reusable context for the team/organization.
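The timeout guidance and compact-write best practices above can be encoded as small helpers. A sketch; folding the topic path into fileName is an assumption, so adjust if your deployment accepts a separate topic field:

```python
# Read-call timeouts from the guidance above (seconds).
READ_TIMEOUTS = {"fast": 25, "balanced": 60, "deep": 75}

def read_timeout(mode: str) -> int:
    """Pick the read timeout for a retrieval mode; default to balanced."""
    return READ_TIMEOUTS.get(mode, READ_TIMEOUTS["balanced"])

def checkpoint_payload(project: str, topic: str, summary: str,
                       max_chars: int = 2000) -> dict:
    """Build a compact /memory/write body for a decision checkpoint.

    The topic path is folded into fileName (an assumption of this sketch);
    the character cap enforces "summaries, never full transcripts".
    """
    return {
        "projectName": project,
        "fileName": f"{topic}/checkpoint.md",
        "content": summary[:max_chars],  # keep writes compact
    }
```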
Profile-aware preflight

Deep integration for Codex, Claude Code, OpenCode, Hermes, ChatGPT, Claude

Required

Use profile-aware preflight so each agent gets a stable agent_id, topic scope, and query defaults before work begins.

# generic profile-aware preflight
curl -fsS -H "content-type: application/json" -H "x-api-key: ${ORCH_KEY}" \
  -d '{"agent":"claude-code","project":"contextlattice"}' \
  http://127.0.0.1:8075/v1/agents/preflight | jq

# compatibility alias (codex)
curl -fsS -H "content-type: application/json" -H "x-api-key: ${ORCH_KEY}" \
  -d '{"project":"contextlattice"}' \
  http://127.0.0.1:8075/v1/codex/preflight | jq

# local helper script
python3 scripts/agent_orchestration.py preflight-agent claude-code contextlattice
python3 scripts/agent_orchestration.py preflight-agent opencode contextlattice
python3 scripts/agent_orchestration.py preflight-agent hermes-agent contextlattice
  • Profiles supported: codex, claude-code, opencode, hermes-agent, chatgpt-web, chatgpt-desktop, claude-web, claude-desktop.
  • Template path: docs/public_overview/templates/agents/ (copy-ready prompts and setup blocks).
  • Endpoint remains pinned: http://127.0.0.1:8075.
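For clients that call the preflight endpoint directly, a small body builder can reject typos before they hit the API. A sketch using the profile list and request shape shown above:

```python
# Profiles listed in this guide.
SUPPORTED_PROFILES = {
    "codex", "claude-code", "opencode", "hermes-agent",
    "chatgpt-web", "chatgpt-desktop", "claude-web", "claude-desktop",
}

def preflight_body(agent: str, project: str) -> dict:
    """Build the POST /v1/agents/preflight request body, rejecting unknown profiles."""
    if agent not in SUPPORTED_PROFILES:
        raise ValueError(f"unknown agent profile: {agent}")
    return {"agent": agent, "project": project}
```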
Messaging Surface

HTTP/messaging app interfacing (claw-ready)

Recommended

Context Lattice now supports channel command intake via orchestrator-native endpoints. The default handle is @ContextLattice.

POST /integrations/messaging/command
POST /integrations/messaging/openclaw
POST /integrations/messaging/ironclaw
POST /integrations/telegram/webhook
POST /integrations/slack/events

@ContextLattice remember deployment complete
@ContextLattice recall deployment
@ContextLattice status
Advanced local command endpoint test
# local direct test (secure default requires x-api-key)
curl -fsS -H "content-type: application/json" -H "x-api-key: ${ORCH_KEY}" \
  -d '{"channel":"openclaw","source_id":"chat-1","text":"@ContextLattice status"}' \
  http://127.0.0.1:8075/integrations/messaging/command | jq
  • BYO accounts: Telegram/Slack credentials stay in your own account.
  • Project routing: commands can include project=<name> and topic=<path> directives.
  • Default behavior: OpenClaw/ZeroClaw route directly; IronClaw is optional and feature-flagged.
Client Integrations

ChatGPT app, Claude chat apps, Claude Code, Codex

Required

ChatGPT apps (desktop + web)

For normal ChatGPT user apps or API-driven GPT clients, use Context Lattice as the memory sidecar and call orchestrator endpoints around message processing.

  • Persist memory on key state changes: POST /memory/write
  • Retrieve context before response generation: POST /memory/search
  • Submit outcome feedback for learning/rerank: POST /tools/feedback_submit
  • Persist browser snapshots for agent-visible pages: POST /memory/browser-context
  • Inspect active runner/tool contracts: GET /ops/capabilities
  • Refresh saved recall evaluation cases from live retrieval pathways: POST /memory/recall/eval-cases/refresh
  • Check runtime health: GET /health and GET /status with x-api-key
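The sidecar pattern above can be wrapped in a thin client. A sketch covering the three core calls; feedback body fields beyond idempotencyKey are not specified in this guide, so pass whatever your deployment expects:

```python
import json
import urllib.request
import uuid

def with_idempotency(body: dict) -> dict:
    """Ensure a feedback body carries an idempotencyKey (kept if already set)."""
    body.setdefault("idempotencyKey", str(uuid.uuid4()))
    return body

class LatticeSidecar:
    """Thin client for the orchestrator calls listed above (sketch only)."""

    def __init__(self, base: str = "http://127.0.0.1:8075", api_key: str = ""):
        self.base = base
        self.api_key = api_key

    def _post(self, path: str, body: dict) -> dict:
        req = urllib.request.Request(
            self.base + path,
            data=json.dumps(body).encode("utf-8"),
            headers={"content-type": "application/json", "x-api-key": self.api_key},
        )
        with urllib.request.urlopen(req, timeout=60) as resp:
            return json.load(resp)

    def recall(self, query: str, limit: int = 3) -> dict:
        # Retrieve context before response generation.
        return self._post("/memory/search", {"query": query, "limit": limit})

    def remember(self, project: str, file_name: str, content: str) -> dict:
        # Persist memory on key state changes.
        return self._post("/memory/write", {
            "projectName": project, "fileName": file_name, "content": content,
        })

    def feedback(self, body: dict) -> dict:
        # Submit outcome feedback; idempotencyKey guards against duplicate submits.
        return self._post("/tools/feedback_submit", with_idempotency(body))
```

Call `recall` before generating a response and `remember` after key state changes, matching the bullet list above.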

Claude chat apps (desktop + web)

Use desktop or browser Claude chat apps through MCP-compatible clients against the local MCP hub endpoint, then route high-value summaries through orchestrator writes.

  • MCP server URL: http://127.0.0.1:53130/memorymcp/mcp
  • Keep write payloads compact; avoid dumping full transcripts.
  • Use topic paths so retrieval stays scoped and fast.

Claude Code + Codex

Point your coding agent runtime at the same local stack and treat memory writes as explicit checkpoints.

export CONTEXTLATTICE_ORCHESTRATOR_URL=http://127.0.0.1:8075
export CONTEXTLATTICE_HTTP_URL=http://127.0.0.1:59081/mcp
export MCP_HUB_URL=http://127.0.0.1:53130/memorymcp/mcp
export CONTEXTLATTICE_ORCHESTRATOR_API_KEY="$(awk -F= '/^CONTEXTLATTICE_ORCHESTRATOR_API_KEY=/{print substr($0,index($0,"=")+1)}' .env)"

Pattern: write summaries after meaningful edits, fetch retrieval context before planning or review actions.

OpenClaw / ZeroClaw / IronClaw

Trait mapping and wiring

Recommended

Map OpenClaw/ZeroClaw/IronClaw memory traits directly to Context Lattice endpoints. Keep orchestrator as the single memory control plane.

Recommended mapping

  • memory_recall_ctx → POST /memory/search
  • memory_save_store → POST /memory/write
  • memory_feedback_submit → POST /tools/feedback_submit
  • messenger command hook → POST /integrations/messaging/openclaw
  • ironclaw command hook → POST /integrations/messaging/ironclaw
  • healthbeat → GET /health and GET /status with x-api-key
  • tools_exec → MCP hub /memorymcp/mcp endpoint
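In code, the mapping above becomes a simple lookup table. A sketch; the underscored trait identifiers are an assumed naming convention for the hooks listed with spaces:

```python
# Trait-to-endpoint routes from the recommended mapping above.
TRAIT_ROUTES = {
    "memory_recall_ctx": ("POST", "/memory/search"),
    "memory_save_store": ("POST", "/memory/write"),
    "memory_feedback_submit": ("POST", "/tools/feedback_submit"),
    "messenger_command_hook": ("POST", "/integrations/messaging/openclaw"),
    "ironclaw_command_hook": ("POST", "/integrations/messaging/ironclaw"),
    "healthbeat": ("GET", "/health"),
}

def route_for(trait: str) -> tuple[str, str]:
    """Resolve a claw memory trait to its orchestrator method and path."""
    try:
        return TRAIT_ROUTES[trait]
    except KeyError:
        raise ValueError(f"unmapped trait: {trait}") from None
```

Keeping one table like this preserves the orchestrator as the single memory control plane.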

Operational defaults

  • Keep Qdrant local-first with gRPC preferred.
  • Use BYO cloud keys only when explicitly enabled.
  • Preserve orchestrator fanout/backpressure defaults before aggressive tuning.
  • Strict security mode redacts and blocks suspected secrets on OpenClaw/ZeroClaw/IronClaw routes.
Web 3 ready

IronClaw compatibility mode

Advanced
Optional IronClaw compatibility mode

Context Lattice can expose an IronClaw-compatible command surface while keeping local-first orchestration and your existing sink stack unchanged.

# enable IronClaw bridge
IRONCLAW_INTEGRATION_ENABLED=true
IRONCLAW_DEFAULT_PROJECT=messaging

# keep strict secret protections on claw surfaces
MESSAGING_OPENCLAW_STRICT_SECURITY=true
  • Endpoint: POST /integrations/messaging/ironclaw
  • Security: suspected credentials are blocked on write and redacted in returned text/results.
  • Documentation fit: IronClaw's deep docs and ASCII architecture style map cleanly to this mode.
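The strict-security behavior above (block on write, redact in returned text) can be illustrated client-side. The real detection rules are internal to Context Lattice; these patterns are purely illustrative:

```python
import re

# Illustrative patterns only; the orchestrator's strict-security rules are internal.
SECRET_PATTERNS = [
    re.compile(r"(?i)\bapi[_-]?key\s*[=:]\s*\S+"),
    re.compile(r"(?i)\bbearer\s+[A-Za-z0-9._-]+"),
]

def redact(text: str) -> str:
    """Replace suspected credentials with a placeholder before write/return."""
    for pattern in SECRET_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text
```

Running a pass like this before posting to claw routes avoids tripping the server-side block in the first place.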