Context Lattice
By Private Memory Corp

System blueprint

Detailed architecture for write fanout, retrieval, and learning.

Context Lattice keeps one orchestrator control plane for all memory operations. Writes are validated, durably queued, and fanned out; reads are federated, reranked, and fed back into the learning schema.

Topology Map

Orchestrator-Centered Runtime

Ingress

  - Chat apps
  - CLI
  - Webhook gateways
  - Batch workers
  - Trading agents

Policy Layer

  - Gateway auth
  - Rate limiter
  - Path + sandbox controls
  - Secret isolation
  - Strict mode policy

Model + Tool Providers

  - Ollama (Qwen)
  - LM Studio
  - OpenAI-compatible
  - Custom URL
  - Letta tools
  - MCP Hub
  - Langfuse
  - MindsDB

Provider trait swaps are config-only; no orchestrator rewrite is required.

Orchestrator Agent Loop

  Message in -> Memory recall context -> LLM call -> Tool execute -> Memory write -> Output

Memory and Retrieval Subsystems

  - Qdrant semantic index (local-first, optional cloud)
  - Mongo raw event ledger (durability + replay source)
  - Memory Bank MCP files (canonical project state)
  - MindsDB SQL analytics and learned grouping
  - Letta archival memory for long-horizon RAG support

Retrieval merges these sources in parallel, then applies a learning-based rerank.

Operations and Safety

  - Outbox queue with retries + dead-letter controls
  - Backpressure and coalescing for burst control
  - Retention sweeps + cold storage handoff
  - Service update pipeline + launchd automation
  - Prometheus metrics, health probes, smoke validations

Goal: keep throughput high without uncontrolled storage growth.
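The orchestrator agent loop (message in, memory recall, LLM call, tool execute, memory write, output) can be sketched as a single function. This is a minimal illustration, not the actual Context Lattice API: `recall`, `llm`, `tools`, and `store` are hypothetical placeholders for the provider and memory subsystems.

```python
def agent_loop(message, recall, llm, tools, store):
    """One pass through the orchestrator loop (illustrative sketch)."""
    context = recall(message)                 # memory recall context
    plan = llm(message, context)              # LLM call with recalled context
    result = None
    if plan.get("tool"):                      # tool execute, if the plan asks
        result = tools[plan["tool"]](plan["args"])
    store(message, plan, result)              # memory write (fanout happens here)
    return plan.get("output", result)         # output back to the ingress caller
```

Because each stage is an injected callable, swapping a model or tool provider only changes what is passed in, which mirrors the config-only provider swap described above.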

One orchestrator spine, explicit sink responsibilities, and built-in reliability controls for sustained write load.

Write lifecycle

Order of operations

  1. Ingress request accepted and normalized.
  2. Raw event persisted to Mongo first for durability.
  3. Memory-bank canonical write queued.
  4. Fanout outbox dispatches Qdrant, MindsDB, Letta, and observability sinks.
  5. Retries and backpressure absorb temporary sink failures.
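The steps above can be sketched as a durable-first write followed by outbox dispatch with per-sink retries and a dead-letter list. All names here (`Outbox`, `write_event`, the sink callables) are illustrative assumptions, not the real implementation.

```python
class Outbox:
    """Fanout dispatcher: retries each sink, dead-letters on exhaustion."""

    def __init__(self, sinks, max_retries=3):
        self.sinks = sinks            # name -> callable(event)
        self.max_retries = max_retries
        self.dead_letter = []         # (sink_name, event) after retries exhausted

    def dispatch(self, event):
        for name, sink in self.sinks.items():
            for _ in range(self.max_retries):
                try:
                    sink(event)
                    break                      # sink accepted the event
                except Exception:
                    continue                   # retry; real code would back off
            else:
                self.dead_letter.append((name, event))  # dead-letter control

def write_event(ledger, outbox, event):
    ledger.append(event)     # step 2: raw event to the durable ledger first
    outbox.dispatch(event)   # step 4: fanout to semantic/analytics sinks
```

The key ordering property is that the ledger append happens before any fanout, so a sink outage never loses the raw event and the ledger remains a replay source.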
Retrieval lifecycle

Order of operations

  1. Orchestrator receives query and embeds intent.
  2. Parallel retrieval from Qdrant, Mongo raw, MindsDB, Letta, memory-bank fallback.
  3. Learning schema reranks merged results using feedback signals.
  4. Context is returned with confidence and source metadata.
  5. Feedback is stored to improve future ranking quality.
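The retrieval steps above can be sketched as parallel store queries merged under a feedback-weighted rerank. The store callables and the weighting formula are illustrative assumptions; the real learning schema is not specified here.

```python
from concurrent.futures import ThreadPoolExecutor

def retrieve(query, stores, feedback_weights):
    """Query all stores in parallel, merge, rerank by learned source weight.

    stores: name -> callable(query) returning [(doc, base_score), ...]
    feedback_weights: name -> multiplier learned from feedback signals
    """
    with ThreadPoolExecutor() as pool:
        futures = {name: pool.submit(fn, query) for name, fn in stores.items()}
        merged = []
        for name, fut in futures.items():
            for doc, score in fut.result():
                merged.append({
                    "doc": doc,
                    # rerank: scale each source's score by its feedback weight
                    "score": score * feedback_weights.get(name, 1.0),
                    "source": name,            # source metadata for the caller
                })
    merged.sort(key=lambda r: r["score"], reverse=True)
    return merged
```

Returning the source name with each hit is what lets step 4 attach confidence and source metadata, and lets stored feedback in step 5 update the per-source weights.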