Human Operator Guide
Paste this instruction block into your agent chat
Required: use this as your first message when starting a new ChatGPT/Claude/Codex/Claude Code session so the agent reliably uses Context Lattice.
You must use Context Lattice as the memory/context layer.
Runtime:
- Orchestrator: http://127.0.0.1:8075
- API key: CONTEXTLATTICE_ORCHESTRATOR_API_KEY from my local .env
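The runtime settings above can be sketched as a minimal client setup. The base URL and the `CONTEXTLATTICE_ORCHESTRATOR_API_KEY` variable name come from this guide; the Bearer scheme and header names are assumptions, not a documented contract.

```python
import os

BASE_URL = "http://127.0.0.1:8075"  # orchestrator address from this guide

def build_headers() -> dict:
    """Read the orchestrator key from the environment (loaded from .env).
    Authorization scheme and Content-Type are assumed, not specified here."""
    api_key = os.environ.get("CONTEXTLATTICE_ORCHESTRATOR_API_KEY", "")
    return {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
```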
Required behavior:
1) Before planning, call POST /memory/search with a compact query and project/topic filters.
2) During long tasks, checkpoint major decisions and outcomes with POST /memory/write.
3) Submit quality feedback with POST /tools/feedback_submit (use idempotencyKey).
4) Before final answer, run one more POST /memory/search for recency.
5) Keep writes compact (summaries, decisions, diffs), never dump full transcripts.
6) If /memory/* fails, continue task and report degraded memory mode explicitly.
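The six steps above can be sketched as a few small helpers. The endpoint paths, the compact-write rule, and the degraded-mode fallback (step 6) come from this guide; all payload field names beyond `projectName`, `fileName`, and `topic` are assumptions about an undocumented schema.

```python
import json
import urllib.request
import urllib.error

BASE_URL = "http://127.0.0.1:8075"

def memory_search_payload(query: str, project: str, topic: str) -> dict:
    """Compact POST /memory/search body with project/topic filters (step 1/4).
    Field names are assumptions."""
    return {"query": query, "filters": {"projectName": project, "topic": topic}}

def memory_write_payload(summary: str, project: str, file_name: str, topic: str) -> dict:
    """Compact POST /memory/write checkpoint (steps 2 and 5): a summary or
    decision, never a full transcript."""
    return {
        "content": summary,
        "projectName": project,
        "fileName": file_name,
        "topic": topic,
    }

def post(path: str, payload: dict, headers: dict, timeout: float = 25.0):
    """POST helper that returns None instead of raising, so the agent can
    continue the task and report degraded memory mode (step 6)."""
    req = urllib.request.Request(
        BASE_URL + path,
        data=json.dumps(payload).encode(),
        headers=headers,
        method="POST",
    )
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return json.load(resp)
    except (urllib.error.URLError, TimeoutError):
        return None  # degraded memory mode: caller proceeds without memory
```

A caller would run `memory_search_payload` before planning, checkpoint with `memory_write_payload` during the task, and treat a `None` from `post` as the signal to announce degraded memory mode.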
- Best practice: include projectName, fileName, and topic path on every write.
- Timeout guidance: fast reads 25s, balanced reads 60s, deep/slow-source reads 75s.
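The timeout tiers and the retry-after-cache-warm behavior can be combined into one small read wrapper. The three timeout values are quoted from this guide; the retry policy and the tier labels are my own sketch of "retry the same query after the staged fetch warms the cache", not a documented API.

```python
import time

# Timeout tiers from this guide; the dict keys are assumed labels.
TIMEOUTS = {"fast": 25.0, "balanced": 60.0, "deep": 75.0}

def read_with_retry(do_read, mode: str = "deep", retries: int = 1):
    """Attempt a read with the tier's timeout; on failure, retry once.
    A deep read that misses cold often succeeds on the second attempt,
    after the staged fetch has warmed the cache."""
    timeout = TIMEOUTS.get(mode, TIMEOUTS["balanced"])
    for _attempt in range(retries + 1):
        result = do_read(timeout)  # do_read returns None on failure/timeout
        if result is not None:
            return result
        time.sleep(1.0)  # brief pause before hitting the now-warm cache
    return None
```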
- Cache behavior: first deep read may be slow; retrying the same query often returns faster after staged fetch + cache warm.
- Quality loop: ask user for a short “context quality” rating after key outputs, then write that feedback.
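The quality loop above can be sketched as a payload builder for POST /tools/feedback_submit. The idempotencyKey requirement comes from this guide; the `rating` and `note` field names are assumptions.

```python
import uuid

def feedback_payload(rating: int, note: str = "") -> dict:
    """Body sketch for POST /tools/feedback_submit. A fresh idempotencyKey
    lets the orchestrator deduplicate retried submissions."""
    return {
        "rating": rating,  # the user's short "context quality" score
        "note": note,
        "idempotencyKey": str(uuid.uuid4()),
    }
```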
- Goal: resolve each agentic issue once, then persist the reusable context for the team/organization.