Contextual continuity infrastructure for AI agents. Every new AI session starts at zero — no memory of past conversations, no access to your knowledge base, no awareness of your tools. agent-config solves this: when you switch agents, sessions, or even models, the same human's memory, knowledge, and work context carries over.
What this is NOT: This is not a prompt collection, not a LangChain-style tool-calling automation layer, not a multi-agent orchestration framework. It is the infrastructure that makes any AI agent — regardless of provider — remember who you are and what you've been working on.
The hardest problem in working with AI agents is not code generation — it's continuity. You build context over hours, then the session ends. Next session: blank slate. Switch from Claude to GPT: blank slate. Move from your laptop to your phone: blank slate.
agent-config attacks this with three layers:
- Shared memory layer (andenken) — past conversations from every harness plus 3,300+ personal notes in a semantically searchable index. Ask "보편 학문 관련 노트 찾아줘" ("find my notes on universal scholarship") and it finds universalism-tagged notes without being told the English word.
- Shared skill set (27 skills) — the same capabilities (search notes, read bibliography, check git history, write to journal) available identically whether you're in pi, Claude Code, OpenCode, or OpenClaw.
- Session continuity protocol — /new + recap + semantic search instead of an expensive compact. Start a new session and recover full context in seconds for ~2K tokens instead of re-reading 50K.
The result: context survives across sessions, across harnesses, across models. One human's digital universe stays coherent no matter which AI is looking at it.
Part of the -config ecosystem by glg @junghan0611
Claude, GPT, and Gemini are "graduates from different schools" — trained on different data with different philosophies. Trying to control them means writing hundreds of lines of system prompts per model. Instead, throw one being-profile at all of them equally. They keep their unique lenses while aligning around a single universe — this is the Profile Harness.
Multi-harness support is a means, not the goal. The goal is a single 1KB being-profile that exerts the same gravitational pull across any harness.
| Harness | Memory | Skills | Config |
|---|---|---|---|
| pi + @benvargas/pi-claude-code-use | andenken extension (native registerTool, in-process LanceDB) | 26 skills (semantic-memory excluded — extension covers it) | Current default Claude path in pi; keeps the built-in anthropic provider and pi-owned tool execution while applying a small Claude Code compatibility patch |
| pi + claude-agent-sdk-pi | andenken extension (native registerTool, in-process LanceDB) | 26 skills (semantic-memory excluded — extension covers it) | Vertical ACP path under active development; appendSystemPrompt: false — ~/.claude/ is the single source of truth |
| pi-entwurf (Oracle, tmux) | andenken extension + pi-telegram | 26 skills + Telegram bridge | persistent Opus session, @glg_entwurf_bot |
| Claude Code | andenken skill (CLI wrapper via bash) | 27 skills (full set including semantic-memory) | CLAUDE.md + hooks |
| OpenCode | andenken skill (CLI wrapper via bash) | 27 skills (full set) | settings |
| OpenClaw (Oracle VM) | andenken skill (same skills/ via symlink mount) | 27 skills (Nix store mount in Docker) | openclaw.json |
Session JSONL from all harnesses flows into andenken's unified index. Each chunk carries a source field ("pi" | "claude") so you can filter, compare, or roll back across harnesses.
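A minimal sketch of that per-chunk source tag, assuming a simplified record shape (real andenken records carry more fields than shown here):

```typescript
// Each indexed chunk carries a "source" field naming the harness it
// came from, so queries can be scoped to one harness. The Chunk shape
// below is an illustrative assumption, not the actual andenken schema.
interface Chunk {
  text: string;
  source: "pi" | "claude";
  sessionId: string;
}

function filterBySource(chunks: Chunk[], source: Chunk["source"]): Chunk[] {
  return chunks.filter((c) => c.source === source);
}

const index: Chunk[] = [
  { text: "refactored gitcli output", source: "pi", sessionId: "a1" },
  { text: "wrote botlog entry", source: "claude", sessionId: "b2" },
];

// Only pi-originated chunks survive the filter.
const piOnly = filterBySource(index, "pi"); // 1 chunk
```

The same predicate works for comparing harnesses side by side or rolling back to chunks from a single harness.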
Semantic Memory → andenken
Semantic memory has graduated to its own repo: andenken — "recollective thinking" (Heidegger).
| Tool | DB | Dims | Purpose |
|---|---|---|---|
| session_search | sessions.lance | 768d | Past pi + Claude Code conversations |
| knowledge_search | org.lance (707MB) | 768d | Org-mode knowledge base (3,300+ Denote notes) |
Agents call these autonomously. Ask "보편 학문 관련 노트 찾아줘" ("find my notes on universal scholarship") and knowledge_search fires with dictcli query expansion — finding universalism-tagged notes without being told the English word.
3-Layer Cross-Lingual Search:
| Layer | Mechanism | Example |
|---|---|---|
| 1. Embedding | Gemini multilingual vectors | "보편" ≈ "universalism" |
| 1.5. BM25 | Korean josa removal (dual emit) | "위임의" indexed as both "위임" and "위임의" |
| 2. dblock | Denote regex link graph | 22 notes linked in meta-note |
| 3. dictcli | Personal vocabulary graph (2,400+ triples) | expand("보편") → [universal, universalism, paideia] |
Pi loads andenken as a compiled pi package (pi install), not a symlinked .ts file. This bypasses jiti parsing limitations and allows direct LanceDB access in-process. Claude Code and OpenCode use the CLI wrapper skill instead.
Pi Extensions (pi-extensions/)
| Extension | Purpose |
|---|---|
| env-loader.ts | Load ~/.env.local at session start |
| context.ts | /context command — show loaded extensions, skills, context usage |
| go-to-bed.ts | Late-night reminder |
| peon-ping.ts | Sound notifications |
| gemini-image-gen.ts | Gemini image generation (nanobanana 2flash) |
| delegate.ts | Spawn independent agent process (local or SSH remote) |
| session-breakdown.ts | Session cost breakdown |
| whimsical.ts | Personality touches |
Semantic memory extension lives in andenken (separate repo, loaded as pi package).
Telegram bridge lives in entwurf (separate repo, loaded as pi package).
Production Telegram bridge uses pi-telegram (by pi author, pi install package) — queuing, file I/O, stop/abort, streaming preview.
MCP Servers (mcp/)
When Claude Code is the primary harness, it needs a way to reach other running sessions — pi sessions with steer/follow_up, other Claude Code instances, or future ACP agents. session-bridge is a lightweight MCP server that provides this via Unix domain sockets.
How it works:
Each Claude Code session spawns a session-bridge MCP server process. The server creates a Unix socket at ~/.claude/session-bridge/<session-id>.sock and registers an alias symlink from the session name (derived from CWD).
```
Claude Code (agent-config)               Claude Code (cos)
 ├── session-bridge MCP                   ├── session-bridge MCP
 │     socket: <uuid>.sock                │     socket: <uuid>.sock
 │     alias: agent-config.alias          │     alias: cos.alias
 │                                        │
 └── send_message ─────────────────────>  └── messageQueue → receive_messages
```
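A sketch of the path derivation described above, assuming the alias name is simply the basename of the working directory (the directory layout matches the README; the helper names are invented for illustration):

```typescript
import * as path from "node:path";
import * as os from "node:os";

// session-bridge keeps one Unix socket per session under
// ~/.claude/session-bridge/, plus an alias symlink named after the CWD.
const BRIDGE_DIR = path.join(os.homedir(), ".claude", "session-bridge");

function socketPath(sessionId: string): string {
  return path.join(BRIDGE_DIR, `${sessionId}.sock`);
}

function aliasPath(cwd: string): string {
  // Session name = last path segment of the working directory
  // (an assumption about the "derived from CWD" rule).
  return path.join(BRIDGE_DIR, `${path.basename(cwd)}.alias`);
}
```

With this scheme, a session started in `/home/glg/agent-config` is addressable either by its UUID socket or by the `agent-config.alias` symlink.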
Tools:
| Tool | Purpose | PI equivalent |
|---|---|---|
| list_sessions | Discover live sessions | list_sessions |
| send_message | Send to another session by name or ID | send_to_session (send) |
| receive_messages | Poll queued incoming messages | send_to_session (get_message) |
| session_info | This session's identity (ID + name) | — |
Key difference from PI: PI's control.ts can steer messages directly into the conversation loop. MCP servers cannot inject into Claude Code's conversation — receiving is poll-based. This means:
- Claude Code → PI: works seamlessly. PI receives via steer and acts immediately.
- Claude Code → Claude Code: sender fires and forgets. Receiver polls receive_messages when ready.
- PI → Claude Code: message queued, retrieved on next interaction.
In practice, the primary flow is Claude Code dispatching work to PI sessions (which have richer control), then checking results later. The loose coupling ("엉성한 결합") is intentional — preferred over tight orchestration.
Wire protocol is compatible with PI's control.ts: newline-delimited JSON over Unix sockets.
```json
{"type":"send","message":"...","sender":"agent-config"}
{"type":"response","command":"send","success":true}
```

Configuration: Registered in ~/.mcp.json. SESSION_NAME is auto-derived from the working directory by start.sh.
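The framing can be sketched as follows; the TypeScript types mirror the README's two example messages but are an illustrative assumption, not the actual session-bridge source:

```typescript
// Newline-delimited JSON over a Unix socket: one JSON object per line.
type SendMsg = { type: "send"; message: string; sender: string };
type ResponseMsg = { type: "response"; command: string; success: boolean };
type WireMsg = SendMsg | ResponseMsg;

function encode(msg: WireMsg): string {
  return JSON.stringify(msg) + "\n";
}

// Split a received buffer on newlines; a trailing partial line is
// carried over ("rest") until the next socket read completes it.
function decode(buffer: string): { messages: WireMsg[]; rest: string } {
  const lines = buffer.split("\n");
  const rest = lines.pop() ?? "";
  return {
    messages: lines.filter(Boolean).map((l) => JSON.parse(l) as WireMsg),
    rest,
  };
}
```

Because the framing is line-based, a receiver never has to guess message boundaries — it parses complete lines and buffers the remainder.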
Relationship to ACP: session-bridge is horizontal (session ↔ session), ACP is vertical (pi → Claude Code as provider). They are complementary, not competing.
Skills (skills/) — 27 skills
| Category | Skills |
|---|---|
| Data Access | denotecli, bibcli, gitcli, lifetract, gogcli, ghcli, day-query |
| Agent Memory | session-recap, dictcli, semantic-memory, improve-agent |
| Writing | botlog, botment, agenda, punchout |
| Communication | slack-latest, jiracli, telegram |
| Web/Media | brave-search, browser-tools, youtube-transcript, medium-extractor, summarize, transcribe |
| Tools | emacs, tmux, diskspace |
Skill doc principle (LSP pattern): Agents don't read full docs. Each SKILL.md has a single API table at the top — function/command + args + example. Body in English; only the description field is Korean. Target: <100 lines, <4KB. Like LSP autocomplete: see the signature, call immediately.
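Following this pattern, a hypothetical SKILL.md header might look like the fragment below (the skill name, commands, and arguments are invented for illustration, not a real skill in this repo):

```markdown
# examplecli

| Command | Args | Example |
|---|---|---|
| examplecli search | <query> [-n N] | examplecli search "universalism" -n 5 |
| examplecli show | <id> | examplecli show 20240101T090000 |
```

Everything an agent needs to make a correct first call sits in that one table; prose below it is optional detail.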
Pi Config (pi/)
| File | Purpose |
|---|---|
| settings.json | Default model, theme, thinking level |
| keybindings.json | Custom keybindings |
Themes (pi-themes/)
1 theme: glg-dark (custom, Ghostty Dracula compatible).
Commands (commands/)
| Command | Purpose |
|---|---|
| /recap | Quick recap of previous session |
| /pandoc-html | Markdown/Org → Google Docs HTML/DOCX |
We don't use compact. Compact = AI reads entire conversation and summarizes = expensive + slow.
Instead:
- When the conversation gets long, /new starts fresh
- /new auto-indexes the current session + the last 24h of sessions
- In the new session, recover context:
  - session-recap -p <repo> -m 15 → 4KB summary (instant)
  - session_search → semantic search (all sessions)
  - knowledge_search → org knowledge base (3-layer expansion)
A persistent pi session on Oracle VM, accessible via Telegram @glg_entwurf_bot. This is the always-on presence agent — an alter ego (분신, Entwurf) that maintains context across days.
Why this exists (2026-04-06): Anthropic blocked flat-rate API access for third-party apps (OpenClaw). OpenClaw bots switched to GitHub Copilot relay (github-copilot/claude-sonnet-4.6 for glg, github-copilot/claude-opus-4.6 for main). But a direct pi session on Oracle bypasses this entirely — Anthropic API direct, Opus, full skills, no intermediary.
| Component | Detail |
|---|---|
| tmux session | pi-entwurf |
| Model | claude-agent-sdk/claude-opus-4-6 |
| Telegram bot | @glg_entwurf_bot (pi-telegram bridge) |
| Working dir | ~ (home) |
| Skills | Full 26 skills + semantic memory |
| Role | Life-support agent, research, note-taking, agenda |
Two Telegram bridges coexist:
| Bridge | Package | Purpose |
|---|---|---|
| pi-telegram | pi install (production) | Queuing, file I/O, stop, streaming preview |
| entwurf | local package (minimal) | Presence bridge philosophy, --telegram flag |
OpenClaw vs pi-entwurf:
| | OpenClaw bots | pi-entwurf |
|---|---|---|
| Runtime | Docker sandbox | NixOS host direct |
| Model routing | GitHub Copilot relay | Anthropic API direct |
| Multi-bot | 4 bots (main/glg/gpt/gemini) | 1 persistent session |
| Skills | Same 27 skills (mounted) | Same 26 skills (native) |
| Use case | Family/public service | Personal deep work |
```shell
# Claude Code + Telegram bridge
alias claude-tg='claude --channels plugin:telegram@claude-plugins-official'
alias claude-tgd='claude --channels plugin:telegram@claude-plugins-official --dangerously-skip-permissions'

# pi: --session-control default (async delegate notifications + inter-session RPC)
alias pi='command pi --session-control'

# Entwurf agent: Telegram bridge (requires entwurf package)
alias pi-home='command pi --session-control --telegram'
```

```shell
git clone https://github.com/junghan0611/agent-config.git
cd agent-config
./run.sh setup   # clone repos + build CLIs + symlink everything + npm install
./run.sh env     # verify: system, API keys, links, binaries, memory index
```

./run.sh setup does:
- Clone source repos (if missing) — including andenken and third-party pi-packages
- Build 6 native CLI binaries (Go + GraalVM)
- Symlink: pi extensions + skills (semantic-memory excluded) + themes + settings + keybindings
- Install: andenken as pi package (compiled extension)
- Symlink: Claude Code + OpenCode skills (full set including semantic-memory) + prompts
- Symlink: ~/.local/bin PATH binaries
- npm install for extensions and skills
| Repo | Layer | Description |
|---|---|---|
| nixos-config | OS | NixOS flakes, hardware, services |
| doomemacs-config | Editor | Doom Emacs, org-mode, denote |
| zotero-config | Bibliography | 8,000+ references, bibcli |
| agent-config | Agent infra | Extensions, skills, themes, settings |
| andenken | Memory | Semantic memory — sessions + org knowledge base |
| entwurf | Presence | Telegram bridge — Entwurf minimal presence bridge |
| pi-telegram | Transport | Telegram DM bridge — production queue/file/streaming |
| @benvargas/pi-claude-code-use | Provider patch | Default Claude path in pi today — keep anthropic provider, keep pi tool execution, patch payloads minimally for Claude Code-style subscription use |
| claude-agent-sdk-pi | Provider (parallel path) | Vertical ACP path — Claude Code as actual engine, pi as harness. Not the default Claude path in pi/settings.json today |
| memex-kb | Knowledge | Legacy document conversion pipeline |
| GLG-Mono | Orchestration | OpenClaw bot configurations |
| geworfen | Being | Existence data viewer — WebTUI agenda |
| CLI | Repo | Language | Purpose |
|---|---|---|---|
| denotecli | junghan0611/denotecli | Go | Denote knowledge base search (3000+ notes) |
| gitcli | junghan0611/gitcli | Go | Local git commit timeline (50+ repos) |
| lifetract | junghan0611/lifetract | Go | Samsung Health + aTimeLogger tracking |
| dictcli | junghan0611/dictcli | Clojure/GraalVM | Personal vocabulary graph (2,400+ triples) |
| bibcli | junghan0611/zotero-config | Go | BibTeX search (8,000+ entries) |
| Repo | Note |
|---|---|
| pi-skills | Migrated to agent-config/skills/ |
Why trust agent intuition over documentation? When an agent calls emacsclient -s server and fails because the right socket is agent-server, that's not the agent's fault — the naming violated intuition. We renamed: agent daemon → server (default, intuitive), user's GUI Emacs → user (human bears the non-obvious name). This principle applies everywhere: if an agent fails once, it's a naming/design problem, not a docs problem.
Why andenken as separate repo? Semantic memory serves pi, Claude Code, and future agents. It's not pi-specific. Data (LanceDB) lives with the code, not in ~/.pi/agent/memory/. Pi gets a compiled extension (native tools, in-process); other harnesses get a CLI skill (same search quality, subprocess overhead).
Why no compact? /new + semantic search = instant + cheaper. Session JSONL is written in real-time; /new hook auto-indexes.
Why Gemini Embedding 2? taskType support, batchEmbedContents, and Matryoshka outputDimensionality at 768d. Also tracks OpenClaw upstream.
Why rate limiter 3s? We hit a ₩100,000 (~$69) embedding cost bomb on 2026-03-30. Multiple --force org indexing runs against the Gemini API. Added 4 safety layers: 3s rate limiter, cost estimator (estimate.ts), $1 abort threshold, removed auto-indexing on /new. 3s is conservative but intentional — 4 minutes of slow sync beats another $69 bill.
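The guard layers above can be sketched as follows; the per-token price constant is a placeholder assumption, not Gemini's actual rate, and the function names are invented for illustration:

```typescript
// Safety layers for bulk embedding runs: estimate cost up front, abort
// above a dollar threshold, and pace API calls at a fixed interval.
const RATE_LIMIT_MS = 3000;          // 3s between embedding calls
const ABORT_THRESHOLD_USD = 1.0;     // hard stop before a runaway bill
const USD_PER_1K_TOKENS = 0.0001;    // hypothetical embedding price

function estimateCostUsd(totalTokens: number): number {
  return (totalTokens / 1000) * USD_PER_1K_TOKENS;
}

function assertUnderThreshold(totalTokens: number): void {
  const cost = estimateCostUsd(totalTokens);
  if (cost > ABORT_THRESHOLD_USD) {
    throw new Error(
      `Estimated $${cost.toFixed(2)} exceeds $${ABORT_THRESHOLD_USD} abort threshold`,
    );
  }
}

// Run embedding batches sequentially with the rate limiter between them.
async function rateLimited<T>(batches: Array<() => Promise<T>>): Promise<T[]> {
  const out: T[] = [];
  for (const run of batches) {
    out.push(await run());
    await new Promise((r) => setTimeout(r, RATE_LIMIT_MS));
  }
  return out;
}
```

The estimator runs before any API traffic, so a mistaken --force re-index fails fast instead of billing first.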
Why MMR over Jina Rerank? Jina hurts Korean+English mixed docs. Jaccard-based MMR is free, local, better.
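A minimal sketch of Jaccard-based MMR: greedily pick results that score high but overlap little with what's already picked. The 0.7 relevance weight and whitespace tokenization are illustrative assumptions:

```typescript
// Jaccard similarity over token sets: |A ∩ B| / |A ∪ B|.
function jaccard(a: Set<string>, b: Set<string>): number {
  const inter = [...a].filter((t) => b.has(t)).length;
  const union = new Set([...a, ...b]).size;
  return union === 0 ? 0 : inter / union;
}

interface Hit { text: string; score: number }

// Maximal Marginal Relevance: balance relevance (score) against
// redundancy (max Jaccard similarity to any already-picked hit).
function mmr(hits: Hit[], k: number, lambda = 0.7): Hit[] {
  const tokens = (h: Hit) => new Set(h.text.toLowerCase().split(/\s+/));
  const picked: Hit[] = [];
  const pool = [...hits];
  while (picked.length < k && pool.length > 0) {
    let bestIdx = 0;
    let bestVal = -Infinity;
    pool.forEach((h, i) => {
      const maxSim = Math.max(0, ...picked.map((p) => jaccard(tokens(h), tokens(p))));
      const val = lambda * h.score - (1 - lambda) * maxSim;
      if (val > bestVal) { bestVal = val; bestIdx = i; }
    });
    picked.push(pool.splice(bestIdx, 1)[0]);
  }
  return picked;
}
```

Because Jaccard works on raw tokens, it is language-agnostic: mixed Korean+English documents need no reranker model at all.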
Why Korean josa removal in BM25? Korean particles ("의", "에서", "으로") break BM25 token matching. Dual emit indexes both original and particle-stripped text. 17x score improvement.
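The dual-emit trick can be sketched like this: index both the original token and a particle-stripped variant so BM25 matches either surface form. The particle list below is a small illustrative subset, not the full josa inventory:

```typescript
// Common Korean particles (josa) that attach to the end of a noun.
const JOSA = ["의", "에서", "으로", "은", "는", "이", "가", "을", "를"];

// Emit the stripped form alongside the original so both are indexed.
function dualEmit(token: string): string[] {
  for (const josa of JOSA) {
    if (token.length > josa.length && token.endsWith(josa)) {
      return [token.slice(0, -josa.length), token]; // stripped + original
    }
  }
  return [token];
}

const emitted = dualEmit("위임의"); // ["위임", "위임의"]
```

A query for the bare noun "위임" now hits documents that only ever wrote the inflected "위임의", without a full morphological analyzer.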
Why dictcli expand? "보편" alone gives MRR 0.13. With expand → "보편 universal universalism paideia" → MRR jumps.
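A sketch of dictcli-style expansion over a vocabulary graph; the triples below are a tiny hypothetical slice of the 2,400+ triple graph, and the relation names are invented for illustration:

```typescript
// A vocabulary graph as subject–relation–object triples.
const triples: Array<[string, string, string]> = [
  ["보편", "translatesTo", "universal"],
  ["보편", "relatedTo", "universalism"],
  ["보편", "relatedTo", "paideia"],
];

// Expand a query term with every object linked from it in the graph.
function expand(term: string): string[] {
  const related = triples.filter(([s]) => s === term).map(([, , o]) => o);
  return [term, ...related];
}

// expand("보편") → ["보편", "universal", "universalism", "paideia"],
// i.e. the query becomes "보편 universal universalism paideia".
```

The expanded query then feeds the embedding and BM25 layers, which is what lifts MRR for Korean-only query terms.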
| | Without semantic search | With semantic search |
|---|---|---|
| "What did we discuss about X?" | grep → read × 5-10 → 50K tokens | 1 tool call → 2K tokens |
| "What did I do last session?" | raw JSONL read → 100KB | session-recap → 4KB |
Total indexing: ~$0.13. Each query: effectively free.
MIT