feat: per-agent workspace routing via workspaceMapping config #32
bernesto wants to merge 4 commits into plastic-labs:main
Conversation
Enables multi-org isolation where agents with different ID prefixes are routed to separate Honcho workspaces and separate SDK client instances.

Changes:
- config.ts: add optional workspaceMapping field (agentId prefix -> workspaceId)
- state.ts: per-workspace WorkspaceState with lazy client creation, promise-based init locks, resolveWorkspace(), getHonchoClient(), getOwnerPeer()
- hooks/capture.ts: use workspace-resolved client and ownerPeer
- hooks/context.ts: use workspace-resolved client and ownerPeer
- openclaw.plugin.json: add workspaceMapping to configSchema and uiHints

Backward compatible: when workspaceMapping is not configured, behavior is identical to stock single-workspace behavior.
Walkthrough

Introduces workspace-scoped routing and state: agents map to Honcho workspaces via prefix, per-workspace Honcho clients and init locks are cached, hooks use workspace-scoped clients/owner peers, message persistence is batched and bounded by a configurable maxBackfill, and config gains workspaceMapping and maxBackfill fields.
Sequence Diagram

sequenceDiagram
participant Hook as Hook (capture / context)
participant State as PluginState
participant Resolver as Workspace Resolver
participant Cache as WorkspaceState Cache
participant Honcho as Honcho SDK
participant Peer as Peer Store
Hook->>State: resolveWorkspace(agentId)
State->>Resolver: longest-prefix match against cfg.workspaceMapping
Resolver-->>State: workspaceId
Hook->>State: getHonchoClient(workspaceId)
State->>Cache: lookup/create WorkspaceState
alt cached
Cache-->>State: return existing Honcho client
else new
Cache->>Honcho: instantiate client for workspaceId
Honcho-->>Cache: client ready
end
State-->>Hook: Honcho client
Hook->>State: ensureInitialized(workspaceId)
State->>Cache: acquire per-workspace initLock
Cache->>Honcho: initialize owner/agent peers & metadata
Honcho-->>Cache: initialization data
Cache-->>State: init complete
State-->>Hook: ready
Hook->>Honcho: create session / addMessages (batched)
Honcho->>Peer: persist messages (per-batch)
Peer-->>Honcho: ack
Honcho-->>Hook: result
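The resolver step in the diagram is a longest-prefix scan over the configured mapping, falling back to the default workspace. A standalone sketch of that rule (function and parameter names here are illustrative; the plugin's actual resolveWorkspace closes over its config):

```typescript
// Longest-prefix workspace resolution, as in the diagram above.
// Falls back to the default workspace when nothing matches.
function resolveWorkspaceByPrefix(
  agentId: string,
  mapping: Record<string, string> | undefined,
  defaultWorkspaceId: string,
): string {
  if (!mapping || Object.keys(mapping).length === 0) return defaultWorkspaceId;
  let bestPrefix = "";
  let bestWorkspace = defaultWorkspaceId;
  for (const [prefix, wsId] of Object.entries(mapping)) {
    // Longest match wins, so "acme_eu_" beats "acme_" for agent "acme_eu_bot"
    if (agentId.startsWith(prefix) && prefix.length > bestPrefix.length) {
      bestPrefix = prefix;
      bestWorkspace = wsId;
    }
  }
  return bestWorkspace;
}
```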
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
Honcho's API rejects message arrays with >100 items. Without batching, sessions that accumulate >100 messages per turn fail permanently — lastSavedIndex never advances, causing cascading retry failures on every subsequent turn. Batch size: 50 (safe margin under the 100 limit). Progress: lastSavedIndex advances after each successful batch, so partial saves are preserved if a later batch fails.
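That batching strategy can be sketched as a standalone loop (helper and callback names are illustrative, not the plugin's actual API; only the batch size and the progress rule come from the description above):

```typescript
// Sketch of the batching loop: chunks of 50 stay safely under Honcho's
// 100-item limit, and progress is recorded after every successful batch
// so a failing later batch doesn't lose the earlier ones.
const BATCH_SIZE = 50;

async function saveInBatches<T>(
  messages: T[],
  startIndex: number,
  addBatch: (batch: T[]) => Promise<void>,
  recordProgress: (lastSavedIndex: number) => Promise<void>,
): Promise<void> {
  for (let i = 0; i < messages.length; i += BATCH_SIZE) {
    const batch = messages.slice(i, i + BATCH_SIZE);
    await addBatch(batch);
    // Advance lastSavedIndex only after the batch persisted successfully
    await recordProgress(startIndex + i + batch.length);
  }
}
```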
Actionable comments posted: 5
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
hooks/capture.ts (1)
25-37: ⚠️ Potential issue | 🟠 Major — Use the parent route for subagent captures.

parentAgentId is already available here, but workspaceId and agentPeer still derive from the child ID. If the child ID doesn’t inherit the org prefix, this writes the subagent session into the default workspace and can attach parentPeer.id from a different workspace. Resolve from parentAgentId ?? agentId, then look up the child peer in that same workspace.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@hooks/capture.ts` around lines 25 - 37, parentAgentId is available but workspaceId, honcho, and agentPeer are resolved from the child agentId; change the resolution to use const resolvedForWorkspace = parentAgentId ?? agentId and call state.resolveWorkspace(resolvedForWorkspace) and state.getHonchoClient(...) and state.ensureInitialized(...) with that value so the workspace (and honcho client) is the parent’s when present, then look up the child peer with state.getAgentPeer(agentId) (and resolve parentPeer using parentAgentId when isSubagent) so both peers are resolved within the same workspace context.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@config.ts`:
- Around line 74-86: The workspaceMapping parsing silently drops invalid
entries; update the cfg.workspaceMapping handling (the workspaceMapping variable
and the loop over Object.entries(cfg.workspaceMapping)) to trim both prefix and
wsId, validate that after trimming both are non-empty strings, and if any entry
fails validation throw an error (or otherwise fail-fast) so malformed
prefixes/IDs aren’t ignored and mistakenly fall back to the default workspaceId;
only after all entries pass validation populate and return the mapping object.
In `@hooks/context.ts`:
- Around line 11-23: The workspace resolution and initialization should use the
parent agent when present: compute const effectiveAgentId = parentAgentId ??
agentId (pull parentAgentId from the session/context where subagent handling
stores it), then call state.resolveWorkspace(effectiveAgentId) and
state.ensureInitialized(workspaceId) using that workspaceId, and use that same
workspaceId when calling state.getHonchoClient(workspaceId) and
state.getAgentPeer(agentId) (keep getAgentPeer using the original child agentId
but look it up within the resolved parent workspace) so memory/prompts are
resolved against the parent/org workspace rather than the default workspace.
In `@state.ts`:
- Around line 143-159: resolveWorkspace currently matches the raw agentId
against workspaceMapping, causing inconsistencies with getAgentPeer which
lowercases/trims IDs; canonicalize the incoming agentId in resolveWorkspace
(trim and lowercase using the same rules as getAgentPeer) before running the
prefix matching loop so prefix comparisons and matchedPrefix length checks
operate on the normalized string while returning the associated wsId
(matchedWorkspace) unchanged.
- Around line 209-237: The current pattern creates a separate promise
(ws.initLock) and always calls resolveLock() in finally, which masks errors from
the initializer; change ws.initLock to be the actual async initialization
promise so rejections propagate to all awaiters: replace the manual new
Promise/resolveLock pattern with ws.initLock = (async () => { const honcho =
ws.honcho; const wsMeta = await honcho.getMetadata(); ... ws.ownerPeer = await
honcho.peer(OWNER_ID, { metadata: {} }); ws.initialized = true; })(); and keep a
finally that clears ws.initLock = null only after the async IIFE settles, so
callers awaiting ws.initLock see the original rejection instead of being
unblocked via resolveLock.
- Around line 107-125: The agent-scoped tools (tools/message-search.ts,
tools/session.ts, tools/ask.ts) must resolve and initialize the agent's
workspace and use workspace-specific accessors instead of default-scoped
properties: call const wsId = state.resolveWorkspace(toolCtx.agentId); then
await state.ensureInitialized(wsId); and replace uses of state.ownerPeer and
state.honcho (and any direct reads of state.* for owner/honcho) with
state.getOwnerPeer(wsId) and state.getHonchoClient(wsId) (or the workspace state
obtained via state.resolveWorkspace/getWorkspaceState if available); update the
referenced call sites in message-search (lines ~70,93,98), session (lines
~54,59,64) and ask (lines ~34,39) accordingly.
⛔ Files ignored due to path filters (1)
package-lock.json is excluded by !**/package-lock.json
📒 Files selected for processing (5)
config.ts, hooks/capture.ts, hooks/context.ts, openclaw.plugin.json, state.ts
// Parse optional workspace mapping (agentId prefix → workspaceId)
let workspaceMapping: Record<string, string> | undefined;
if (cfg.workspaceMapping && typeof cfg.workspaceMapping === "object" && !Array.isArray(cfg.workspaceMapping)) {
  const mapping: Record<string, string> = {};
  for (const [prefix, wsId] of Object.entries(cfg.workspaceMapping as Record<string, unknown>)) {
    if (typeof prefix === "string" && prefix.length > 0 && typeof wsId === "string" && wsId.length > 0) {
      mapping[prefix] = wsId;
    }
  }
  if (Object.keys(mapping).length > 0) {
    workspaceMapping = mapping;
  }
}
Fail fast on malformed workspaceMapping entries.
This silently drops bad prefixes/workspace IDs. Because unmatched agents fall back to workspaceId, a typo here routes traffic back into the shared default workspace—the opposite of the isolation this feature is adding. Trim and reject invalid entries during parse instead of ignoring them.
🛠️ Suggested hardening
- for (const [prefix, wsId] of Object.entries(cfg.workspaceMapping as Record<string, unknown>)) {
- if (typeof prefix === "string" && prefix.length > 0 && typeof wsId === "string" && wsId.length > 0) {
- mapping[prefix] = wsId;
- }
- }
+ for (const [rawPrefix, rawWsId] of Object.entries(cfg.workspaceMapping as Record<string, unknown>)) {
+ const prefix = rawPrefix.trim().toLowerCase();
+ const wsId = typeof rawWsId === "string" ? rawWsId.trim() : "";
+ if (!prefix || !wsId) {
+ throw new Error(`Invalid workspaceMapping entry for prefix "${rawPrefix}"`);
+ }
+ mapping[prefix] = wsId;
+  }

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
// Parse optional workspace mapping (agentId prefix → workspaceId)
let workspaceMapping: Record<string, string> | undefined;
if (cfg.workspaceMapping && typeof cfg.workspaceMapping === "object" && !Array.isArray(cfg.workspaceMapping)) {
  const mapping: Record<string, string> = {};
  for (const [rawPrefix, rawWsId] of Object.entries(cfg.workspaceMapping as Record<string, unknown>)) {
    const prefix = rawPrefix.trim().toLowerCase();
    const wsId = typeof rawWsId === "string" ? rawWsId.trim() : "";
    if (!prefix || !wsId) {
      throw new Error(`Invalid workspaceMapping entry for prefix "${rawPrefix}"`);
    }
    mapping[prefix] = wsId;
  }
  if (Object.keys(mapping).length > 0) {
    workspaceMapping = mapping;
  }
}
  const agentId = ctx.agentId ?? state.resolveDefaultAgentId();
  const isSubagent = isSubagentSession(ctx);

+ // Resolve workspace and get the correct Honcho client for this agent
+ const workspaceId = state.resolveWorkspace(agentId);
+ const honcho = state.getHonchoClient(workspaceId);

  state.turnStartIndex.set(sessionKey, event.messages.length);

  try {
-   await state.ensureInitialized();
+   await state.ensureInitialized(workspaceId);
    const agentPeer = await state.getAgentPeer(agentId);
+   const ownerPeer = state.getOwnerPeer(workspaceId)!;
Route subagent context through the parent’s workspace.
This resolves the workspace from ctx.agentId only. hooks/subagent.ts (Lines 17-32) already records the parent agent for child sessions; if a spawned subagent ID doesn’t carry the parent’s prefix, before_prompt_build pulls memory from the default workspace instead of the org workspace. Resolve/init with parentAgentId ?? agentId, and keep the child peer lookup pinned to that same workspace.
const state: PluginState = {
  honcho: defaultHoncho,
  cfg,
  // These proxy to the default workspace state for backward compat
  get ownerPeer() { return workspaceStates.get(cfg.workspaceId)!.ownerPeer; },
  set ownerPeer(v) { workspaceStates.get(cfg.workspaceId)!.ownerPeer = v; },
  get agentPeers() { return workspaceStates.get(cfg.workspaceId)!.agentPeers; },
  get agentPeerMap() { return workspaceStates.get(cfg.workspaceId)!.agentPeerMap; },
  set agentPeerMap(v) { workspaceStates.get(cfg.workspaceId)!.agentPeerMap = v; },
  turnStartIndex: new Map<string, number>(),
  get initialized() { return workspaceStates.get(cfg.workspaceId)!.initialized; },
  set initialized(v) { workspaceStates.get(cfg.workspaceId)!.initialized = v; },
  api,
  ensureInitialized,
  getAgentPeer,
  resolveDefaultAgentId,
  resolveWorkspace,
  getHonchoClient,
  getOwnerPeer,
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
set -euo pipefail
rg -n -C3 --type=ts '\bstate\.ensureInitialized\s*\(\s*\)|\bstate\.ownerPeer\b|\bstate\.honcho\b'

Repository: plastic-labs/openclaw-honcho
Length of output: 7626
🏁 Script executed:
#!/bin/bash
set -euo pipefail
# Check tools/session.ts to see if it's agent-scoped
echo "=== tools/session.ts ==="
cat -n tools/session.ts | head -70
echo -e "\n=== tools/message-search.ts (relevant lines) ==="
cat -n tools/message-search.ts | head -120
echo -e "\n=== tools/ask.ts (relevant lines) ==="
cat -n tools/ask.ts | head -50
echo -e "\n=== tools/context.ts (relevant lines) ==="
cat -n tools/context.ts | head -50

Repository: plastic-labs/openclaw-honcho
Length of output: 12776
🏁 Script executed:
#!/bin/bash
set -euo pipefail
# Check state.ts for workspace resolution and getHonchoClient
echo "=== state.ts - key sections ==="
cat -n state.ts | head -130
echo -e "\n=== Looking for getHonchoClient usage ==="
rg -n 'getHonchoClient' --type=ts

Repository: plastic-labs/openclaw-honcho
Length of output: 6469
🏁 Script executed:
#!/bin/bash
set -euo pipefail
# Read more of state.ts to see function implementations
echo "=== state.ts - function implementations ==="
cat -n state.ts | sed -n '128,250p'

Repository: plastic-labs/openclaw-honcho
Length of output: 4940
🏁 Script executed:
#!/bin/bash
set -euo pipefail
# Check tools/search.ts to see if it's agent-scoped
echo "=== tools/search.ts (first 50 lines) ==="
cat -n tools/search.ts | head -50

Repository: plastic-labs/openclaw-honcho
Length of output: 2248
🏁 Script executed:
#!/bin/bash
set -euo pipefail
# Check if there are any other agent-scoped tool registrations we might have missed
echo "=== Checking all tool registrations ==="
rg -n 'api\.registerTool' --type=ts -A 2
# Also verify if there are any usages in hooks or other contexts
echo -e "\n=== Non-tool usages of state.honcho and state.ownerPeer ==="
rg -n 'state\.honcho|state\.ownerPeer' --type=ts | grep -E '(hooks|commands)' | head -20

Repository: plastic-labs/openclaw-honcho
Length of output: 1220
🏁 Script executed:
#!/bin/bash
set -euo pipefail
# Check tools/memory-passthrough.ts to see what it does
echo "=== tools/memory-passthrough.ts ==="
cat -n tools/memory-passthrough.ts | head -30
# Also double-check the exact lines mentioned in the review for confirmation
echo -e "\n=== Exact lines from tools/message-search.ts (70-99) ==="
cat -n tools/message-search.ts | sed -n '70,99p'
echo -e "\n=== Exact lines from tools/session.ts (54-67) ==="
cat -n tools/session.ts | sed -n '54,67p'

Repository: plastic-labs/openclaw-honcho
Length of output: 3147
Migrate remaining agent-scoped tools to workspace-specific accessors.
tools/message-search.ts (lines 70, 93, 98), tools/session.ts (lines 54, 59, 64), and tools/ask.ts (lines 34, 39) are agent-scoped tools that currently call state.ensureInitialized() without passing the agent's workspace and use state.ownerPeer/state.honcho (default workspace only). For workspaceMapping configurations, this breaks these tools for agents mapped to non-default workspaces.
Migrate by:
- Resolving the agent's workspace: const wsId = state.resolveWorkspace(toolCtx.agentId);
- Initializing the workspace: await state.ensureInitialized(wsId);
- Using workspace-specific APIs: state.getHonchoClient(wsId) and state.getOwnerPeer(wsId) (or resolve directly from getWorkspaceState if the implementation is updated to expose it)
Non-agent-scoped tools like tools/search.ts and tools/context.ts correctly use defaults.
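The migration shape for each of those call sites can be sketched against a minimal interface (WorkspaceScopedState and runAgentScopedTool are illustrative stand-ins for PluginState and the real tool handlers, not the plugin's actual types):

```typescript
// Minimal stand-in for the parts of PluginState the migration touches.
interface WorkspaceScopedState {
  resolveWorkspace(agentId: string): string;
  ensureInitialized(wsId: string): Promise<void>;
  getHonchoClient(wsId: string): unknown;
  getOwnerPeer(wsId: string): unknown;
}

// Shape each agent-scoped tool should follow: resolve, init, then use the
// workspace-scoped accessors instead of state.honcho / state.ownerPeer.
async function runAgentScopedTool(
  state: WorkspaceScopedState,
  toolCtx: { agentId: string },
): Promise<{ client: unknown; owner: unknown }> {
  const wsId = state.resolveWorkspace(toolCtx.agentId);
  await state.ensureInitialized(wsId);
  return {
    client: state.getHonchoClient(wsId),
    owner: state.getOwnerPeer(wsId),
  };
}
```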
function resolveWorkspace(agentId: string): string {
  const mapping = cfg.workspaceMapping;
  if (!mapping || Object.keys(mapping).length === 0) {
    return cfg.workspaceId;
  }

  let matchedPrefix = "";
  let matchedWorkspace = cfg.workspaceId;

  for (const [prefix, wsId] of Object.entries(mapping)) {
    if (agentId.startsWith(prefix) && prefix.length > matchedPrefix.length) {
      matchedPrefix = prefix;
      matchedWorkspace = wsId;
    }
  }

  return matchedWorkspace;
}
Canonicalize agentId before prefix matching.
getAgentPeer() lowercases/trims the ID before routing, but resolveWorkspace() matches the raw string. hooks/context.ts and hooks/capture.ts call resolveWorkspace(ctx.agentId) first, so "NR_sales" or "nr_sales " can initialize and write sessions into the default workspace while getAgentPeer() resolves the mapped one.
🔧 Suggested fix
function resolveWorkspace(agentId: string): string {
+ const id = (agentId || resolveDefaultAgentId()).toLowerCase().trim() || "main";
const mapping = cfg.workspaceMapping;
if (!mapping || Object.keys(mapping).length === 0) {
return cfg.workspaceId;
}
@@
- for (const [prefix, wsId] of Object.entries(mapping)) {
- if (agentId.startsWith(prefix) && prefix.length > matchedPrefix.length) {
+ for (const [prefix, wsId] of Object.entries(mapping)) {
+ if (id.startsWith(prefix) && prefix.length > matchedPrefix.length) {
matchedPrefix = prefix;
matchedWorkspace = wsId;
}
  // If init is already in progress for this workspace, wait for it
  if (ws.initLock) {
    await ws.initLock;
    return;
  }

- state.ownerPeer = await honcho.peer(OWNER_ID, { metadata: {} });
- state.initialized = true;
+ // Acquire the init lock
+ let resolveLock!: () => void;
+ ws.initLock = new Promise<void>((resolve) => { resolveLock = resolve; });
+
+ try {
+   const honcho = ws.honcho;
+   const wsMeta = await honcho.getMetadata();
+   ws.agentPeerMap = (wsMeta.agentPeerMap as Record<string, string>) ?? {};
+
+   const defaultId = resolveDefaultAgentId();
+   if (Object.keys(ws.agentPeerMap).length === 0) {
+     ws.agentPeerMap[defaultId] = `agent-${defaultId}`;
+     await honcho.setMetadata({ ...wsMeta, agentPeerMap: ws.agentPeerMap });
+   } else if (Object.values(ws.agentPeerMap).includes(LEGACY_PEER_ID) && !ws.agentPeerMap[defaultId]) {
+     ws.agentPeerMap[defaultId] = LEGACY_PEER_ID;
+     await honcho.setMetadata({ ...wsMeta, agentPeerMap: ws.agentPeerMap });
+   }
+
+   ws.ownerPeer = await honcho.peer(OWNER_ID, { metadata: {} });
+   ws.initialized = true;
+ } finally {
+   resolveLock();
+   ws.initLock = null;
Propagate initialization failures to concurrent callers.
ws.initLock is always resolved in finally. If the first initializer throws, waiters hit the if (ws.initLock) branch, return success, and then continue against an uninitialized workspace. Make initLock the actual init promise so the same rejection reaches every waiter.
🧵 Suggested fix
// If init is already in progress for this workspace, wait for it
if (ws.initLock) {
await ws.initLock;
return;
}
- // Acquire the init lock
- let resolveLock!: () => void;
- ws.initLock = new Promise<void>((resolve) => { resolveLock = resolve; });
-
- try {
+ ws.initLock = (async () => {
const honcho = ws.honcho;
const wsMeta = await honcho.getMetadata();
ws.agentPeerMap = (wsMeta.agentPeerMap as Record<string, string>) ?? {};
@@
}
ws.ownerPeer = await honcho.peer(OWNER_ID, { metadata: {} });
ws.initialized = true;
+ })();
+
+ try {
+ await ws.initLock;
} finally {
- resolveLock();
ws.initLock = null;
}🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@state.ts` around lines 209 - 237, The current pattern creates a separate
promise (ws.initLock) and always calls resolveLock() in finally, which masks
errors from the initializer; change ws.initLock to be the actual async
initialization promise so rejections propagate to all awaiters: replace the
manual new Promise/resolveLock pattern with ws.initLock = (async () => { const
honcho = ws.honcho; const wsMeta = await honcho.getMetadata(); ... ws.ownerPeer
= await honcho.peer(OWNER_ID, { metadata: {} }); ws.initialized = true; })();
and keep a finally that clears ws.initLock = null only after the async IIFE
settles, so callers awaiting ws.initLock see the original rejection instead of
being unblocked via resolveLock.
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@hooks/capture.ts`:
- Around line 29-33: The session is created in workspaceId but parentPeer is
later resolved from parentAgentId without verifying workspace affinity; change
the logic around resolving/adding parentPeer so you only call
state.resolveWorkspace(parentAgentId) and fetch/add parentPeer when that result
=== workspaceId (i.e., same tenant); otherwise do not call
state.getHonchoClient/get peer or attach a peer object to the session — include
only the parentAgentId in the metadata. Apply the same guard where parentPeer is
handled elsewhere (around the code referenced at lines near 48), ensuring any
call sites that resolve or attach parentPeer perform the same workspace equality
check against workspaceId before fetching peers or mutating the session.
- Around line 85-97: The loop advances lastSavedIndex using extracted count
(saved += batch.length) which is wrong if extraction drops items; change it to
advance using raw-message offsets preserved on each extracted item: ensure
extractMessages stores the original raw offset (e.g., originalIndex or
rawOffset) on each entry, then in the batching loop compute the progressIndex
from the last raw offset in the batch (e.g., const lastRaw =
batch[batch.length-1].rawOffset; const progressIndex = startIndex + lastRaw + 1)
and pass that to session.setMetadata instead of using saved/batch.length; keep
using session.addMessages and session.setMetadata as shown but base
lastSavedIndex on raw offsets.
  // Resolve workspace and get the correct Honcho client for this agent
  const workspaceId = state.resolveWorkspace(agentId);
  const honcho = state.getHonchoClient(workspaceId);

  await state.ensureInitialized(workspaceId);
Keep parentPeer in the same workspace as the routed session.
The session is now created in workspaceId, but parentPeer is still later resolved from parentAgentId and attached to that same session. If the parent and subagent prefixes map to different workspaces, this reconnects two tenant scopes in the capture path. Only fetch/add parentPeer when state.resolveWorkspace(parentAgentId) === workspaceId; otherwise keep just the parent agent ID in metadata.
🔧 Suggested guard
+ const parentWorkspaceId =
+ isSubagent && parentAgentId ? state.resolveWorkspace(parentAgentId) : null;
const parentPeer =
- isSubagent && parentAgentId && parentAgentId !== agentId
+ isSubagent &&
+ parentAgentId &&
+ parentAgentId !== agentId &&
+ parentWorkspaceId === workspaceId
? await state.getAgentPeer(parentAgentId)
    : null;

Also applies to: line 48.
The Honcho SDK defaults to a 5s timeout. With 50-message batches plus ensureInitialized() overhead, the first save after a long session consistently times out. 30s gives adequate headroom for the init chain plus a batch write without delaying failure detection for an unreasonably long time.
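The exact SDK option isn't shown in this thread, so as a generic illustration of the same idea, a deadline can be applied around the whole init + save chain; withDeadline and its parameters are hypothetical names, not part of the Honcho SDK:

```typescript
// Generic deadline wrapper: rejects if `work` doesn't settle within `ms`.
// This bounds the init + batch-write chain as a whole rather than relying
// on a short per-request default.
function withDeadline<T>(work: Promise<T>, ms: number, label: string): Promise<T> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const deadline = new Promise<never>((_, reject) => {
    timer = setTimeout(
      () => reject(new Error(`${label} timed out after ${ms}ms`)),
      ms,
    );
  });
  // Whichever settles first wins; always clear the timer afterwards
  return Promise.race([work, deadline]).finally(() => clearTimeout(timer));
}
```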
♻️ Duplicate comments (1)
state.ts (1)
211-240: ⚠️ Potential issue | 🔴 Critical — Propagate initialization failures to concurrent callers.

The current pattern always resolves initLock in finally, even when initialization throws. Concurrent waiters (line 213) unblock and return, proceeding with ws.ownerPeer === null and ws.initialized === false. This can cause null-pointer errors in subsequent operations.

🔧 Suggested fix
  // If init is already in progress for this workspace, wait for it
  if (ws.initLock) {
    await ws.initLock;
    return;
  }
- // Acquire the init lock
- let resolveLock!: () => void;
- ws.initLock = new Promise<void>((resolve) => { resolveLock = resolve; });
-
- try {
+ ws.initLock = (async () => {
    const honcho = ws.honcho;
    const wsMeta = await honcho.getMetadata();
    ws.agentPeerMap = (wsMeta.agentPeerMap as Record<string, string>) ?? {};
    const defaultId = resolveDefaultAgentId();
    if (Object.keys(ws.agentPeerMap).length === 0) {
      ws.agentPeerMap[defaultId] = `agent-${defaultId}`;
      await honcho.setMetadata({ ...wsMeta, agentPeerMap: ws.agentPeerMap });
    } else if (Object.values(ws.agentPeerMap).includes(LEGACY_PEER_ID) && !ws.agentPeerMap[defaultId]) {
      ws.agentPeerMap[defaultId] = LEGACY_PEER_ID;
      await honcho.setMetadata({ ...wsMeta, agentPeerMap: ws.agentPeerMap });
    }
    ws.ownerPeer = await honcho.peer(OWNER_ID, { metadata: {} });
    ws.initialized = true;
+ })();
+
+ try {
+   await ws.initLock;
  } finally {
-   resolveLock();
    ws.initLock = null;
  }

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@state.ts` around lines 211 - 240, The init lock currently always resolves in the finally block causing concurrent waiters to proceed even if init failed; change the promise creation to capture both resolve and reject (e.g., let resolveLock, rejectLock; ws.initLock = new Promise<void>((res, rej) => { resolveLock = res; rejectLock = rej; })), then on successful initialization (after setting ws.ownerPeer and ws.initialized) call resolveLock(), and in a catch block call rejectLock(error) to propagate the failure to awaiters; keep clearing ws.initLock = null in finally so the lock is removed after success or failure. Use the existing symbols ws.initLock, resolveLock/rejectLock, honcho.getMetadata, ws.ownerPeer and ws.initialized to locate where to add the resolve/reject calls.
Prevents overwhelming Honcho when a session has a large unsaved backlog. If unsaved messages exceed maxBackfill, older messages are skipped and lastSavedIndex advances to only save the most recent N. Set to 0 to never backfill (only current turn onward). Default: 100.
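The clamping behavior described above could be sketched as follows; the function name and shape are illustrative, not the plugin's actual code:

```typescript
// Illustrative sketch of bounding backfill: clamp the resume offset so at
// most maxBackfill unsaved messages are replayed into Honcho.
function effectiveStartIndex(
  lastSavedIndex: number,
  totalMessages: number,
  maxBackfill: number,
): number {
  if (maxBackfill === 0) return totalMessages; // never backfill: current turn onward only
  const unsaved = totalMessages - lastSavedIndex;
  if (unsaved <= maxBackfill) return lastSavedIndex; // small backlog: save everything
  return totalMessages - maxBackfill; // large backlog: skip the oldest messages
}
```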
♻️ Duplicate comments (2)
hooks/capture.ts (2)
29-38: ⚠️ Potential issue | 🟠 Major

Still gate `parentPeer` by `workspaceId`. Line 37 can still resolve a parent peer from a different workspace than the one Line 30 selected for this session. Since `getAgentPeer()` is workspace-aware now, the later peer attach and `parentPeerId` metadata can either fail or stitch two tenant scopes back together. Only fetch/add `parentPeer` when `state.resolveWorkspace(parentAgentId) === workspaceId`; otherwise keep just `parentAgentId` in metadata.

Suggested guard:
```diff
+ const parentWorkspaceId =
+   isSubagent && parentAgentId ? state.resolveWorkspace(parentAgentId) : null;
  const parentPeer =
-   isSubagent && parentAgentId && parentAgentId !== agentId
+   isSubagent &&
+   parentAgentId &&
+   parentAgentId !== agentId &&
+   parentWorkspaceId === workspaceId
      ? await state.getAgentPeer(parentAgentId)
      : null;
```

Also applies to: 40-45, 66-67
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@hooks/capture.ts` around lines 29 - 38, The code fetches parentPeer without ensuring the parentAgentId is in the same workspace as the session, which can cross tenant boundaries; change the parentPeer resolution so you first compare state.resolveWorkspace(parentAgentId) to the already-resolved workspaceId and only call state.getAgentPeer(parentAgentId) and attach parentPeer metadata when they match; otherwise keep parentAgentId in metadata but do not call state.getAgentPeer or attach parentPeerId. Ensure this same guarded check is applied in the other similar spots (around the blocks referenced by lines ~40-45 and ~66-67) and reference variables/functions workspaceId, parentAgentId, parentPeer, state.resolveWorkspace, state.getAgentPeer, and isSubagent when implementing the guard.
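The guard can be condensed into a predicate; the signature below assumes the state methods have these shapes:

```typescript
// Hypothetical guard: only attach a parent peer when the parent agent
// resolves to the same workspace as the current session's agent, so peer
// links never stitch two tenant scopes back together.
function shouldAttachParentPeer(
  isSubagent: boolean,
  agentId: string,
  parentAgentId: string | null,
  resolveWorkspace: (id: string) => string,
): boolean {
  if (!isSubagent || !parentAgentId || parentAgentId === agentId) return false;
  return resolveWorkspace(parentAgentId) === resolveWorkspace(agentId);
}
```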
80-85: ⚠️ Potential issue | 🟠 Major

Still checkpoint progress in raw-message offsets. Line 113 still uses `startIndex + saved`, but `saved` counts extracted messages, not raw transcript positions. That already breaks when `extractMessages()` filters entries, and with Line 83 it can now even move `lastSavedIndex` backward below `effectiveStartIndex` after older backlog was intentionally skipped. If a later batch fails, the retry can resend or skip transcript chunks. Base the checkpoint on the last raw offset in each batch, anchored to `effectiveStartIndex` (or batch raw windows directly).

Also applies to: 94-96, 103-115
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@hooks/capture.ts` around lines 80 - 85, The checkpoint logic incorrectly advances lastSavedIndex using startIndex + saved where saved counts extracted messages, not raw transcript positions; update the code paths around effectiveStartIndex, extractMessages(), saved and the session.setMetadata calls to compute the checkpoint from raw offsets instead: determine the last raw message index in the processed batch (e.g., effectiveStartIndex + indexOfLastRawInBatch or compute lastRawOffset from messages array relative to effectiveStartIndex) and set lastSavedIndex to that raw offset (never move it backward below effectiveStartIndex); apply this change to the places that update session.setMetadata (the block after skipping backlog and the later updates that use startIndex + saved) so retries are anchored to raw transcript positions rather than the extracted count.
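A sketch of the raw-offset checkpoint the comment asks for; the entry shape and names are assumptions, not the hook's actual types:

```typescript
// Each raw transcript entry carries its own offset, so the checkpoint can be
// anchored to transcript positions rather than to the extracted-message count.
type RawEntry = { index: number; text: string };

function nextCheckpoint(batch: RawEntry[], effectiveStartIndex: number): number {
  if (batch.length === 0) return effectiveStartIndex;
  // Advance to just past the last raw offset this batch covered; never move
  // backward below the intentionally skipped backlog boundary.
  const lastRaw = batch[batch.length - 1].index;
  return Math.max(effectiveStartIndex, lastRaw + 1);
}
```

Because the checkpoint is a transcript position, a retry after a failed batch resumes at the correct raw offset even when noise filtering dropped entries from the batch.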
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: d0d1b739-4c10-49de-a8f5-84a17112ab4f
📒 Files selected for processing (2)
- config.ts
- hooks/capture.ts
🚧 Files skipped from review as they are similar to previous changes (1)
- config.ts
ajspig left a comment
Thanks @bernesto! Workspace isolation is the right call for multi-tenant deployments. Reviewed the code + CodeRabbit findings:
Blocking (must fix before merge):

1. `initLock` swallows errors on concurrent callers (state.ts). CodeRabbit's async-IIFE suggestion is the right pattern.
2. `resolveWorkspace()` doesn't canonicalize the agentId, but `getAgentPeer()` does. A case mismatch silently routes to the wrong workspace. Normalize before prefix matching.
3. Agent-scoped tools (message-search, session, ask) still use `state.honcho` / `state.ownerPeer` from the default workspace. These need to resolve via `getHonchoClient(wsId)` and `getOwnerPeer(wsId)` like the hooks do. Without this, tools are broken for any non-default-workspace agent.
4. The `lastSavedIndex` checkpoint tracks the extracted count, not raw offsets. If noise filtering drops messages, the index drifts.

Suggestions:

5. Nice-to-have: consider guarding the `parentPeer` lookup so it only fetches when the parent resolves to the same workspace. Low probability in practice, but defensive.
6. Consider splitting `maxBackfill` + batched writes into a separate PR. Easier to review and bisect independently.
Happy to re-review once those are addressed.
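The per-workspace client routing that point 3 asks for could be sketched as follows; `HonchoClient` here is a stand-in for the real SDK client, not its actual constructor:

```typescript
// Stand-in for the real Honcho SDK client; constructed once per workspace.
class HonchoClient {
  constructor(public readonly workspaceId: string) {}
}

const clients = new Map<string, HonchoClient>();

// Lazily create and cache one client per workspace, so tools and hooks that
// resolve the same workspace share a single instance.
function getHonchoClient(workspaceId: string): HonchoClient {
  let client = clients.get(workspaceId);
  if (!client) {
    client = new HonchoClient(workspaceId);
    clients.set(workspaceId, client);
  }
  return client;
}
```

Tools would then call `getHonchoClient(resolveWorkspace(agentId))` instead of reaching for the default-workspace `state.honcho`.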
Summary
Adds per-agent workspace routing to support multi-org deployments where agents with different ID prefixes should store memory in separate Honcho workspaces.
Motivation
In multi-tenant OpenClaw deployments, agents from different organizations (e.g. `nr_support` for Neoreef, `hb_coo` for HER Beauty) share one Honcho workspace. This means Honcho's reasoning engine derives conclusions from all conversations across all orgs, leading to cross-org data leakage — an agent from one org can surface insights derived from another org's conversations.

Workspace-level isolation is the correct boundary. This PR adds a config-driven mapping from agent ID prefixes to Honcho workspace IDs.
Changes
Config (`config.ts`)

- Adds optional `workspaceMapping: Record<string, string>` field (agent ID prefix → workspace ID)

State (`state.ts`)

- `WorkspaceState` type tracks per-workspace: `honcho` client, `ownerPeer`, `agentPeers`, `agentPeerMap`, init state + promise-based lock
- `resolveWorkspace(agentId)` — longest-prefix match against the mapping, fallback to the default `workspaceId`
- `getHonchoClient(workspaceId)` — lazily creates and caches SDK clients per workspace
- `ensureInitialized(workspaceId?)` — serialized per-workspace to prevent concurrent init storms
- `getAgentPeer(agentId?)` — routes to the correct workspace before resolving the peer

Hooks (`capture.ts`, `context.ts`)

- Use the workspace-resolved client and `ownerPeer`

Plugin manifest (`openclaw.plugin.json`)

- Adds `workspaceMapping` to configSchema and uiHints

Example Config
```json
{
  "config": {
    "workspaceId": "default",
    "workspaceMapping": {
      "nr_": "neoreef",
      "hb_": "her-beauty",
      "lc_": "lifecycle"
    }
  }
}
```

Agents matching a prefix route to that workspace. Everything else falls back to `workspaceId`.

Backward Compatibility
When `workspaceMapping` is not configured, behavior is identical to the current release. No breaking changes.
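The longest-prefix routing described above, including the case normalization the review asks for, can be sketched as a standalone approximation (not the plugin's actual implementation):

```typescript
// Longest-prefix match of an agent ID against the workspaceMapping config,
// normalizing case first so e.g. "NR_support" and "nr_support" route alike.
function resolveWorkspace(
  agentId: string,
  mapping: Record<string, string>,
  defaultWorkspaceId: string,
): string {
  const normalized = agentId.toLowerCase();
  let best: string | null = null;
  let bestLen = -1;
  for (const [prefix, workspaceId] of Object.entries(mapping)) {
    if (normalized.startsWith(prefix.toLowerCase()) && prefix.length > bestLen) {
      best = workspaceId;
      bestLen = prefix.length;
    }
  }
  return best ?? defaultWorkspaceId;
}
```

Taking the longest matching prefix means a more specific mapping entry always wins over a shorter, overlapping one.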