Merged (22 commits)
- `28cb11d` feat: multi-peer group chat support with review fixes (Apr 6, 2026)
- `fdecfd2` fix: address CodeRabbit review feedback (Apr 6, 2026)
- `fbef6b2` Merge branch 'main' into feat/multi-peer-with-fixes (ajspig, Apr 9, 2026)
- `e798223` fix: restore cli.ts to upstream main, keep only -p/--peer flag additions (Apr 12, 2026)
- `71cca27` merge: incorporate upstream v1.3.2 (configurable timeout, multi-agent…) (Apr 12, 2026)
- `01222c2` fix: rename humanPeer → participantPeer (ajspig, Apr 15, 2026)
- `2e17e68` fix: removing peer mapping in favor of a more comprehensive solution (ajspig, Apr 15, 2026)
- `7cb869d` fix: extract sender_id from inbound message in before_prompt_build (ajspig, Apr 15, 2026)
- `a25ef7e` feat: adding multi-peer support (ajspig, Apr 15, 2026)
- `96f3ca2` fix: adding peer mappings to ~./honcho (ajspig, Apr 24, 2026)
- `d430714` fix: add peer mapping to gateway log (ajspig, Apr 24, 2026)
- `417e8c7` Merge origin/main into feat/multi-peer-support (ajspig, Apr 24, 2026)
- `5805aa2` chore: language cleanup (ajspig, Apr 24, 2026)
- `e03518a` fix: owner_id fallback for legacy and new installs (ajspig, Apr 27, 2026)
- `a6314e4` fix: cr suggestion, harden Honcho error handling (ajspig, Apr 27, 2026)
- `03f6dc2` docs: update readme (ajspig, Apr 27, 2026)
- `2f8e77f` chore: organizing tests (ajspig, Apr 27, 2026)
- `3bfc9e1` fix: pruned participantSenderIds plural field (ajspig, Apr 27, 2026)
- `3660356` fix: console.warn (ajspig, Apr 27, 2026)
- `1c1805c` fix: Flushes now re-read openclaw-peers.json and merge it with memory… (ajspig, Apr 27, 2026)
- `23a5859` fix: simplifying cleaning message object (ajspig, Apr 28, 2026)
- `e962fac` fix: Reviewing peer ID resolution in the openclaw-honcho plugin (ajspig, Apr 28, 2026)
15 changes: 14 additions & 1 deletion README.md
@@ -110,13 +110,26 @@ Honcho's `observeOthers` controls whether a peer forms representations of other

Set `ownerObserveOthers: true` to let the owner peer also observe agent messages. This gives Honcho perspective-aware memory: the owner stores conclusions about the agent based only on what it witnessed, enabling the user's representation to reflect the full conversational context rather than just their own side of it.

### Multi-Peer Participants

In group chats (Discord, Slack, etc.), the plugin extracts the sender's platform ID from each inbound message and uses it directly as the Honcho peer ID. This gives every participant — humans and any other bots in the room — their own memory and representation in Honcho, rather than attributing all non-agent messages to a single generic peer.

**How it works:**
- The plugin reads the `sender_id` field from OpenClaw's "Conversation info (untrusted metadata):" block, which OpenClaw injects on every inbound message that has a known sender — including 1-on-1 DMs on platforms like Telegram, not just group chats.
- Each distinct sender ID becomes its own Honcho peer (e.g., `U07KX7DG002` becomes the Honcho peer ID directly).
- The default `owner` peer is only used as a fallback when a message has no sender metadata at all (e.g., synthetic/system messages, or channel integrations that don't emit a `Conversation info` block). On platforms like Telegram, even DMs are attributed to the sender's own peer, not `owner`.
- Each OpenClaw agent gets its own Honcho peer (default `agent-{id}`, e.g., `agent-main`).
- All tools (`honcho_context`, `honcho_ask`, etc.) automatically resolve the correct peer for the current session.
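
The mapping from an inbound message to a Honcho peer ID can be sketched as follows. The message shape mirrors the `Conversation info` block described above; `senderIdOf` is a simplified illustrative re-implementation, not the plugin's actual `extractSenderId`:

```typescript
// Sketch: how an inbound group-chat message maps to a Honcho peer ID.
const SENTINEL = "Conversation info (untrusted metadata):";
const FENCE = "`".repeat(3); // built at runtime to avoid literal fences in this doc

const inbound = [
  SENTINEL,
  FENCE + "json",
  JSON.stringify({ sender_id: "U07KX7DG002", channel: "C-general" }),
  FENCE,
  "",
  "hey, can you summarize yesterday's thread?",
].join("\n");

// Find the first sentinel, then parse the fenced JSON directly under it.
function senderIdOf(content: string): string | undefined {
  const lines = content.split("\n");
  const i = lines.findIndex((l) => l.trim() === SENTINEL);
  if (i < 0 || lines[i + 1]?.trim() !== FENCE + "json") return undefined;
  const end = lines.findIndex((l, n) => n > i + 1 && l.trim() === FENCE);
  if (end < 0) return undefined;
  try {
    const meta = JSON.parse(lines.slice(i + 2, end).join("\n"));
    return typeof meta.sender_id === "string" && meta.sender_id ? meta.sender_id : undefined;
  } catch {
    return undefined; // malformed JSON: caller falls back to the default peer
  }
}

const peerId = senderIdOf(inbound) ?? "owner";
// peerId === "U07KX7DG002" — the platform ID is used directly
```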

**One-turn warmup for context injection in brand-new sessions.** Message *attribution* (capture) works correctly from the first turn — `extractSenderId` reads `sender_id` from each message's metadata block and routes to the right peer regardless of session state. *Context injection*, however, runs at `before_prompt_build` and looks up the session's primary participant peer via `resolveSessionParticipantPeer`, which reads `participantSenderId` from session metadata. On a brand-new session there is no session metadata yet, so the first turn's prompt context is built for the default `owner` peer. From the second turn onward (after capture writes `participantSenderId`), context is built for the resolved sender. Sessions whose channel never emits sender metadata (no `Conversation info` block) stay attributed to `owner`.
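
The resolution order above can be sketched in a few lines. `participantSenderId` matches this PR's session metadata field; `SessionMeta` and `resolveContextPeerId` are hypothetical names for illustration:

```typescript
// Sketch of context-injection peer resolution at before_prompt_build.
interface SessionMeta {
  participantSenderId?: string; // written by capture after the first flush
}

function resolveContextPeerId(meta: SessionMeta | undefined): string {
  // Turn 1 of a brand-new session: no session metadata yet, so context
  // is built for the default owner peer. Turn 2+: capture has recorded
  // the last active sender, so context targets that sender's peer.
  return meta?.participantSenderId ?? "owner";
}

resolveContextPeerId(undefined);                              // "owner" (first turn)
resolveContextPeerId({ participantSenderId: "U07KX7DG002" }); // sender's own peer
```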

## How it works

Once installed, the plugin works automatically:

- **Message Observation** — After every AI turn, the conversation is persisted to Honcho. Both user and agent messages are observed, allowing Honcho to build and refine its models. Message capture starts when the plugin is active for a session, and preserves original timestamps for captured messages. Messages are also flushed before session compaction and `/new`/`/reset`, so no conversation data is lost.
- **Tool-Based Context Access** — The AI can query Honcho mid-conversation using tools like `honcho_context`, `honcho_search_conclusions`, and `honcho_ask` to retrieve relevant context about the user. Context is injected during OpenClaw's `before_prompt_build` phase, ensuring accurate turn boundaries.
- **Dual Peer Model** — Honcho maintains separate representations: one for the user (preferences, facts, communication style) and one for the agent (personality, learned behaviors). Each OpenClaw agent gets its own Honcho peer (`agent-{id}`), so multi-agent workspaces maintain isolated memory.
- **Multi-Peer Model** — Honcho maintains separate representations for each participant. Whenever an inbound message carries a `sender_id` (group chats, and DMs on platforms like Telegram), that sender gets their own peer, using their platform ID directly as the Honcho peer ID. Each OpenClaw agent gets its own Honcho peer (default `agent-{id}`). The default `owner` peer is only used as a fallback when a channel emits no sender metadata. This gives every participant isolated, personalized memory.
- **Clean Persistence** — Platform metadata (conversation info, sender headers, thread context, forwarded messages) is stripped before saving to Honcho, ensuring only meaningful content is persisted. Noise messages (heartbeat acks, cron boilerplate, startup commands) are dropped entirely via configurable pattern filters.
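
The noise-pattern convention (regex patterns prefixed with `/`, e.g. `/^HEARTBEAT/i`, everything else treated as a substring match) can be sketched like this; `isNoise` is a simplified stand-in for the plugin's `shouldSkipMessage`:

```typescript
// Sketch of the configurable noise filter described above.
function isNoise(content: string, patterns: string[]): boolean {
  return patterns.some((p) => {
    if (p.startsWith("/")) {
      // "/body/flags" form: compile as a regex.
      const close = p.lastIndexOf("/");
      if (close > 0) {
        try {
          return new RegExp(p.slice(1, close), p.slice(close + 1)).test(content);
        } catch {
          return false; // invalid pattern: never drop real content
        }
      }
    }
    return content.includes(p); // plain substring match
  });
}

isNoise("HEARTBEAT ack", ["/^HEARTBEAT/i"]); // true  -> dropped entirely
isNoise("hello world", ["/^HEARTBEAT/i"]);   // false -> persisted
```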

Honcho handles all reasoning and synthesis in the cloud.
12 changes: 8 additions & 4 deletions commands/cli.ts
@@ -442,11 +442,13 @@ export function registerCli(api: OpenClawPluginApi, state: PluginState): void {
.command("ask <question>")
.description("Ask Honcho about the user")
.option("-a, --agent <id>", "Agent ID to query as (default: primary agent)")
.action(async (question: string, options: { agent?: string }) => {
.option("-p, --peer <id>", "Channel peer ID or Honcho peer ID to target (default: owner)")
.action(async (question: string, options: { agent?: string; peer?: string }) => {
try {
await state.ensureInitialized();
const agentPeer = await state.getAgentPeer(options.agent ?? state.resolveDefaultAgentId());
const answer = await agentPeer.chat(question, { target: state.ownerPeer! });
const participantPeer = await state.getParticipantPeer(options.peer);
const answer = await agentPeer.chat(question, { target: participantPeer });
console.log(answer ?? "No information available.");
} catch (error) {
console.error(`Failed to query: ${error}`);
@@ -458,10 +460,12 @@
.description("Semantic search over Honcho memory")
.option("-k, --top-k <number>", "Number of results to return", "10")
.option("-d, --max-distance <number>", "Maximum semantic distance (0-1)", "0.5")
.action(async (query: string, options: { topK: string; maxDistance: string }) => {
.option("-p, --peer <id>", "Channel peer ID or Honcho peer ID to target (default: owner)")
.action(async (query: string, options: { topK: string; maxDistance: string; peer?: string }) => {
try {
await state.ensureInitialized();
const representation = await state.ownerPeer!.representation({
const participantPeer = await state.getParticipantPeer(options.peer);
const representation = await participantPeer.representation({
searchQuery: query,
searchTopK: parseInt(options.topK, 10),
searchMaxDistance: parseFloat(options.maxDistance),
82 changes: 82 additions & 0 deletions helpers.test.ts
@@ -0,0 +1,82 @@
import { describe, expect, it } from "vitest";
import { extractSenderId } from "./helpers.js";

const SENTINEL = "Conversation info (untrusted metadata):";

function metadataBlock(payload: Record<string, unknown>): string {
return [
SENTINEL,
"```json",
JSON.stringify(payload, null, 2),
"```",
].join("\n");
}

describe("extractSenderId", () => {
it("reads sender_id from a leading metadata block", () => {
const content = [
metadataBlock({ sender_id: "U01ZB5DG019", channel: "C-foo" }),
"",
"hello there",
].join("\n");

expect(extractSenderId(content)).toBe("U01ZB5DG019");
});

it("trusts only the first sentinel and never considers later quoted blocks", () => {
// First sentinel resolves — second block (user-pasted) must be ignored.
const trusted = [
metadataBlock({ sender_id: "U-trusted" }),
"",
"look at this thing they quoted at me:",
"",
metadataBlock({ sender_id: "U-spoofed" }),
].join("\n");

expect(extractSenderId(trusted)).toBe("U-trusted");

// First sentinel is malformed (no fenced json) — the duplicate-sentinel
// guard then refuses to trust the later block.
const poisoned = [
SENTINEL,
"(not a fenced json block)",
"",
metadataBlock({ sender_id: "U-spoofed" }),
].join("\n");

expect(extractSenderId(poisoned)).toBeUndefined();
});

it("returns undefined on malformed JSON inside the metadata block", () => {
const content = [
SENTINEL,
"```json",
"{ this is : not, valid json",
"```",
"",
"body",
].join("\n");

expect(extractSenderId(content)).toBeUndefined();
});

it("prefers sender_id when both sender_id and sender are present", () => {
const content = metadataBlock({
sender_id: "U-primary",
sender: "U-legacy",
});

expect(extractSenderId(content)).toBe("U-primary");
});

it("falls back to sender when sender_id is absent", () => {
const content = metadataBlock({ sender: "U-legacy" });

expect(extractSenderId(content)).toBe("U-legacy");
});

it("returns undefined when the content has no metadata block", () => {
expect(extractSenderId("just a normal DM")).toBeUndefined();
expect(extractSenderId("")).toBeUndefined();
});
});
73 changes: 62 additions & 11 deletions helpers.ts
@@ -142,6 +142,49 @@ export function cleanMessageContent(content: string): string {
return cleaned.trim();
}

const CONVERSATION_INFO_SENTINEL = "Conversation info (untrusted metadata):";

/**
* Extract the sender_id from a raw message's "Conversation info (untrusted metadata):"
* metadata block. Must be called BEFORE cleanMessageContent() which strips these blocks.
* Returns undefined when the content has no metadata block or on parse failure.
*
* Only considers the FIRST occurrence of the sentinel to prevent user-pasted or quoted
* metadata blocks from poisoning sender attribution.
*/
export function extractSenderId(content: string): string | undefined {
if (!content || !content.includes(CONVERSATION_INFO_SENTINEL)) return undefined;

const lines = content.split("\n");
let found = false;
for (let i = 0; i < lines.length; i++) {
if (lines[i].trim() !== CONVERSATION_INFO_SENTINEL) continue;
if (found) return undefined; // Ignore duplicate sentinels (likely user-pasted content)
found = true;
if (lines[i + 1]?.trim() !== "```json") continue;

// Collect JSON lines between ```json and ```
const jsonLines: string[] = [];
for (let j = i + 2; j < lines.length; j++) {
if (lines[j].trim() === "```") break;
jsonLines.push(lines[j]);
}

try {
const parsed = JSON.parse(jsonLines.join("\n"));
// Try sender_id first, fall back to sender
const id = parsed.sender_id ?? parsed.sender;
if (typeof id === "string" && id.length > 0) {
return id;
}
} catch {
// Malformed JSON — return undefined
}
return undefined;
}
return undefined;
}

/**
* Returns true if the message should be dropped entirely.
* Patterns starting with "/" are treated as anchored regexes (e.g. "/^HEARTBEAT/i").
@@ -167,9 +210,10 @@ export function shouldSkipMessage(content: string, noisePatterns: string[]): boo

export function extractMessages(
rawMessages: unknown[],
ownerPeer: Peer,
defaultParticipantPeer: Peer,
agentPeer: Peer,
noisePatterns: string[] = []
noisePatterns: string[] = [],
resolvePeer?: (senderId: string) => Peer | undefined,
): MessageInput[] {
const result: MessageInput[] = [];

@@ -180,11 +224,12 @@

if (role !== "user" && role !== "assistant") continue;

let content = "";
// Extract raw content before cleaning
let rawContent = "";
if (typeof m.content === "string") {
content = m.content;
rawContent = m.content;
} else if (Array.isArray(m.content)) {
content = m.content
rawContent = m.content
.filter(
(block: unknown) =>
typeof block === "object" &&
@@ -196,17 +241,23 @@
.join("\n");
}

content = cleanMessageContent(content);
// For user messages, extract sender ID before cleaning strips metadata
let peer: Peer;
if (role === "user") {
const senderId = extractSenderId(rawContent);
peer = (senderId && resolvePeer?.(senderId)) || defaultParticipantPeer;
} else {
peer = agentPeer;
}

let content = cleanMessageContent(rawContent);
content = content.trim();

if (!content) continue;
if (shouldSkipMessage(content, noisePatterns)) continue;

if (content) {
const peer = role === "user" ? ownerPeer : agentPeer;
const ts = typeof m.timestamp === "number" ? new Date(m.timestamp) : undefined;
result.push(peer.message(content, ts ? { createdAt: ts } : undefined));
}
const ts = typeof m.timestamp === "number" ? new Date(m.timestamp) : undefined;
result.push(peer.message(content, ts ? { createdAt: ts } : undefined));
}

return result;
116 changes: 104 additions & 12 deletions hooks/capture.ts
@@ -6,9 +6,32 @@ import {
buildSessionKey,
isSubagentSession,
extractMessages,
extractSenderId,
} from "../helpers.js";
import { subagentParentMap } from "./subagent.js";

/**
* Extract raw text content from a message object (before cleaning).
*/
function getRawContent(msg: unknown): string {
if (!msg || typeof msg !== "object") return "";
const m = msg as Record<string, unknown>;
if (typeof m.content === "string") return m.content;
if (Array.isArray(m.content)) {
return m.content
.filter(
(block: unknown) =>
typeof block === "object" &&
block !== null &&
(block as Record<string, unknown>).type === "text"
)
.map((block: unknown) => (block as Record<string, unknown>).text)
.filter((t): t is string => typeof t === "string")
.join("\n");
}
return "";
}

/**
* Core message capture logic shared by agent_end, before_compaction, and before_reset.
* Returns the number of new messages saved (or 0 if none).
Expand Down Expand Up @@ -55,30 +78,99 @@ async function flushMessages(
const lastSavedIndex = Math.min(Math.max(rawLastSavedIndex, 0), messages.length);
const startIndex = Math.max(turnStartIndex, lastSavedIndex);

const peerConfigs: Array<[string, { observeMe: boolean; observeOthers: boolean }]> = [
[OWNER_ID, { observeMe: true, observeOthers: state.cfg.ownerObserveOthers }],
[agentPeer.id, { observeMe: true, observeOthers: true }],
];
if (messages.length <= startIndex) {
return 0;
}

const newRawMessages = messages.slice(startIndex);

// Pre-resolve participant peers for all unique sender IDs in this batch
const senderIds = new Set<string>();
let lastSenderId: string | undefined;
let userMsgCount = 0;
for (const msg of newRawMessages) {
if (!msg || typeof msg !== "object") continue;
const m = msg as Record<string, unknown>;
if (m.role !== "user") continue;
userMsgCount++;
const rawContent = getRawContent(msg);
const senderId = extractSenderId(rawContent);
if (senderId) {
senderIds.add(senderId);
lastSenderId = senderId;
} else {
const hasConvInfo = rawContent.includes("Conversation info (untrusted metadata):");
api.logger.debug?.(`[honcho] User message without sender_id (hasConvInfo=${hasConvInfo}, contentLen=${rawContent.length})`);
}
}
if (senderIds.size > 0) {
api.logger.debug?.(`[honcho] Resolved ${senderIds.size} unique sender(s) from ${userMsgCount} user message(s)`);
}

// Parallel peer resolution — avoids sequential await bottleneck in group chats.
const resolvedPeers = new Map<string, Awaited<ReturnType<typeof state.getParticipantPeer>>>();
const senderIdArray = [...senderIds];
const peers = await Promise.all(senderIdArray.map((id) => state.getParticipantPeer(id)));
for (let i = 0; i < senderIdArray.length; i++) {
resolvedPeers.set(senderIdArray[i], peers[i]);
}

const defaultParticipantPeer = await state.getParticipantPeer();

// Build peer configs: default owner + all resolved participant peers + agent + parent
const peerConfigMap = new Map<string, { observeMe: boolean; observeOthers: boolean }>();
peerConfigMap.set(OWNER_ID, { observeMe: true, observeOthers: state.cfg.ownerObserveOthers });
for (const [, peer] of resolvedPeers) {
if (peer.id !== OWNER_ID) {
peerConfigMap.set(peer.id, { observeMe: true, observeOthers: state.cfg.ownerObserveOthers });
}
}
peerConfigMap.set(agentPeer.id, { observeMe: true, observeOthers: true });
if (parentPeer) {
peerConfigs.push([parentPeer.id, { observeMe: false, observeOthers: true }]);
peerConfigMap.set(parentPeer.id, { observeMe: false, observeOthers: true });
}

const peerConfigs = Array.from(peerConfigMap.entries()) as Array<
[string, { observeMe: boolean; observeOthers: boolean }]
>;
await session.addPeers(peerConfigs);

if (messages.length <= startIndex) {
return 0;
}
const extracted = extractMessages(
newRawMessages,
defaultParticipantPeer,
agentPeer,
state.cfg.noisePatterns,
(senderId) => resolvedPeers.get(senderId),
);

const newRawMessages = messages.slice(startIndex);
const extracted = extractMessages(newRawMessages, state.ownerPeer!, agentPeer, state.cfg.noisePatterns);
// Store sender IDs in session metadata for tool resolution.
// participantSenderId = last active sender (default for tools).
// participantSenderIds = all known senders in this session (for future multi-target tools).
// Named "sender" (not "peer") to distinguish raw channel IDs from resolved Honcho peer IDs.
const previousSenderIds: string[] = Array.isArray(existingMeta.participantSenderIds)
? (existingMeta.participantSenderIds as string[])
: [];
const allSenderIds = [...new Set([...previousSenderIds, ...senderIds])];

const updatedMeta: Record<string, unknown> = {
...existingMeta,
...sessionMeta,
lastSavedIndex: messages.length,
};
if (lastSenderId) {
updatedMeta.participantSenderId = lastSenderId;
}
if (allSenderIds.length > 0) {
updatedMeta.participantSenderIds = allSenderIds;
}

if (extracted.length === 0) {
await session.setMetadata({ ...existingMeta, ...sessionMeta, lastSavedIndex: messages.length });
await session.setMetadata(updatedMeta);
return 0;
}

await session.addMessages(extracted);
await session.setMetadata({ ...existingMeta, ...sessionMeta, lastSavedIndex: messages.length });
await session.setMetadata(updatedMeta);
return extracted.length;
}
