**examples/pydantic-ai/README.md** (new file, +113 lines)
# Honcho Memory Integration for Pydantic AI

Give your [Pydantic AI](https://ai.pydantic.dev) agents persistent memory using [Honcho](https://honcho.dev).

## Features

- **Persistent Memory**: Every conversation turn is saved to Honcho and automatically injected into the agent's system prompt on the next turn.
- **Natural Language Recall**: The agent can query Honcho's Dialectic API to answer questions like "What are my hobbies?" or "What did we talk about last time?"
- **Context Injection**: Conversation history is retrieved from Honcho and appended to the system prompt via `@agent.system_prompt`.
- **In-Session Coherence**: Pydantic AI's `message_history` parameter keeps the agent coherent within a single session, complementing Honcho's cross-session memory.

## Structure

```
pydantic-ai/
├── README.md
└── python/
    ├── main.py
    ├── pyproject.toml
    └── tools/
        ├── client.py
        ├── save_memory.py
        └── get_context.py
```

## Environment Variables

Create a `.env` file in the `python/` directory:

```env
HONCHO_API_KEY=your-honcho-api-key
HONCHO_WORKSPACE_ID=default
OPENAI_API_KEY=your-openai-api-key
```

Get your Honcho API key at [honcho.dev](https://honcho.dev).

## Installation

```bash
pip install pydantic-ai honcho-ai python-dotenv
```

Or with uv:

```bash
uv add pydantic-ai honcho-ai python-dotenv
```

## Quick Start

```python
import asyncio
from main import chat

async def main():
    message_history = []
    # First turn
    response, message_history = await chat("alice", "I love hiking in the mountains", "session-1", message_history)
    print(response)
    # Second turn: history is threaded automatically
    response, message_history = await chat("alice", "What do you remember about me?", "session-1", message_history)
    print(response)

asyncio.run(main())
```

## Run the Demo

```bash
cd python
python main.py
```

## How It Works

### 1. Dynamic System Prompt

The `@agent.system_prompt` decorator registers `honcho_system_prompt()`, which is called by Pydantic AI before every LLM request. It fetches recent messages from Honcho and appends them to the system prompt:

```
You are a helpful assistant with persistent memory powered by Honcho.

## Conversation History
User: I love hiking
Assistant: That sounds wonderful! Do you have a favorite trail?
```
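The history segment is plain string formatting over the OpenAI-style dicts that `get_context` returns. Isolated as a pure function (a sketch mirroring the one-liner in `main.py`):

```python
def format_history(history: list[dict[str, str]]) -> str:
    """Render Honcho messages as 'Role: content' lines for the prompt."""
    return "\n".join(f"{m['role'].title()}: {m['content']}" for m in history)

msgs = [
    {"role": "user", "content": "I love hiking"},
    {"role": "assistant", "content": "That sounds wonderful!"},
]
print(format_history(msgs))
# User: I love hiking
# Assistant: That sounds wonderful!
```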

### 2. Memory Tool

The `@agent.tool` decorator registers `query_memory()`, which calls Honcho's Dialectic API. When the user asks "What do you remember about me?", the agent invokes this tool to query the semantic memory layer.

### 3. Message History Threading

`chat()` returns `(response, result.all_messages())`. Pass the returned history back on the next call to maintain in-session coherence. Honcho provides cross-session memory; `message_history` provides within-session context.

### 4. Auto-Save

The `chat()` function saves the user message before the agent runs and the assistant response after, keeping Honcho in sync with every turn.
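`tools/save_memory.py` itself is not shown in this diff. A minimal sketch of what it plausibly contains, assuming the `honcho` SDK's `session.add_messages` and `peer.message` helpers and the `get_client` factory from `tools/client.py` (verify against the actual file):

```python
def peer_id_for(role: str, user_id: str, assistant_id: str = "assistant") -> str:
    """Route a message to the right Honcho peer based on its role."""
    return user_id if role == "user" else assistant_id

def save_memory(user_id: str, content: str, role: str, session_id: str) -> None:
    """Persist one message to the Honcho session (hypothetical sketch)."""
    # Deferred import so the pure helper above works without the SDK installed.
    from tools.client import get_client

    honcho = get_client()
    # "user" turns are attributed to the human peer; everything else to the agent peer.
    peer = honcho.peer(peer_id_for(role, user_id))
    session = honcho.session(session_id)
    session.add_messages([peer.message(content)])
```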

## Concept Mapping

| Pydantic AI | Honcho |
|---|---|
| `deps.ctx.user_id` | Peer (human) |
| `deps.ctx.assistant_id` | Peer (agent) |
| `deps.ctx.session_id` | Session |
| `message_history` | In-session context |
| Agent input | Message |
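Concretely, one `HonchoContext` (from `tools/client.py`) carries all three identifiers; the dataclass is reproduced here standalone so the mapping can be run directly:

```python
from dataclasses import dataclass

@dataclass
class HonchoContext:
    user_id: str                     # maps to a Honcho peer (the human)
    session_id: str                  # maps to a Honcho session
    assistant_id: str = "assistant"  # maps to a Honcho peer (the agent)

ctx = HonchoContext(user_id="alice", session_id="session-1")
print(ctx)
# HonchoContext(user_id='alice', session_id='session-1', assistant_id='assistant')
```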

## License

AGPL-3.0-or-later
**examples/pydantic-ai/python/main.py** (new file, +152 lines)
"""Pydantic AI + Honcho persistent memory integration.

Demonstrates a conversational agent that remembers users across sessions.
Honcho stores every message and builds a long-term representation of the user;
the agent injects that context into its system prompt on every turn and can
query memory on demand via the ``query_memory`` tool.

Usage:
    python main.py

Environment variables:
    HONCHO_API_KEY       Required. Your Honcho API key from honcho.dev.
    HONCHO_WORKSPACE_ID  Optional. Workspace ID (default: "default").
    OPENAI_API_KEY       Required. Your OpenAI API key.
"""

from __future__ import annotations

import asyncio
from dataclasses import dataclass

from pydantic_ai import Agent, RunContext
from pydantic_ai.messages import ModelMessage

from tools.client import HonchoContext, get_client
from tools.get_context import get_context
from tools.save_memory import save_memory


@dataclass
class HonchoAgentDeps:
    """Dependencies injected into every Pydantic AI agent call.

    Attributes:
        ctx: Honcho identity for the current conversation turn.
    """

    ctx: HonchoContext


honcho_agent: Agent[HonchoAgentDeps, str] = Agent(
    "openai:gpt-4.1-mini",
    deps_type=HonchoAgentDeps,
    output_type=str,
    system_prompt=(
        "You are a helpful assistant with persistent memory powered by Honcho. "
        "You remember users across conversations. "
        "When a user asks what you remember about them, use the query_memory tool."
    ),
)


@honcho_agent.system_prompt
def honcho_system_prompt(run_ctx: RunContext[HonchoAgentDeps]) -> str:
    """Append Honcho conversation history to the system prompt.

    Called by Pydantic AI before every LLM request. Returns an additional
    system-prompt segment containing the recent session history fetched from
    Honcho. Returns an empty string when the session has no history yet.

    Args:
        run_ctx: The run context exposing ``HonchoAgentDeps``.

    Returns:
        A formatted history string, or ``""`` if no history exists.
    """
    history = get_context(run_ctx.deps.ctx, tokens=2000)
    if not history:
        return ""
    formatted = "\n".join(f"{m['role'].title()}: {m['content']}" for m in history)
    return f"\n\n## Conversation History\n{formatted}"


@honcho_agent.tool
def query_memory(run_ctx: RunContext[HonchoAgentDeps], query: str) -> str:
    """Query Honcho's Dialectic API to recall facts about the current user.

    Use this when the user asks what you remember about them or their past
    conversations.

    Args:
        run_ctx: The run context exposing ``HonchoAgentDeps``.
        query: Natural language question about the user.

    Returns:
        A natural language answer from Honcho's memory.
    """
    ctx = run_ctx.deps.ctx
    honcho = get_client()
    peer = honcho.peer(ctx.user_id)
    response = peer.chat(query=query)
    return str(response) if response else "No relevant information found in memory."


async def chat(
    user_id: str,
    message: str,
    session_id: str,
    message_history: list[ModelMessage] | None = None,
) -> tuple[str, list[ModelMessage]]:
    """Run one conversation turn with persistent Honcho memory.

    Pydantic AI's ``message_history`` parameter lets the agent maintain
    in-session coherence across turns; it is separate from Honcho's
    long-term cross-session memory.

    Args:
        user_id: Unique identifier for the user.
        message: The user's input message.
        session_id: Identifier for the current conversation session.
        message_history: Prior messages for in-session coherence.

    Returns:
        Tuple of ``(response_text, updated_message_history)``.
    """
    ctx = HonchoContext(user_id=user_id, session_id=session_id)
    deps = HonchoAgentDeps(ctx=ctx)

    save_memory(user_id, message, "user", session_id)

    result = await honcho_agent.run(
        message,
        deps=deps,
        message_history=message_history or [],
    )
    response = str(result.output)

    save_memory(user_id, response, "assistant", session_id)
    return response, result.all_messages()


async def main() -> None:
    print("Pydantic AI HonchoMemoryAgent — type 'quit' to exit\n")
    user_id = "demo-user"
    session_id = "demo-session"
    message_history: list[ModelMessage] = []

    while True:
        user_input = input("You: ").strip()
        if not user_input:
            continue
        if user_input.lower() in ("quit", "exit"):
            break
        response, message_history = await chat(
            user_id, user_input, session_id, message_history
        )
        print(f"Agent: {response}\n")


if __name__ == "__main__":
    asyncio.run(main())
**examples/pydantic-ai/python/pyproject.toml** (new file, +14 lines)
[project]
name = "honcho-pydantic-ai-example"
version = "1.0.0"
description = "Pydantic AI integration with Honcho for persistent memory"
requires-python = ">=3.10"
dependencies = [
    "pydantic-ai>=0.0.14",
    "honcho-ai>=2.0.0",
    "python-dotenv>=1.0.0",
]

[build-system]
requires = ["hatchling"]
build-backend = "hatchling.build"
**examples/pydantic-ai/python/tools/client.py** (new file, +50 lines)
"""Honcho client initialization and context for Pydantic AI integration."""

from __future__ import annotations

import os
from dataclasses import dataclass, field

from dotenv import load_dotenv
from honcho import Honcho

load_dotenv()


@dataclass
class HonchoContext:
    """Holds Honcho identity for a single conversation turn.

    Attributes:
        user_id: Unique identifier for the human peer.
        session_id: Identifier for the current conversation session.
        assistant_id: Peer ID for the assistant. Defaults to ``"assistant"``.
    """

    user_id: str
    session_id: str
    assistant_id: str = field(default="assistant")


def get_client(workspace_id: str | None = None) -> Honcho:
    """Initialize and return a Honcho client.

    Args:
        workspace_id: Optional workspace ID override. Falls back to
            ``HONCHO_WORKSPACE_ID`` env var, then to ``"default"``.

    Returns:
        Configured Honcho client instance.

    Raises:
        ValueError: If ``HONCHO_API_KEY`` is not set.
    """
    api_key = os.getenv("HONCHO_API_KEY")
    if not api_key:
        raise ValueError(
            "HONCHO_API_KEY is required. Set it in your environment or .env file."
        )

    env_workspace = os.getenv("HONCHO_WORKSPACE_ID")
    resolved_workspace = workspace_id or env_workspace or "default"
    return Honcho(api_key=api_key, workspace_id=resolved_workspace)
**examples/pydantic-ai/python/tools/get_context.py** (new file, +30 lines)
"""Retrieve Honcho conversation context formatted for LLM injection."""

from __future__ import annotations

from .client import HonchoContext, get_client


def get_context(
    ctx: HonchoContext,
    tokens: int = 2000,
) -> list[dict[str, str]]:
    """Retrieve conversation context ready for injection into an LLM prompt.

    Args:
        ctx: ``HonchoContext`` holding the user, session, and assistant IDs.
        tokens: Maximum number of tokens to include. Defaults to ``2000``.

    Returns:
        A list of message dicts: ``[{"role": "user" | "assistant", "content": "..."}]``.
        Returns an empty list if the session has no messages yet.
    """
    honcho = get_client()
    user_peer = honcho.peer(ctx.user_id)
    assistant_peer = honcho.peer(ctx.assistant_id)
    session = honcho.session(ctx.session_id)

    session.add_peers([user_peer, assistant_peer])

    context = session.context(tokens=tokens)
    return context.to_openai(assistant=ctx.assistant_id)