This document captures the canonical response contract that every Foundry MCP operation must follow. It complements the completed specification specs/completed/response-schema-standardization-2025-11-26-001.json, the shared helpers in src/foundry_mcp/core/responses.py, and the best-practice guidance in dev_docs/mcp_best_practices.
All tool responses must serialize to the following structure ("response-v2"):
```json
{
  "success": true,
  "data": { ... },
  "error": null,
  "meta": {
    "version": "response-v2",
    "request_id": "req_abc123"?,
    "warnings": ["..."]?,
    "warning_details": [{ "code": "...", "severity": "...", "message": "..." }]?,
    "pagination": { ... }?,
    "rate_limit": { ... }?,
    "telemetry": { ... }?,
    "content_fidelity": "full" | "partial" | "summary" | "reference_only"?,
    "content_fidelity_schema_version": "1.0"?,
    "dropped_content_ids": ["..."]?,
    "content_archive_hashes": { ... }?
  }
}
```

- `success` (bool): Indicates whether the operation completed successfully.
- `data` (object): Operation payload. When no payload exists, send `{}`. For errors, may contain structured error context (see Error Response Fields).
- `error` (string | null): Populated only when `success` is `false`. Human-readable error description.
- `meta` (object): Always include `{"version": "response-v2"}` and attach optional metadata using the reserved keys listed below.
| Key | Required | Description |
|---|---|---|
| `version` | YES | Identifies the response schema version (`response-v2`). |
| `request_id` | SHOULD | Correlation identifier propagated through logs/traces. |
| `warnings` | SHOULD | Non-fatal issues for successful operations (array of strings). |
| `pagination` | MAY | Cursor-based pagination object containing `cursor`, `has_more`, `total_count`, etc. |
| `rate_limit` | MAY | Remaining quota, reset time, and retry hints when throttling occurs. |
| `telemetry` | MAY | Timing/performance metrics such as `duration_ms` or downstream call counts. |
| `content_fidelity` | MAY | Content fidelity level indicating completeness of the response (see Content Fidelity Metadata). |
| `content_fidelity_schema_version` | MAY | Schema version for content fidelity metadata (e.g., `"1.0"`). |
| `dropped_content_ids` | MAY | Array of content identifiers that were dropped due to size constraints. |
| `content_archive_hashes` | MAY | Object mapping archive identifiers to content hashes for retrieval. |
| `warning_details` | MAY | Structured warning objects with severity and context (see Warning Details). |
Do not invent new top-level keys under `data` to convey metadata. Attach operational context through `meta` so every tool shares the same envelope semantics.
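The envelope contract above can be exercised with a small consumer-side check. This is a minimal sketch assuming responses arrive as parsed dicts; `check_envelope` is an illustrative helper, not part of `foundry_mcp.core.responses`:

```python
# Hypothetical consumer-side envelope check; not a foundry_mcp API.
REQUIRED_KEYS = {"success", "data", "error", "meta"}


def check_envelope(resp: dict) -> list[str]:
    """Return a list of envelope violations (empty when compliant)."""
    problems = [f"missing key: {k}" for k in sorted(REQUIRED_KEYS - resp.keys())]
    if resp.get("meta", {}).get("version") != "response-v2":
        problems.append("meta.version must be 'response-v2'")
    if resp.get("success") and resp.get("error") is not None:
        problems.append("error must be null when success is true")
    return problems
```

A compliant response yields an empty list; anything else pinpoints which envelope rule was broken.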
| Scenario | Required Behavior |
|---|---|
| Empty but successful query | success: true, include empty arrays/counts in data, error: null. |
| Missing resource / invalid input | success: false, data contains structured error info, descriptive error string. |
| Blocked or partial work | success: true, describe state inside data, add meta.warnings if applicable. |
| Multi-payload operations | Nest each payload under a named key inside data (e.g., { "spec": {...}, "tasks": [...] }). |
When responses may be truncated, summarized, or have content dropped due to token limits or size constraints, include content fidelity metadata in meta to inform consumers about response completeness.
```json
{
  "meta": {
    "version": "response-v2",
    "content_fidelity_schema_version": "1.0",
    "content_fidelity": "full" | "partial" | "summary" | "reference_only",
    "dropped_content_ids": ["finding-003", "source-015"],
    "content_archive_hashes": {
      "archive-001": "sha256:abc123..."
    }
  }
}
```

| Field | Type | Required | Description |
|---|---|---|---|
| `content_fidelity_schema_version` | string | SHOULD (when fidelity < full) | Schema version for content fidelity metadata. Current version: `"1.0"`. |
| `content_fidelity` | string | SHOULD (when fidelity < full) | Level of content completeness in the response. |
| `dropped_content_ids` | array<string> | MAY | Identifiers of content items that were omitted. Enables targeted retrieval. |
| `content_archive_hashes` | object | MAY | Map of archive IDs to content hashes for retrieving dropped content. |
| Level | Description | Use Case |
|---|---|---|
| `full` | Complete response with all content included | Default when no truncation occurs |
| `partial` | Some content omitted but structure preserved | Large responses exceeding soft limits |
| `summary` | Condensed representation of full content | Token-constrained contexts |
| `reference_only` | Only identifiers/references, no content bodies | Extreme token constraints |
Response with partial fidelity due to dropped findings:
```json
{
  "success": true,
  "data": {
    "research_id": "research-001",
    "findings": [
      {"id": "finding-001", "title": "Primary result", "content": "..."},
      {"id": "finding-002", "title": "Secondary result", "content": "..."}
    ],
    "total_findings": 5
  },
  "error": null,
  "meta": {
    "version": "response-v2",
    "content_fidelity_schema_version": "1.0",
    "content_fidelity": "partial",
    "dropped_content_ids": ["finding-003", "finding-004", "finding-005"],
    "content_archive_hashes": {
      "findings-archive": "sha256:e3b0c44298fc1c149afbf4c8996fb924..."
    },
    "warnings": ["3 findings omitted due to token limits"]
  }
}
```

When `dropped_content_ids` is present, consumers can retrieve omitted content:
- Check `dropped_content_ids` for missing item identifiers
- Use `content_archive_hashes` to verify archive availability
- Call the appropriate retrieval endpoint with the archive hash or content IDs
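The steps above can be sketched as follows; `fetch_archived_content` stands in for whatever retrieval endpoint your deployment exposes and is purely hypothetical:

```python
# Sketch of the retrieval flow. fetch_archived_content is a hypothetical
# callable (archive_id, content_ids) -> content; substitute your own.
def collect_dropped(meta: dict, fetch_archived_content=None) -> dict:
    """Build a retrieval plan for content omitted from a response."""
    dropped = meta.get("dropped_content_ids", [])
    archives = meta.get("content_archive_hashes", {})
    plan = {"dropped": dropped, "archives": archives}
    if fetch_archived_content is not None and dropped:
        plan["retrieved"] = {
            archive_id: fetch_archived_content(archive_id, dropped)
            for archive_id in archives
        }
    return plan
```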
For structured warnings with severity and context beyond simple string messages, use `warning_details` alongside or instead of the `warnings` array.
```json
{
  "meta": {
    "version": "response-v2",
    "warnings": ["3 findings omitted due to token limits"],
    "warning_details": [
      {
        "code": "CONTENT_TRUNCATED",
        "severity": "info",
        "message": "3 findings omitted due to token limits",
        "context": {
          "dropped_count": 3,
          "total_count": 5,
          "reason": "token_limit_exceeded"
        }
      }
    ]
  }
}
```

| Field | Type | Required | Description |
|---|---|---|---|
| `code` | string | SHOULD | Machine-readable warning classification (e.g., `CONTENT_TRUNCATED`, `STALE_CACHE`). |
| `severity` | string | SHOULD | Warning severity level: `info`, `warning`, `error`. |
| `message` | string | YES | Human-readable warning description. |
| `context` | object | MAY | Additional context specific to the warning type. |
| Severity | Description | Consumer Action |
|---|---|---|
| `info` | Informational, no action needed | Log/display as appropriate |
| `warning` | Potential issue, consider action | Evaluate context and decide |
| `error` | Significant issue, action recommended | Address before proceeding |
| Code | Severity | Description |
|---|---|---|
| `CONTENT_TRUNCATED` | info | Response content was truncated due to size limits |
| `STALE_CACHE` | warning | Cached data may be outdated |
| `PARTIAL_FAILURE` | warning | Some sub-operations failed but overall succeeded |
| `DEPRECATED_FIELD` | info | Response includes deprecated fields |
| `RATE_LIMIT_APPROACHING` | warning | Approaching rate limit threshold |
| `FALLBACK_USED` | info | Primary source unavailable, fallback used |
```json
{
  "success": true,
  "data": {
    "results": [...]
  },
  "error": null,
  "meta": {
    "version": "response-v2",
    "warnings": [
      "3 sources failed to respond",
      "Cache data is 2 hours old"
    ],
    "warning_details": [
      {
        "code": "PARTIAL_FAILURE",
        "severity": "warning",
        "message": "3 sources failed to respond",
        "context": {
          "failed_sources": ["source-a", "source-b", "source-c"],
          "successful_sources": 7,
          "total_sources": 10
        }
      },
      {
        "code": "STALE_CACHE",
        "severity": "warning",
        "message": "Cache data is 2 hours old",
        "context": {
          "cache_age_seconds": 7200,
          "max_freshness_seconds": 3600
        }
      }
    ]
  }
}
```

When `success` is `false`, the `data` object should contain structured error context to enable machine-readable error handling. The `error_response` helper automatically populates these fields.
| Field | Required | Type | Description |
|---|---|---|---|
| `error_code` | SHOULD | string | Machine-readable error classification (e.g., `VALIDATION_ERROR`, `NOT_FOUND`, `RATE_LIMIT_EXCEEDED`). Use SCREAMING_SNAKE_CASE. |
| `error_type` | SHOULD | string | Error category for routing/handling (e.g., `validation`, `authorization`, `not_found`, `internal`). |
| `remediation` | SHOULD | string | Actionable guidance for resolving the error. |
| `details` | MAY | object | Nested structure with field-specific or context-specific error info. |
| error_type | HTTP Analog | Description | Retry? |
|---|---|---|---|
| `validation` | 400 | Invalid input data | No, fix input |
| `authentication` | 401 | Invalid or missing credentials | No, re-authenticate |
| `authorization` | 403 | Insufficient permissions | No |
| `not_found` | 404 | Requested resource doesn't exist | No |
| `conflict` | 409 | State conflict (e.g., duplicate) | Maybe, check state |
| `rate_limit` | 429 | Too many requests | Yes, after delay |
| `feature_flag` | 403 | Feature not enabled for client | No, check flag status |
| `internal` | 500 | Server-side error | Yes, with backoff |
| `unavailable` | 503 | Service temporarily unavailable | Yes, with backoff |
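The Retry? column translates directly into a consumer-side policy. A minimal sketch, assuming the structured `error_type` field in `data` is populated as described:

```python
# Consumer-side retry decision derived from the error_type table above;
# a sketch, not an official foundry_mcp API.
RETRYABLE_TYPES = {"rate_limit", "internal", "unavailable"}


def should_retry(response: dict) -> bool:
    """Return True when a failed response is worth retrying (with backoff)."""
    if response.get("success"):
        return False  # nothing to retry
    return response.get("data", {}).get("error_type") in RETRYABLE_TYPES
```

For `rate_limit` errors, honor any delay hints in `meta.rate_limit` before retrying; for `conflict`, re-check state first rather than retrying blindly.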
```json
{
  "success": false,
  "data": {
    "error_code": "VALIDATION_ERROR",
    "error_type": "validation",
    "remediation": "Provide a non-empty spec_id parameter",
    "details": {
      "field": "spec_id",
      "constraint": "required",
      "received": null
    }
  },
  "error": "Validation failed: spec_id is required",
  "meta": {
    "version": "response-v2",
    "request_id": "req_abc123"
  }
}
```

| Error Code | error_type | When to Use |
|---|---|---|
| `VALIDATION_ERROR` | validation | Generic input validation failure |
| `INVALID_FORMAT` | validation | Malformed input (wrong type, bad JSON) |
| `MISSING_REQUIRED` | validation | Required field not provided |
| `NOT_FOUND` | not_found | Resource doesn't exist |
| `SPEC_NOT_FOUND` | not_found | Specification file not found |
| `TASK_NOT_FOUND` | not_found | Task ID not found in spec |
| `DUPLICATE_ENTRY` | conflict | Resource already exists |
| `CONFLICT` | conflict | State conflict or invalid transition |
| `UNAUTHORIZED` | authentication | Invalid or missing credentials |
| `FORBIDDEN` | authorization | Insufficient permissions |
| `FEATURE_DISABLED` | feature_flag | Feature flag not enabled |
| `RATE_LIMIT_EXCEEDED` | rate_limit | Too many requests |
| `INTERNAL_ERROR` | internal | Unexpected server error |
| `UNAVAILABLE` | unavailable | Service temporarily unavailable |
Always leverage the shared helpers and dataclasses to produce responses:
```python
from dataclasses import asdict

from foundry_mcp.core.responses import success_response, error_response


@mcp.tool()
def tool_example(...) -> dict:
    payload = compute_payload(...)
    return asdict(success_response(
        data={"result": payload},
        warnings=payload.warnings,
        pagination=payload.pagination,
        request_id=context.request_id,
    ))
```

For failures, use `error_response()` to include machine-readable context:
```python
return asdict(error_response(
    message="Validation failed: spec_id is required",
    error_code="MISSING_REQUIRED",
    error_type="validation",
    remediation="Provide a non-empty spec_id parameter",
    details={"field": "spec_id", "constraint": "required"},
    request_id=context.request_id,
))
```

`error_response()` parameters:
| Parameter | Required | Description |
|---|---|---|
| `message` | YES | Human-readable error description (populates `error` field) |
| `error_code` | SHOULD | Machine-readable code (e.g., `VALIDATION_ERROR`) |
| `error_type` | SHOULD | Error category (e.g., `validation`, `not_found`) |
| `remediation` | SHOULD | Actionable guidance for resolving the error |
| `data` | MAY | Additional context to merge into `data` object |
| `details` | MAY | Nested error details (field info, constraints) |
| `request_id` | SHOULD | Correlation ID for tracing |
| `rate_limit` | MAY | Rate limit state when applicable |
| `telemetry` | MAY | Timing/performance data captured before failure |
| `meta` | MAY | Additional metadata to merge into `meta` object |
Example: Not Found Error
```python
return asdict(error_response(
    message=f"Spec '{spec_id}' not found",
    error_code="SPEC_NOT_FOUND",
    error_type="not_found",
    remediation="Verify the spec ID exists using spec(action=\"list\")",
    request_id=context.request_id,
))
```

Example: Feature Flag Disabled
```python
return asdict(error_response(
    message=f"Feature '{flag_name}' is not enabled",
    error_code="FEATURE_DISABLED",
    error_type="feature_flag",
    data={"feature": flag_name},
    remediation="Contact support to enable this feature or check feature flag configuration",
))
```

Example: Rate Limit Exceeded
```python
return asdict(error_response(
    message="Rate limit exceeded: 100 requests per minute",
    error_code="RATE_LIMIT_EXCEEDED",
    error_type="rate_limit",
    data={"retry_after_seconds": 45},
    remediation="Wait 45 seconds before retrying. Consider batching requests.",
    rate_limit={"limit": 100, "remaining": 0, "reset_at": reset_timestamp},
))
```

These helpers guarantee `meta.version` is present and prevent ad-hoc response shapes. Avoid constructing dicts manually.
- Import the helpers (`success_response` / `error_response`) in every tool module.
- Return `asdict(...)` so dataclasses serialize with the standardized keys.
- Keep `data` payloads business-focused; put operational context in `meta`.
- Document any additional `meta` semantics (new pagination fields, telemetry) in tool specs.
- Record deviations or streaming quirks in specs to prevent regressions.
- Unit enforcement lives in `tests/test_responses.py`; extend it when updating helpers.
- Integration tests such as `tests/integration/test_mcp_tools.py` should assert the envelope for new tools.
- Fixtures and parity harnesses must verify `meta.version == "response-v2"` and any declared metadata keys (`warnings`, `pagination`, etc.).
- The `response_contract_v2` feature flag governs client opt-in. During migrations, continue returning v1 only when explicitly required and document timelines in the spec.
- Feature-flag lifecycles must follow dev_docs/mcp_best_practices/14-feature-flags.md, and metadata such as rate limits should align with dev_docs/mcp_best_practices/02-envelopes-metadata.md.
- Telemetry counters in `foundry_mcp/server.py` rely on consistent envelopes; avoid bypassing the helpers or mutating the serialized dict afterward.
The DigestPayload schema defines the structure for compressed document content in deep research workflows. When a source is digested, its content field contains a JSON-serialized DigestPayload.
Detect digested sources via the content type:
```python
if source.content_type == "digest/v1":
    payload = DigestPayload.from_json(source.content)
```

Example payload:

```json
{
  "version": "1.0",
  "content_type": "digest/v1",
  "query_hash": "ab12cd34",
  "summary": "Condensed summary of the source content...",
  "key_points": [
    "First key insight extracted from the document",
    "Second key insight with supporting detail"
  ],
  "evidence_snippets": [
    {
      "text": "Exact quote from the source document...",
      "locator": "char:1500-1650",
      "relevance_score": 0.85
    }
  ],
  "original_chars": 25000,
  "digest_chars": 2500,
  "compression_ratio": 0.10,
  "source_text_hash": "sha256:abc123def456..."
}
```

| Field | Type | Required | Constraints | Description |
|---|---|---|---|---|
| `version` | string | YES | Default: `"1.0"` | Schema version identifier |
| `content_type` | string | YES | Default: `"digest/v1"` | Self-describing type for detection |
| `query_hash` | string | YES | Exactly 8 lowercase hex chars, pattern `^[a-f0-9]{8}$` | Hash of the research query for cache keying |
| `summary` | string | YES | Max 2000 chars | Condensed summary of source content |
| `key_points` | array<string> | YES | Max 10 items, each max 500 chars | Extracted key insights |
| `evidence_snippets` | array<EvidenceSnippet> | YES | Max 10 items | Query-relevant excerpts with locators |
| `original_chars` | int | YES | ≥0 | Character count of original source |
| `digest_chars` | int | YES | ≥0 | Character count of digest output |
| `compression_ratio` | float | YES | 0.0 to 1.0 | Ratio of `digest_chars` to `original_chars` |
| `source_text_hash` | string | YES | Pattern `^sha256:[a-f0-9]{64}$` | SHA256 hash of canonical text |
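The `compression_ratio` field is simply `digest_chars` divided by `original_chars` (2500 / 25000 = 0.10 in the example payload above). A sketch, with an assumed guard for empty sources (the schema only requires the result to lie in 0.0–1.0):

```python
# compression_ratio = digest_chars / original_chars; the zero-guard for
# empty sources is an assumption, not mandated by the schema.
def compression_ratio(original_chars: int, digest_chars: int) -> float:
    """Fraction of the original text retained by the digest."""
    if original_chars <= 0:
        return 0.0
    return digest_chars / original_chars
```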
```json
{
  "text": "Exact substring from the canonical source text...",
  "locator": "char:1500-1650",
  "relevance_score": 0.85
}
```

| Field | Type | Required | Constraints | Description |
|---|---|---|---|---|
| `text` | string | YES | Max 500 chars | Exact substring from canonical text |
| `locator` | string | YES | See locator formats below | Position reference for citation |
| `relevance_score` | float | YES | 0.0 to 1.0 | Query relevance score |
Locators reference positions in the canonical (normalized) source text:
| Format | Example | Description |
|---|---|---|
| Text/HTML | `char:1500-1800` | Characters 1500-1799 (exclusive end) |
| PDF | `page:3:char:200-450` | Page 3, characters 200-449 |
Locator Semantics:
- Start and end positions are 0-based character indices
- End boundary is exclusive (Python slice convention)
- Page numbers are 1-based (human-readable)
- Offsets reference canonical text (post-normalization)
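The semantics above can be captured in a small parser for the two documented locator formats (illustrative only; extend the pattern if new formats are added):

```python
import re

# Matches "char:START-END" with an optional "page:N:" prefix, per the
# locator formats documented above.
_LOCATOR = re.compile(r"^(?:page:(\d+):)?char:(\d+)-(\d+)$")


def parse_locator(locator: str) -> dict:
    """Split a locator into optional 1-based page and 0-based char span."""
    m = _LOCATOR.match(locator)
    if not m:
        raise ValueError(f"unrecognized locator: {locator!r}")
    page, start, end = m.groups()
    return {"page": int(page) if page else None,
            "start": int(start), "end": int(end)}
```

The returned `start`/`end` can be applied directly as a Python slice over the canonical text, matching the verification check below.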
Verification:
```python
# Locators can be verified against archived content
canonical_text[start:end] == snippet.text
```

When processing sources that may contain digests:
- Detect via `source.content_type == "digest/v1"`
- Parse `source.content` as JSON, validate against schema
- Skip further summarization (content is already compressed)
- Use `evidence_snippets` for citations with locators
- Use `digest_chars` for token budget estimation (not `original_chars`)
```python
from foundry_mcp.core.research.models import DigestPayload


def process_source(source):
    if source.content_type == "digest/v1":
        # Parse digest payload
        payload = DigestPayload.from_json(source.content)

        # Use summary for context (already compressed)
        context = payload.summary

        # Use key points for highlights
        for point in payload.key_points:
            print(f"• {point}")

        # Use evidence snippets for citations
        for ev in payload.evidence_snippets:
            print(f'"{ev.text}" [{ev.locator}]')

        # Token estimation uses digest size
        estimated_tokens = payload.digest_chars // 4

        # IMPORTANT: Do NOT re-summarize digested content
        return context
    else:
        # Process raw content normally
        return source.content
```

Use the provided helpers for consistent serialization:
```python
from foundry_mcp.core.research.document_digest import (
    serialize_payload,
    deserialize_payload,
    validate_payload_dict,
)

# Serialize to JSON string
json_str = serialize_payload(payload)

# Deserialize from JSON string
payload = deserialize_payload(json_str)

# Validate dict (e.g., from YAML or manual construction)
payload = validate_payload_dict(data_dict)
```

Invalid payloads raise `pydantic.ValidationError` with descriptive messages:
| Error | Cause |
|---|---|
| `query_hash: String should match pattern '^[a-f0-9]{8}$'` | Invalid query hash format |
| `summary: String should have at most 2000 characters` | Summary too long |
| `key_points[N]: exceeds maximum length of 500 characters` | Key point too long |
| `relevance_score: Input should be less than or equal to 1` | Score out of range |
| `source_text_hash: String should match pattern '^sha256:[a-f0-9]{64}$'` | Invalid hash format |
- Deep Research Guide: dev_docs/guides/deep-research.md
- Configuration Reference: dev_docs/configuration.md
- Models: `src/foundry_mcp/core/research/models.py`
- Spec: `response-schema-standardization-2025-11-26-001`
- Helpers: `src/foundry_mcp/core/responses.py`
- Testing/fixtures: dev_docs/mcp_best_practices/10-testing-fixtures.md
- Envelopes & metadata guidance: dev_docs/mcp_best_practices/02-envelopes-metadata.md