fix(beta): persist OpenAI server-side builtin tool calls in history #2712
Closed
vvlrff wants to merge 2 commits into ag2ai:main from
Conversation
This PR can be considered after #2695.
Problem
The OpenAI Responses API requires that reasoning items be paired with their server-side tool calls (web_search, code_interpreter, image_generation) when history is replayed back to the model. Otherwise the API rejects the request.
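The pairing requirement can be illustrated with a sketch of the input list a follow-up request must contain. The item shapes below are simplified illustrations, not exact SDK payloads:

```python
# Illustrative shape of replayed history for a follow-up Responses API
# request: the reasoning item is immediately followed by the server-side
# tool call it produced (shapes simplified; not exact SDK payloads).
replayed_input = [
    {"type": "reasoning", "id": "rs_1",
     "summary": [{"type": "summary_text", "text": "Need to search the web."}]},
    {"type": "web_search_call", "id": "ws_1", "status": "completed"},
    {"type": "message", "role": "assistant",
     "content": "Here is what I found."},
]
print([item["type"] for item in replayed_input])
```

If the web_search_call item is dropped while its reasoning item is kept, the replayed request is rejected.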
Current behaviour was broken for replay:
- Builtin tool calls were stored as generic BuiltinToolCallEvent/BuiltinToolResultEvent — with no reference to the original SDK object. When rebuilding history for a follow-up request there was nothing to re-emit.
- Reasoning was stored as ModelReasoning(text) — also with no SDK object attached.
- Streaming captured items on ResponseOutputItemAddedEvent, when the SDK object was not yet fully populated (e.g. outputs missing on code_interpreter).

Net effect: a conversation that used any builtin server-side tool could not be fed back to the Responses API in a subsequent turn.
Solution
New provider-specific events
File: autogen/beta/config/openai/events.py
- OpenAIServerToolCallEvent extends BuiltinToolCallEvent and adds item: ResponseFunctionWebSearch | ResponseCodeInterpreterToolCall | ImageGenerationCall.
- OpenAIServerToolResultEvent extends BuiltinToolResultEvent.
- OpenAIReasoningEvent extends ModelReasoning and adds item: ResponseReasoningItem.

Each event carries the original SDK object (item) so the mapper can replay it verbatim.

History replay
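A minimal sketch of what these subclasses could look like. The base classes and SDK item types are stand-ins here (the real ones live in autogen.beta and the openai package); only the field names come from the description:

```python
from dataclasses import dataclass
from typing import Any

# Stand-ins for the generic base events; in the real code these come from
# autogen.beta, and the item types come from the openai SDK (assumption).
@dataclass
class BuiltinToolCallEvent:
    tool_name: str

@dataclass
class BuiltinToolResultEvent:
    tool_name: str

@dataclass
class ModelReasoning:
    text: str

# Provider-specific subclasses that additionally carry the raw SDK object
# (item), so history replay can re-emit it verbatim.
@dataclass
class OpenAIServerToolCallEvent(BuiltinToolCallEvent):
    # Real type: ResponseFunctionWebSearch | ResponseCodeInterpreterToolCall | ImageGenerationCall
    item: Any = None

@dataclass
class OpenAIServerToolResultEvent(BuiltinToolResultEvent):
    pass

@dataclass
class OpenAIReasoningEvent(ModelReasoning):
    # Real type: ResponseReasoningItem
    item: Any = None

call = OpenAIServerToolCallEvent(tool_name="web_search",
                                 item={"type": "web_search_call", "id": "ws_1"})
print(isinstance(call, BuiltinToolCallEvent), call.item["type"])  # → True web_search_call
```

Because the subclasses only add an item field, existing consumers of the generic events keep working unchanged.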
File: autogen/beta/config/openai/mappers.py
New branch in events_to_responses_input: OpenAIServerToolCallEvent and OpenAIReasoningEvent replay their stored item; OpenAIServerToolResultEvent is intentionally not re-emitted — the Responses API models a server-side tool as a single combined item, already covered by the paired call event.

Client
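The replay branch could be sketched as follows. The event classes are minimal stand-ins for the ones described above, and the function body is an assumed structure, not the actual mapper code:

```python
from dataclasses import dataclass
from typing import Any, List

# Minimal stand-ins for the provider events (assumed shapes; the real classes
# live in autogen/beta/config/openai/events.py).
@dataclass
class OpenAIServerToolCallEvent:
    item: Any

@dataclass
class OpenAIServerToolResultEvent:
    pass

@dataclass
class OpenAIReasoningEvent:
    text: str
    item: Any = None

def events_to_responses_input(events: List[Any]) -> List[Any]:
    """Rebuild the Responses API input list from stored events (sketch)."""
    input_items = []
    for ev in events:
        if isinstance(ev, (OpenAIReasoningEvent, OpenAIServerToolCallEvent)):
            # Replay the original SDK object verbatim, keeping reasoning
            # items paired with their server-side tool calls.
            input_items.append(ev.item)
        elif isinstance(ev, OpenAIServerToolResultEvent):
            # Intentionally skipped: the API models a server-side tool as
            # one combined item, already covered by the call event.
            continue
    return input_items

history = [OpenAIReasoningEvent(text="plan", item="reasoning_item"),
           OpenAIServerToolCallEvent(item="web_search_item"),
           OpenAIServerToolResultEvent()]
print(events_to_responses_input(history))  # → ['reasoning_item', 'web_search_item']
```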
File: autogen/beta/config/openai/openai_responses_client.py
_process_response (non-streaming):

- web_search / code_interpreter / image_generation now emit OpenAIServerToolCallEvent(item=...) followed by an empty OpenAIServerToolResultEvent.
- Reasoning output emits OpenAIReasoningEvent(text, item=...), where text joins all summary[].text with \n\n.
- ToolResult payloads no longer carry the data — it all lives on item now.

_process_stream (streaming):

- Items are captured on ResponseOutputItemDoneEvent instead of ResponseOutputItemAddedEvent, so the SDK object is fully populated (on Added, code_interpreter.outputs is empty).
- Dropped provider_data on BuiltinToolCallEvent — this info is on item now.
- Removed the now-unused ResponseOutputItemAddedEvent import.
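The summary-joining step above can be sketched in isolation. SummaryPart and ResponseReasoningItem are stand-ins for the SDK types (only the summary[].text shape and the \n\n separator come from the description):

```python
from dataclasses import dataclass, field
from typing import List

# Stand-ins for the SDK reasoning item and its summary entries (assumption;
# the real ResponseReasoningItem comes from the openai package).
@dataclass
class SummaryPart:
    text: str

@dataclass
class ResponseReasoningItem:
    summary: List[SummaryPart] = field(default_factory=list)

def reasoning_text(item: ResponseReasoningItem) -> str:
    # Join all summary[].text chunks with a blank line, as described above.
    return "\n\n".join(part.text for part in item.summary)

item = ResponseReasoningItem(summary=[SummaryPart("First, search the web."),
                                      SummaryPart("Then summarize results.")])
print(repr(reasoning_text(item)))
```

An item with an empty summary simply yields an empty text, so the event can still be emitted with the raw item attached.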