
fix: strip orphaned fc_ IDs from function_calls when reasoning is pruned from context#12113

Open
octo-patch wants to merge 2 commits into continuedev:main from
octo-patch:fix/issue-12056-strip-orphaned-fc-ids-after-context-compaction

Conversation


@octo-patch octo-patch commented Apr 12, 2026

Fixes #12056

Problem

When compileChatMessages prunes a thinking message from the conversation history due to context window overflow, the associated assistant message still carries fc_* IDs in its metadata (referencing the now-absent rs_* reasoning item).

The OpenAI Responses API then rejects the next request with a 400 error:

Item 'fc_...' of type 'function_call' was provided without its
required 'reasoning' item: 'rs_...'

This affects GPT-5, o3, and other reasoning models that use the Responses API when the conversation history grows long enough to trigger context compaction.

Root Cause

sanitizeResponsesInput already handles the case where a reasoning item exists in the input but lacks encrypted_content — it removes the reasoning and strips fc_* IDs from subsequent function_call items.

However, it did not handle the case where the reasoning item was never added to the input at all (because compileChatMessages removed the thinking message before the messages reached toResponsesInput). In this case, the function_call items still had their fc_* IDs, causing the API rejection.
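To make the failure mode concrete, here is a hypothetical sketch of what the Responses API input looks like after compileChatMessages has pruned the thinking message (the item shapes, IDs, and tool names are made up for illustration and are not Continue's actual types). The rs_* reasoning item is gone, but the function_call still carries the fc_* ID that points at it, which is what triggers the 400:

```typescript
// Simplified item shapes for illustration only.
type Item =
  | { type: "message"; role: string; content: string }
  | { type: "reasoning"; id: string; encrypted_content?: string }
  | { type: "function_call"; id?: string; call_id: string; name: string };

const inputAfterCompaction: Item[] = [
  { type: "message", role: "user", content: "Find the bug in utils.ts" },
  // { type: "reasoning", id: "rs_abc", encrypted_content: "..." } <- pruned
  { type: "function_call", id: "fc_abc", call_id: "call_1", name: "read_file" },
];

// The orphan condition: an fc_* ID with no reasoning item left in the input.
const hasReasoning = inputAfterCompaction.some((i) => i.type === "reasoning");
const orphanedIds = inputAfterCompaction.flatMap((i) =>
  i.type === "function_call" && !hasReasoning && i.id ? [i.id] : [],
);
```

Because the reasoning item never reached toResponsesInput, the first-pass check (which looks for reasoning items lacking encrypted_content) has nothing to match against, and `fc_abc` survives into the request.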

Solution

Added a second pass in sanitizeResponsesInput that scans backward from each function_call with an fc_* ID to check whether a kept reasoning item exists in the same turn block. If no reasoning is found (either it was removed in the first pass or was never in the input), the fc_* ID is stripped from the function_call so the API does not look for the missing reasoning item.

The backward scan correctly:

  • Skips over other function_call items in the same block (parallel tool calls)
  • Skips over already-removed items from the first pass
  • Stops at any non-reasoning/non-function_call item (turn boundary)
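The backward scan can be sketched roughly as follows. This is a simplified standalone version, not the actual patch: the item types are invented for illustration, and unlike the real second pass it assumes a single array with no items flagged as removed by the first pass.

```typescript
// Simplified item shapes for illustration only.
type ResponsesItem =
  | { type: "message"; role: string; content: string }
  | { type: "reasoning"; id: string }
  | { type: "function_call"; id?: string; name: string };

function stripOrphanedFcIds(items: ResponsesItem[]): ResponsesItem[] {
  return items.map((item, idx) => {
    if (item.type !== "function_call" || !item.id?.startsWith("fc_")) {
      return item;
    }
    // Walk backward through the same turn block.
    for (let i = idx - 1; i >= 0; i--) {
      const prev = items[i];
      if (prev.type === "reasoning") {
        return item; // reasoning was kept, so the fc_* ID is valid
      }
      if (prev.type === "function_call") {
        continue; // parallel tool call in the same block, keep scanning
      }
      break; // any other item type marks the turn boundary
    }
    // No reasoning found in this block: drop the orphaned fc_* ID.
    const { id, ...rest } = item;
    return rest;
  });
}
```

The `continue` branch is what makes parallel tool calls work: a block may contain one reasoning item followed by several function_calls, and each of them should resolve back to that single reasoning item.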

Testing

Added test cases covering:

  • Single pruned function_call with orphaned fc_* ID
  • Multiple pruned function_calls with orphaned fc_* IDs
  • Sanity check that valid fc_* IDs (with preceding reasoning) are preserved

Summary by cubic

Prevents Responses API 400s by stripping orphaned fc_* IDs after context compaction, and fixes slash command filtering in edit mode by returning slashCommandSource.

  • Bug Fixes

    • Add a second pass in sanitizeResponsesInput to remove fc_* IDs from function_call items when no kept reasoning appears earlier in the same turn; the backward scan skips parallel calls and removed items, and stops at turn boundaries.
    • Rename source to slashCommandSource in selectSlashCommandComboBoxInputs so prompt files show up in edit mode.
  • Tests

    • Add cases for single/multiple orphaned fc_* IDs and a sanity check preserving valid IDs.

Written for commit 4daceca. Summary will update on new commits.

octo-patch added 2 commits April 11, 2026 11:40
…le prompt files in edit mode (fixes continuedev#12087)

The selector `selectSlashCommandComboBoxInputs` was returning the slash
command source as `source`, but `ContinueInputBox` filters slash commands
in edit mode by checking `cmd.slashCommandSource`. This property name
mismatch caused all slash commands (including prompt files) to be filtered
out in edit mode, since `cmd.slashCommandSource` was always `undefined`.

Rename `source` to `slashCommandSource` in the selector return value to
match the `ComboBoxItem` type definition and the edit-mode filter logic.
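The mismatch described above can be sketched like this (simplified, hypothetical types; the real selector and filter live in Continue's GUI code, and `toComboBoxItem` is an invented helper standing in for the selector's mapping step):

```typescript
interface ComboBoxItem {
  title: string;
  slashCommandSource?: string; // the key the edit-mode filter actually checks
}

// Before the fix the selector returned { title: cmd.name, source: cmd.source },
// leaving slashCommandSource undefined. After the fix, the origin is returned
// under the key the filter expects:
function toComboBoxItem(cmd: { name: string; source: string }): ComboBoxItem {
  return { title: cmd.name, slashCommandSource: cmd.source };
}

// Simplified stand-in for the edit-mode filter: keep prompt-file commands.
const items = [toComboBoxItem({ name: "review", source: "prompt-file" })];
const visibleInEditMode = items.filter(
  (i) => i.slashCommandSource === "prompt-file",
);
```

With the old key name, `i.slashCommandSource` was always `undefined`, so the filter rejected every command, including prompt files.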
…ion (fixes continuedev#12056)

When compileChatMessages prunes a thinking message due to context overflow,
the associated assistant message still carries fc_* IDs that reference the
now-absent reasoning item (rs_*). The OpenAI Responses API then rejects the
request with a 400 error:

  "Item 'fc_...' of type 'function_call' was provided without its
   required 'reasoning' item: 'rs_...'"

Fix: add a second pass in sanitizeResponsesInput that scans backward from
each function_call with an fc_* ID to check whether a kept reasoning item
exists in the same turn block. If no reasoning is found, the fc_* ID is
stripped from the function_call so the API does not look for the missing
reasoning item.

Also adds test cases covering the pruned-reasoning scenario.
@octo-patch octo-patch requested a review from a team as a code owner April 12, 2026 03:39
@octo-patch octo-patch requested review from sestinj and removed request for a team April 12, 2026 03:40
@dosubot dosubot bot added the size:M This PR changes 30-99 lines, ignoring generated files. label Apr 12, 2026
@github-actions


Thank you for your submission, we really appreciate it. Like many open-source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution. You can sign the CLA by posting a Pull Request comment in the format below.


I have read the CLA Document and I hereby sign the CLA


octo-patch seems not to be a GitHub user. You need a GitHub account to be able to sign the CLA. If you already have a GitHub account, please add the email address used for this commit to your account.
You can retrigger this bot by commenting recheck in this Pull Request. Posted by the CLA Assistant Lite bot.


@cubic-dev-ai cubic-dev-ai bot left a comment


No issues found across 3 files


Labels

size:M This PR changes 30-99 lines, ignoring generated files.

Projects

Status: Todo

Development

Successfully merging this pull request may close these issues.

Error: GPT-5 - 400

1 participant