chore(release): release version 1.34.2 (patch) #9237
goose Release Manual Testing Checklist
Version: 1.34.2
Identify the high-risk changes in this release. It will generate an analysis report in Regression Testing.
Make a copy of this document for each version and check off steps as they are verified.

Provider Testing
Starting Conversations
Test various ways to start a conversation:
Recipes
Create Recipe from Session
Use Existing Recipe
Recipe Management
Recipe from File
```yaml
recipe:
  title: test recipe again
  description: testing recipe again
  instructions: The value of test_param is {{test_param}}
  prompt: What is the value of test_param?
  parameters:
    - key: test_param
      input_type: string
      requirement: required
      description: Enter value for test_param
```
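The recipe above declares a required string parameter, `test_param`, which is substituted into the `instructions` and `prompt` fields wherever `{{test_param}}` appears. A minimal sketch of that substitution step (a hypothetical helper, not goose's actual templating engine) looks like this:

```python
# Hypothetical sketch of recipe parameter substitution, NOT goose's real
# implementation: each {{key}} placeholder is replaced with the value
# supplied for that parameter when the recipe is run.
def render_recipe_field(template: str, params: dict[str, str]) -> str:
    out = template
    for key, value in params.items():
        out = out.replace("{{" + key + "}}", value)
    return out

instructions = "The value of test_param is {{test_param}}"
print(render_recipe_field(instructions, {"test_param": "hello"}))
# prints: The value of test_param is hello
```

If substitution works, the model's answer to the recipe's prompt should echo back the value supplied for `test_param`.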
Extensions
Manual Extension Addition
Playwright Extension
Extension with Environment Variables
Speech-to-Text (Local Model)
Settings
Follow-up Issues
Link any GitHub issues filed during testing:

Tested by: _____
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: a76f37f4f2
ℹ️ About Codex in GitHub
Codex has been enabled to automatically review pull requests in this repo. Reviews are triggered when you:
- Open a pull request for review
- Mark a draft as ready
- Comment "@codex review".
If Codex has suggestions, it will comment; otherwise it will react with 👍.
When you sign up for Codex through ChatGPT, Codex can also answer questions or update the PR, like "@codex address that feedback".
```json
"reasoning": false,
"tool_call": false,
```
Restore GPT-5.1 canonical capabilities
For nano-gpt/openai/gpt-5.1, this release flips reasoning and tool_call to false and also changes the nearby limits to 1,000,000/32,768, but the canonical OpenAI GPT-5.1 entry and OpenAI's model docs list reasoning/tool support with a 400,000 context window and 128,000 max output tokens (https://platform.openai.com/docs/models/gpt-5.1/). These fields feed provider templates directly via get_provider_template, and the limits are consumed by ModelConfig::with_canonical_limits, so NanoGPT users selecting this catalog model will see incorrect capabilities and be capped to the wrong output budget.
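Based on the canonical entry the review cites, the restored catalog fields for `nano-gpt/openai/gpt-5.1` would look roughly like this. The `reasoning` and `tool_call` keys appear in the diff above; the limit field names are assumptions about the catalog schema, and the values come from OpenAI's GPT-5.1 model docs as quoted in the review:

```json
{
  "reasoning": true,
  "tool_call": true,
  "context_limit": 400000,
  "max_output_tokens": 128000
}
```

With these values, `get_provider_template` would advertise the correct capabilities and `ModelConfig::with_canonical_limits` would cap output at the documented budget rather than 32,768.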
Release v1.34.2
Test before Release
GITHUB_TOKEN (docs).

How to Release
Push the release tag to trigger the release:
```shell
git fetch && git tag v1.34.2 origin/release/1.34.2
git push origin v1.34.2
```

The tag push will trigger the release build. This PR will be automatically closed.
Cherry-Picks
If you need to include additional fixes, cherry-pick them into the release/1.34.2 branch before tagging.

Important Notes
main

Changes in This Release
Comparing: v1.34.1...v1.34.2

This release PR was generated automatically.