Python: Fix the Responses agent msg chaining with reasoning models #12907
Conversation
nit: Do we want to add a sample showing how to use the Responses agent with a reasoning model?
Good question. A community member is working on adding the reasoning summaries. Once we have those, then yes, we should have a sample that reflects how to configure these summaries and other reasoning-specific config. Right now, the Responses agent with a reasoning model isn't much different from a model like |
Motivation and Context
When running a reasoning model with the OpenAI Responses agent, requests were failing with HTTP 400 errors due to the way we were including (or omitting) function call content (FCC) related to tool calls. This PR fixes the behavior both for the default case (store_enabled=True) and for the case where users don't want to use OpenAI's previous response id as a pseudo-thread (store_enabled=False).
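To illustrate the two modes the fix distinguishes, here is a minimal sketch of the chaining decision. This is not the semantic-kernel implementation; the function and field names (`build_request`, the `"new"` flag) are hypothetical, and only the `store` / `previous_response_id` request fields mirror the real OpenAI Responses API parameters.

```python
# Hypothetical sketch of the two Responses-agent chaining modes.
# build_request and the "new" flag are illustrative, not semantic-kernel APIs.

def build_request(history, previous_response_id, store_enabled):
    """Decide what to send on the next Responses API call.

    store_enabled=True : OpenAI stores the conversation server-side, so we
                         chain via previous_response_id and send only the
                         newest items (including any function-call outputs).
    store_enabled=False: nothing is stored server-side, so each call must
                         resend the full history; dropping or duplicating the
                         function-call items is the kind of mismatch that
                         surfaced as 400 errors.
    """
    if store_enabled and previous_response_id is not None:
        # Only items not yet seen by the server are sent.
        new_items = [m for m in history if m.get("new")]
        return {
            "previous_response_id": previous_response_id,
            "input": new_items,
            "store": True,
        }
    # No server-side thread: replay everything, function calls included.
    return {"input": history, "store": False}
```

With `store_enabled=True` the request carries only the new items plus the chaining id; with `store_enabled=False` the entire history is replayed on every call.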
Description
Contribution Checklist