Python: Introduce the OpenAI Responses Agent #11240
Merged
Changes from all commits (28 commits)
8cdfe5e  responses api wip (moonbox3)
0165a78  wip openai response agent (moonbox3)
ea99c0a  Working streaming now. Vision working. Other samples working. (moonbox3)
f45443a  wip (moonbox3)
5473529  Remove old file (moonbox3)
2fe9409  Fix mypy errors. (moonbox3)
6287f9d  merge main to branch (moonbox3)
f4b22b5  agent wip (moonbox3)
2bcc060  wip (moonbox3)
4fa402f  Merge main to branch (moonbox3)
245ebfb  wip (moonbox3)
5653d26  responses agent updates (moonbox3)
e2367a4  Mypy updates (moonbox3)
a2db8ad  Merge branch 'main' into openai-response-agent (moonbox3)
6a68854  Add support for OpenAI Responses API as an Agent (moonbox3)
833a2a8  Merge branch 'main' into openai-response-agent (moonbox3)
9f1413a  PR feedback (moonbox3)
c6eec9f  Add responses thread actions tests. (moonbox3)
aa2185a  Updates and cleanup (moonbox3)
16274e7  Merge branch 'main' into openai-response-agent (moonbox3)
706e17e  Handle yielding content via callback (moonbox3)
5cb03e1  Merge branch 'openai-response-agent' of github.com:moonbox3/semantic-… (moonbox3)
45f41d4  Add sample output (moonbox3)
ae8e2d9  Update README (moonbox3)
99f2bbb  PR feedback (moonbox3)
0fe4100  Improve non-complete polling so we don't poll forever (moonbox3)
f98b601  Fix mypy errors (moonbox3)
7c7adf6  Merge branch 'main' into openai-response-agent (moonbox3)
python/samples/concepts/agents/openai_responses/responses_agent_file_search_streaming.py (94 additions, 0 deletions)
```python
# Copyright (c) Microsoft. All rights reserved.
import asyncio
import os

from semantic_kernel.agents import OpenAIResponsesAgent
from semantic_kernel.contents.streaming_chat_message_content import StreamingChatMessageContent

"""
The following sample demonstrates how to create an OpenAI Responses Agent.
The sample shows how to have the agent answer questions about the provided
document with streaming responses.

The interaction with the agent is via the `invoke_stream` method, which sends a
user input to the agent and streams the response back. The conversation
history is maintained by the agent service, i.e. the responses are automatically
associated with the thread. Therefore, client code does not need to maintain the
conversation history.
"""


# Simulate a conversation with the agent
USER_INPUTS = [
    "By birthday, who is the youngest employee?",
    "Who works in sales?",
    "I have a customer request, who can help me?",
]


async def main():
    # 1. Create the client using OpenAI resources and configuration
    client, model = OpenAIResponsesAgent.setup_resources()

    pdf_file_path = os.path.join(
        os.path.dirname(os.path.dirname(os.path.realpath(__file__))), "resources", "employees.pdf"
    )

    # Upload the PDF so it can be indexed for file search
    with open(pdf_file_path, "rb") as pdf_file:
        uploaded_file = await client.files.create(file=pdf_file, purpose="assistants")

    vector_store = await client.vector_stores.create(
        name="step4_assistant_file_search",
        file_ids=[uploaded_file.id],
    )

    file_search_tool = OpenAIResponsesAgent.configure_file_search_tool(vector_store.id)

    # 2. Create a Semantic Kernel agent for the OpenAI Responses API
    agent = OpenAIResponsesAgent(
        ai_model_id=model,
        client=client,
        instructions="Find answers to the user's questions in the provided file.",
        name="FileSearch",
        tools=[file_search_tool],
    )

    # 3. Create a thread for the agent
    # If no thread is provided, a new thread will be
    # created and returned with the initial response
    thread = None

    response_chunks: list[StreamingChatMessageContent] = []
    for user_input in USER_INPUTS:
        print(f"# User: '{user_input}'")
        # 4. Invoke the agent for the current message and print the response
        first_chunk = True
        async for response in agent.invoke_stream(messages=user_input, thread=thread):
            thread = response.thread
            response_chunks.append(response)
            if first_chunk:
                print(f"# {response.name}: ", end="", flush=True)
                first_chunk = False
            print(response.content, end="", flush=True)
        print()

    """
    # User: 'By birthday, who is the youngest employee?'
    # Agent: The youngest employee by birthday is Teodor Britton, born on January 9, 1997.
    # User: 'Who works in sales?'
    # Agent: The employees who work in sales are:

    - Mariam Jaslyn, Sales Representative
    - Hicran Bea, Sales Manager
    - Angelino Embla, Sales Representative.
    # User: 'I have a customer request, who can help me?'
    # Agent: For a customer request, you could reach out to the following people in the sales department:

    - Mariam Jaslyn, Sales Representative
    - Hicran Bea, Sales Manager
    - Angelino Embla, Sales Representative.
    """


if __name__ == "__main__":
    asyncio.run(main())
```
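
Note that the sample above uploads a file and creates a vector store but never removes them. A minimal cleanup sketch follows, under the assumption that the OpenAI async client exposes `files.delete` and `vector_stores.delete`; the `cleanup` helper itself is hypothetical and not part of the PR:

```python
# Hypothetical cleanup helper (not part of the PR): release the resources
# the sample creates, assuming the standard OpenAI async client delete methods.
async def cleanup(client, thread, uploaded_file_id: str, vector_store_id: str) -> None:
    if thread:
        await thread.delete()  # delete the service-side conversation thread
    await client.vector_stores.delete(vector_store_id)  # remove the search index
    await client.files.delete(uploaded_file_id)  # remove the uploaded PDF
```

In the sample, a helper like this could run in a `finally` block after the user-input loop, mirroring the thread cleanup in the next sample.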
python/samples/concepts/agents/openai_responses/responses_agent_message_callback.py (130 additions, 0 deletions)
```python
# Copyright (c) Microsoft. All rights reserved.
import asyncio
from typing import Annotated

from semantic_kernel.agents import AzureResponsesAgent
from semantic_kernel.contents import AuthorRole, FunctionCallContent, FunctionResultContent
from semantic_kernel.contents.chat_message_content import ChatMessageContent
from semantic_kernel.functions import kernel_function

"""
The following sample demonstrates how to create an OpenAI
Responses Agent using either Azure OpenAI or OpenAI. The
Responses Agent allows for function calling, file search, and a
web search tool. Responses Agent threads are used to manage the
conversation state, similar to a Semantic Kernel Chat History.
Additionally, the invocation configures a message callback
to receive the conversation messages during invocation.
"""


# Define a sample plugin for the sample
class MenuPlugin:
    """A sample Menu Plugin used for the concept sample."""

    @kernel_function(description="Provides a list of specials from the menu.")
    def get_specials(self) -> Annotated[str, "Returns the specials from the menu."]:
        return """
        Special Soup: Clam Chowder
        Special Salad: Cobb Salad
        Special Drink: Chai Tea
        """

    @kernel_function(description="Provides the price of the requested menu item.")
    def get_item_price(
        self, menu_item: Annotated[str, "The name of the menu item."]
    ) -> Annotated[str, "Returns the price of the menu item."]:
        return "$9.99"


intermediate_steps: list[ChatMessageContent] = []


async def handle_intermediate_steps(message: ChatMessageContent) -> None:
    intermediate_steps.append(message)


async def main():
    # 1. Create the client using Azure OpenAI resources and configuration
    client, model = AzureResponsesAgent.setup_resources()

    # 2. Create a Semantic Kernel agent for the OpenAI Responses API
    agent = AzureResponsesAgent(
        ai_model_id=model,
        client=client,
        name="Host",
        instructions="Answer questions about the menu.",
        plugins=[MenuPlugin()],
    )

    # 3. Create a thread for the agent
    # If no thread is provided, a new thread will be
    # created and returned with the initial response
    thread = None

    user_inputs = ["Hello", "What is the special soup?", "What is the special drink?", "How much is that?", "Thank you"]

    try:
        for user_input in user_inputs:
            print(f"# {AuthorRole.USER}: '{user_input}'")
            async for response in agent.invoke(
                messages=user_input,
                thread=thread,
                on_intermediate_message=handle_intermediate_steps,
            ):
                thread = response.thread
                print(f"# {response.name}: {response.content}")
    finally:
        if thread:
            await thread.delete()

    # Print the intermediate steps collected by the callback
    print("\nIntermediate Steps:")
    for msg in intermediate_steps:
        if any(isinstance(item, FunctionResultContent) for item in msg.items):
            for fr in msg.items:
                if isinstance(fr, FunctionResultContent):
                    print(f"Function Result:> {fr.result} for function: {fr.name}")
        elif any(isinstance(item, FunctionCallContent) for item in msg.items):
            for fcc in msg.items:
                if isinstance(fcc, FunctionCallContent):
                    print(f"Function Call:> {fcc.name} with arguments: {fcc.arguments}")
        else:
            print(f"{msg.role}: {msg.content}")

    """
    Sample Output:

    # AuthorRole.USER: 'Hello'
    # Host: Hi there! How can I assist you with the menu today?
    # AuthorRole.USER: 'What is the special soup?'
    # Host: The special soup is Clam Chowder.
    # AuthorRole.USER: 'What is the special drink?'
    # Host: The special drink is Chai Tea.
    # AuthorRole.USER: 'How much is that?'
    # Host: Could you please specify the menu item you are asking about?
    # AuthorRole.USER: 'Thank you'
    # Host: You're welcome! If you have any questions about the menu or need assistance, feel free to ask.

    Intermediate Steps:
    AuthorRole.ASSISTANT: Hi there! How can I assist you with the menu today?
    AuthorRole.ASSISTANT:
    Function Result:>
        Special Soup: Clam Chowder
        Special Salad: Cobb Salad
        Special Drink: Chai Tea
    for function: MenuPlugin-get_specials
    AuthorRole.ASSISTANT: The special soup is Clam Chowder.
    AuthorRole.ASSISTANT:
    Function Result:>
        Special Soup: Clam Chowder
        Special Salad: Cobb Salad
        Special Drink: Chai Tea
    for function: MenuPlugin-get_specials
    AuthorRole.ASSISTANT: The special drink is Chai Tea.
    AuthorRole.ASSISTANT: Could you please specify the menu item you are asking about?
    AuthorRole.ASSISTANT: You're welcome! If you have any questions about the menu or need assistance, feel free to ask.
    """


if __name__ == "__main__":
    asyncio.run(main())
```
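
The same callback pattern should also apply to streaming invocation. Below is a minimal sketch only, assuming `invoke_stream` accepts the same `on_intermediate_message` parameter as `invoke`; `agent` and `handle_intermediate_steps` are the ones defined in the sample above:

```python
# A sketch only: assumes `invoke_stream` takes `on_intermediate_message`
# like `invoke` does. `agent` and `handle_intermediate_steps` come from
# the sample above.
async def stream_with_callback(agent, user_input: str) -> None:
    thread = None
    print(f"# {AuthorRole.USER}: '{user_input}'")
    async for chunk in agent.invoke_stream(
        messages=user_input,
        thread=thread,
        on_intermediate_message=handle_intermediate_steps,
    ):
        thread = chunk.thread
        print(chunk.content, end="", flush=True)  # stream tokens as they arrive
    print()
    if thread:
        await thread.delete()
```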