Merged

Commits (28)
- 8cdfe5e  responses api wip (moonbox3, Mar 18, 2025)
- 0165a78  wip openai response agent (moonbox3, Mar 19, 2025)
- ea99c0a  Working streaming now. Vision working. Other samples working. (moonbox3, Mar 19, 2025)
- f45443a  wip (moonbox3, Mar 20, 2025)
- 5473529  Remove old file (moonbox3, Mar 20, 2025)
- 2fe9409  Fix mypy errors. (moonbox3, Mar 20, 2025)
- 6287f9d  merge main to branch (moonbox3, Mar 25, 2025)
- f4b22b5  agent wip (moonbox3, Mar 25, 2025)
- 2bcc060  wip (moonbox3, Mar 25, 2025)
- 4fa402f  Merge main to branch (moonbox3, Mar 25, 2025)
- 245ebfb  wip (moonbox3, Mar 26, 2025)
- 5653d26  responses agent updates (moonbox3, Mar 26, 2025)
- e2367a4  Mypy updates (moonbox3, Mar 26, 2025)
- a2db8ad  Merge branch 'main' into openai-response-agent (moonbox3, Mar 27, 2025)
- 6a68854  Add support for OpenAI Responses API as an Agent (moonbox3, Mar 27, 2025)
- 833a2a8  Merge branch 'main' into openai-response-agent (moonbox3, Mar 27, 2025)
- 9f1413a  PR feedback (moonbox3, Mar 31, 2025)
- c6eec9f  Add responses thread actions tests. (moonbox3, Mar 31, 2025)
- aa2185a  Updates and cleanup (moonbox3, Apr 2, 2025)
- 16274e7  Merge branch 'main' into openai-response-agent (moonbox3, Apr 2, 2025)
- 706e17e  Handle yielding content via callback (moonbox3, Apr 2, 2025)
- 5cb03e1  Merge branch 'openai-response-agent' of github.com:moonbox3/semantic-… (moonbox3, Apr 2, 2025)
- 45f41d4  Add sample output (moonbox3, Apr 2, 2025)
- ae8e2d9  Update README (moonbox3, Apr 2, 2025)
- 99f2bbb  PR feedback (moonbox3, Apr 2, 2025)
- 0fe4100  Improve non-complete polling so we don't poll forever (moonbox3, Apr 2, 2025)
- f98b601  Fix mypy errors (moonbox3, Apr 2, 2025)
- 7c7adf6  Merge branch 'main' into openai-response-agent (moonbox3, Apr 2, 2025)
2 changes: 1 addition & 1 deletion python/pyproject.toml
@@ -34,7 +34,7 @@ dependencies = [
     "numpy >= 1.25.0; python_version < '3.12'",
     "numpy >= 1.26.0; python_version >= '3.12'",
     # openai connector
-    "openai ~= 1.61",
+    "openai >= 1.67",
     # openapi and swagger
     "openapi_core >= 0.18,<0.20",
     "websockets >= 13, < 16",
10 changes: 10 additions & 0 deletions python/samples/concepts/README.md
@@ -59,6 +59,15 @@
- [OpenAI Assistant Templating Streaming](./agents/openai_assistant/openai_assistant_templating_streaming.py)
- [OpenAI Assistant Vision Streaming](./agents/openai_assistant/openai_assistant_vision_streaming.py)

#### [OpenAI Responses Agent](../../semantic_kernel/agents/open_ai/openai_responses_agent.py)

- [OpenAI Responses Message Callback Streaming](./agents/openai_responses/responses_agent_message_callback_streaming.py)
- [OpenAI Responses Message Callback](./agents/openai_responses/responses_agent_message_callback.py)
- [OpenAI Responses File Search Streaming](./agents/openai_responses/responses_agent_file_search_streaming.py)
- [OpenAI Responses Plugins Streaming](./agents/openai_responses/responses_agent_plugins_streaming.py)
- [OpenAI Responses Reuse Existing Thread ID](./agents/openai_responses/responses_agent_reuse_existing_thread_id.py)
- [OpenAI Responses Web Search Streaming](./agents/openai_responses/responses_agent_web_search_streaming.py)

### Audio - Using services that support audio-to-text and text-to-audio conversion

- [Chat with Audio Input](./audio/01-chat_with_audio_input.py)
@@ -157,6 +166,7 @@

- [Cycles with Fan-In](./processes/cycles_with_fan_in.py)
- [Nested Process](./processes/nested_process.py)
- [Plan and Execute](./processes/plan_and_execute.py)

### PromptTemplates - Using [`Templates`](https://github.com/microsoft/semantic-kernel/blob/main/python/semantic_kernel/prompt_template/prompt_template_base.py) with parametrization for `Prompt` rendering

7 changes: 6 additions & 1 deletion python/samples/concepts/agents/README.md
@@ -10,6 +10,7 @@ This project contains a step-by-step guide to get started with _Semantic Kernel
- For the use of Streaming OpenAI Assistant agents, the minimum allowed Semantic Kernel pypi version is 1.11.0.
- For the use of AzureAI and Bedrock agents, the minimum allowed Semantic Kernel pypi version is 1.21.0.
- For the use of Crew.AI as a plugin, the minimum allowed Semantic Kernel pypi version is 1.21.1.
- For the use of OpenAI Responses agents, the minimum allowed Semantic Kernel pypi version is 1.27.0.


## Source
@@ -28,6 +29,7 @@ chat_completion_agent|How to use Semantic Kernel Chat Completion agents that lev
bedrock|How to use [AWS Bedrock agents](https://aws.amazon.com/bedrock/agents/) in Semantic Kernel.
mixed_chat|How to combine different agent types.
openai_assistant|How to use [OpenAI Assistants](https://platform.openai.com/docs/assistants/overview) in Semantic Kernel.
openai_responses|How to use [OpenAI Responses](https://platform.openai.com/docs/api-reference/responses) in Semantic Kernel.

## Configuring the Kernel

@@ -57,6 +59,9 @@ You can explicitly create a specific implementation for the desired `Agent` that
Below is a sample code snippet demonstrating thread management:

```python
from semantic_kernel.agents import ChatCompletionAgent
from semantic_kernel.connectors.ai.open_ai import AzureChatCompletion

USER_INPUTS = [
    "Why is the sky blue?",
]
@@ -71,7 +76,7 @@ agent = ChatCompletionAgent(
# 2. Create a thread to hold the conversation
# If no thread is provided, a new thread will be
# created and returned with the initial response
-thread: ChatCompletionAgentThread = None
+thread = None

for user_input in USER_INPUTS:
    print(f"# User: {user_input}")
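
The diff above shows only the changed hunks of the README snippet. For context, here is a minimal end-to-end sketch of the thread-management pattern it describes, assuming Azure OpenAI settings are available via environment variables; the `get_response` and `response.thread` usage mirrors the Responses Agent samples added later in this PR.

```python
# Minimal sketch of the thread-management pattern (assumes Azure OpenAI
# settings are configured via environment variables).
import asyncio

from semantic_kernel.agents import ChatCompletionAgent
from semantic_kernel.connectors.ai.open_ai import AzureChatCompletion

USER_INPUTS = [
    "Why is the sky blue?",
]


async def main() -> None:
    # 1. Create the agent backed by an Azure OpenAI chat completion service
    agent = ChatCompletionAgent(
        service=AzureChatCompletion(),
        name="Assistant",
        instructions="Answer the user's questions.",
    )

    # 2. Start with no thread; the first response returns a new one
    thread = None

    for user_input in USER_INPUTS:
        print(f"# User: {user_input}")
        response = await agent.get_response(messages=user_input, thread=thread)
        print(f"# {response.name}: {response}")
        # Reuse the returned thread so conversation state carries over
        thread = response.thread


if __name__ == "__main__":
    asyncio.run(main())
```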
94 changes: 94 additions & 0 deletions python/samples/concepts/agents/openai_responses/responses_agent_file_search_streaming.py
@@ -0,0 +1,94 @@
# Copyright (c) Microsoft. All rights reserved.
import asyncio
import os

from semantic_kernel.agents import OpenAIResponsesAgent
from semantic_kernel.contents.streaming_chat_message_content import StreamingChatMessageContent

"""
The following sample demonstrates how to create an OpenAI Responses Agent.
The sample shows how to have the agent answer questions about the provided
document with streaming responses.

The interaction with the agent is via the `invoke_stream` method, which sends a
user input to the agent and streams the response back. The conversation
history is maintained by the agent service, i.e., the responses are automatically
associated with the thread, so client code does not need to maintain the
conversation history.
"""


# Simulate a conversation with the agent
USER_INPUTS = [
    "By birthday, who is the youngest employee?",
    "Who works in sales?",
    "I have a customer request, who can help me?",
]


async def main():
    # 1. Create the client using OpenAI resources and configuration
    client, model = OpenAIResponsesAgent.setup_resources()

    pdf_file_path = os.path.join(
        os.path.dirname(os.path.dirname(os.path.realpath(__file__))), "resources", "employees.pdf"
    )

    with open(pdf_file_path, "rb") as file:
        file = await client.files.create(file=file, purpose="assistants")

    vector_store = await client.vector_stores.create(
        name="step4_assistant_file_search",
        file_ids=[file.id],
    )

    file_search_tool = OpenAIResponsesAgent.configure_file_search_tool(vector_store.id)

    # 2. Create a Semantic Kernel agent for the OpenAI Responses API
    agent = OpenAIResponsesAgent(
        ai_model_id=model,
        client=client,
        instructions="Find answers to the user's questions in the provided file.",
        name="FileSearch",
        tools=[file_search_tool],
    )

    # 3. Create a thread for the agent
    # If no thread is provided, a new thread will be
    # created and returned with the initial response
    thread = None

    response_chunks: list[StreamingChatMessageContent] = []
    for user_input in USER_INPUTS:
        print(f"# User: '{user_input}'")
        # 4. Invoke the agent for the current message and print the response
        first_chunk = True
        async for response in agent.invoke_stream(messages=user_input, thread=thread):
            thread = response.thread
            response_chunks.append(response)
            if first_chunk:
                print(f"# {response.name}: ", end="", flush=True)
                first_chunk = False
            print(response.content, end="", flush=True)
        print()

"""
# User: 'By birthday, who is the youngest employee?'
# Agent: The youngest employee by birthday is Teodor Britton, born on January 9, 1997.
# User: 'Who works in sales?'
# Agent: The employees who work in sales are:

- Mariam Jaslyn, Sales Representative
- Hicran Bea, Sales Manager
- Angelino Embla, Sales Representative.
# User: 'I have a customer request, who can help me?'
# Agent: For a customer request, you could reach out to the following people in the sales department:

- Mariam Jaslyn, Sales Representative
- Hicran Bea, Sales Manager
- Angelino Embla, Sales Representative.
"""


if __name__ == "__main__":
    asyncio.run(main())
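
The sample above leaves the uploaded file and vector store in place. A possible cleanup step, sketched under the assumption that the client returned by `setup_resources()` exposes the OpenAI SDK's `files.delete` and `vector_stores.delete` methods, could run at the end of `main()`:

```python
# Hypothetical cleanup at the end of main(); `client`, `file`,
# `vector_store`, and `thread` are the objects created above.
try:
    if thread:
        await thread.delete()
finally:
    # Remove the vector store and the uploaded file from the account
    await client.vector_stores.delete(vector_store.id)
    await client.files.delete(file.id)
```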
130 changes: 130 additions & 0 deletions python/samples/concepts/agents/openai_responses/responses_agent_message_callback.py
@@ -0,0 +1,130 @@
# Copyright (c) Microsoft. All rights reserved.
import asyncio
from typing import Annotated

from semantic_kernel.agents import AzureResponsesAgent
from semantic_kernel.contents import AuthorRole, FunctionCallContent, FunctionResultContent
from semantic_kernel.contents.chat_message_content import ChatMessageContent
from semantic_kernel.functions import kernel_function

"""
The following sample demonstrates how to create an OpenAI
Responses Agent using either Azure OpenAI or OpenAI. The
Responses Agent allows for function calling, file search, and
web search tools. Responses Agent threads are used to manage the
conversation state, similar to a Semantic Kernel Chat History.
Additionally, the invocation configures a message callback
to receive the conversation messages as they are produced.
"""


# Define a sample plugin for the sample
class MenuPlugin:
    """A sample Menu Plugin used for the concept sample."""

    @kernel_function(description="Provides a list of specials from the menu.")
    def get_specials(self) -> Annotated[str, "Returns the specials from the menu."]:
        return """
        Special Soup: Clam Chowder
        Special Salad: Cobb Salad
        Special Drink: Chai Tea
        """

    @kernel_function(description="Provides the price of the requested menu item.")
    def get_item_price(
        self, menu_item: Annotated[str, "The name of the menu item."]
    ) -> Annotated[str, "Returns the price of the menu item."]:
        return "$9.99"


intermediate_steps: list[ChatMessageContent] = []


async def handle_intermediate_steps(message: ChatMessageContent) -> None:
    intermediate_steps.append(message)


async def main():
    # 1. Create the client using Azure OpenAI resources and configuration
    client, model = AzureResponsesAgent.setup_resources()

    # 2. Create a Semantic Kernel agent for the OpenAI Responses API
    agent = AzureResponsesAgent(
        ai_model_id=model,
        client=client,
        name="Host",
        instructions="Answer questions about the menu.",
        plugins=[MenuPlugin()],
    )

    # 3. Create a thread for the agent
    # If no thread is provided, a new thread will be
    # created and returned with the initial response
    thread = None

    user_inputs = ["Hello", "What is the special soup?", "What is the special drink?", "How much is that?", "Thank you"]

    try:
        for user_input in user_inputs:
            print(f"# {AuthorRole.USER}: '{user_input}'")
            async for response in agent.invoke(
                messages=user_input,
                thread=thread,
                on_intermediate_message=handle_intermediate_steps,
            ):
                thread = response.thread
                print(f"# {response.name}: {response.content}")
    finally:
        await thread.delete() if thread else None

    # Print the final chat history
    print("\nIntermediate Steps:")
    for msg in intermediate_steps:
        if any(isinstance(item, FunctionResultContent) for item in msg.items):
            for fr in msg.items:
                if isinstance(fr, FunctionResultContent):
                    print(f"Function Result:> {fr.result} for function: {fr.name}")
        elif any(isinstance(item, FunctionCallContent) for item in msg.items):
            for fcc in msg.items:
                if isinstance(fcc, FunctionCallContent):
                    print(f"Function Call:> {fcc.name} with arguments: {fcc.arguments}")
        else:
            print(f"{msg.role}: {msg.content}")

"""
Sample Output:

# AuthorRole.USER: 'Hello'
# Host: Hi there! How can I assist you with the menu today?
# AuthorRole.USER: 'What is the special soup?'
# Host: The special soup is Clam Chowder.
# AuthorRole.USER: 'What is the special drink?'
# Host: The special drink is Chai Tea.
# AuthorRole.USER: 'How much is that?'
# Host: Could you please specify the menu item you are asking about?
# AuthorRole.USER: 'Thank you'
# Host: You're welcome! If you have any questions about the menu or need assistance, feel free to ask.

Intermediate Steps:
AuthorRole.ASSISTANT: Hi there! How can I assist you with the menu today?
AuthorRole.ASSISTANT:
Function Result:>
Special Soup: Clam Chowder
Special Salad: Cobb Salad
Special Drink: Chai Tea
for function: MenuPlugin-get_specials
AuthorRole.ASSISTANT: The special soup is Clam Chowder.
AuthorRole.ASSISTANT:
Function Result:>
Special Soup: Clam Chowder
Special Salad: Cobb Salad
Special Drink: Chai Tea
for function: MenuPlugin-get_specials
AuthorRole.ASSISTANT: The special drink is Chai Tea.
AuthorRole.ASSISTANT: Could you please specify the menu item you are asking about?
AuthorRole.ASSISTANT: You're welcome! If you have any questions about the menu or need assistance, feel free to ask.
"""


if __name__ == "__main__":
    asyncio.run(main())
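
The concepts README also lists a streaming variant of this sample (`responses_agent_message_callback_streaming.py`). A minimal sketch of its invocation loop, assuming the same `agent`, `thread`, `user_inputs`, and `handle_intermediate_steps` defined above, and that `invoke_stream` accepts the same `on_intermediate_message` callback:

```python
# Hedged sketch of the streaming counterpart; this loop would replace the
# non-streaming loop inside main() above.
for user_input in user_inputs:
    print(f"# {AuthorRole.USER}: '{user_input}'")
    first_chunk = True
    async for chunk in agent.invoke_stream(
        messages=user_input,
        thread=thread,
        on_intermediate_message=handle_intermediate_steps,
    ):
        thread = chunk.thread
        if first_chunk:
            print(f"# {chunk.name}: ", end="", flush=True)
            first_chunk = False
        print(chunk.content, end="", flush=True)
    print()
```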