
Conversation

@antoniibelyshev (Collaborator) commented Jun 10, 2025

  • Added an executeMultipleReplies method to the prompt executor and clients to support LLM requests that return multiple replies.
  • Implemented multi-reply handling in the OpenAI client.
  • Introduced a reply choice strategy and a dedicated prompt executor that applies it.
  • Created dedicated prompt executor nodes to receive and handle multiple replies in an agent.
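For readers skimming the thread, the shape of the change can be sketched in plain Kotlin. Everything here except the names executeMultipleReplies, LLMReply, and ChoiceStrategy (which appear in the diff below) is a hypothetical stand-in, not koog's actual API:

```kotlin
// Self-contained sketch of the multi-reply flow added in this PR.
// Response, FakeExecutor, and the strategy body are placeholders; only
// executeMultipleReplies / LLMReply / ChoiceStrategy come from the diff.

data class Response(val content: String)

// One reply from the model may consist of several messages.
typealias LLMReply = List<Response>

// A strategy picks one reply out of the candidates returned by the model.
fun interface ChoiceStrategy {
    fun choose(choices: List<LLMReply>): LLMReply
}

class FakeExecutor {
    // Returns n candidate replies, analogous to OpenAI's `choices` array.
    fun executeMultipleReplies(prompt: String, n: Int): List<LLMReply> =
        (1..n).map { i -> listOf(Response("reply $i to: $prompt")) }
}

fun main() {
    val executor = FakeExecutor()
    val choices = executor.executeMultipleReplies("hello", n = 3)
    val first = ChoiceStrategy { it.first() } // analogue of a "first choice" default
    println(first.choose(choices).single().content) // prints "reply 1 to: hello"
}
```

The point of splitting the strategy from the executor is that callers who want all n replies can bypass the strategy entirely, which is the use case discussed in the LLMReply-vs-LLMChoice naming thread below.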


Type of the change

  • New feature
  • Bug fix
  • Documentation fix

Checklist for all pull requests

  • The pull request has a description of the proposed change
  • I read the Contributing Guidelines before opening the pull request
  • The pull request uses develop as the base branch
  • Tests for the changes have been added
  • All new and existing tests passed
Additional steps for pull requests adding a new feature
  • An issue describing the proposed change exists
  • The pull request includes a link to the issue
  • The change was discussed and approved in the issue
  • Docs have been added / updated

@@ -6,6 +6,8 @@ import ai.koog.prompt.llm.LLModel
import ai.koog.prompt.message.Message
import kotlinx.coroutines.flow.Flow

public typealias LLMReply = List<Message.Response>
Collaborator:

Or maybe LLMChoice? WDYT?

Collaborator Author:

Both are reasonable. LLMChoice would suggest that one should choose from the responses, while AFAIU one of the use cases is to use all responses; that's why I thought it's better to use LLMReply and not focus only on the choice use case.

Collaborator:

Aren't they called choices in OpenAI?

# Conflicts:
#	prompt/prompt-executor/prompt-executor-clients/src/commonMain/kotlin/ai/koog/prompt/executor/clients/LLMClient.kt
@Ololoshechkin (Collaborator) left a comment


@antoniibelyshev please fix my comments above, and after that -- please feel free to merge this change

import ai.koog.prompt.llm.LLModel
import ai.koog.prompt.message.Message

public class PromptExecutorChoice(
Collaborator:

Please add KDoc here and to all other public API declarations.

Collaborator:

Let's actually also rename it to PromptExecutorWithChoiceSelection, wdyt?

import ai.koog.prompt.dsl.Prompt
import ai.koog.prompt.executor.model.LLMChoice

public class DummyChoiceStrategy : ChoiceStrategy {
Collaborator:

Please add KDoc. And also let's call it not Dummy but something like ChoiceStrategy.FirstChoice.
Also let's make it an object and move it inside ChoiceStrategy.

Collaborator:

Another option is to call it just "Default" so that users would access it as ChoiceStrategy.Default.

Another note -- maybe rename ChoiceStrategy to ChoiceSelectionStrategy? Wdyt?

Collaborator:

Or LLMChoiceSelectionStrategy even? Or is it too long?

Collaborator Author:

The names for the overriding classes will be even longer :) I think ChoiceSelectionStrategy will be enough to convey the idea behind the interface.
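As a side note, the accessor pattern proposed above (ChoiceSelectionStrategy.Default) is the usual Kotlin nesting idiom. A minimal self-contained sketch, with the reply type and method signature as placeholders rather than koog's real declarations:

```kotlin
// Sketch of the nested-default idiom discussed in this thread; Response and
// the choose() signature are hypothetical stand-ins, not koog's actual API.
data class Response(val content: String)
typealias LLMChoice = List<Response>

interface ChoiceSelectionStrategy {
    fun choose(choices: List<LLMChoice>): LLMChoice

    // Accessed as ChoiceSelectionStrategy.Default: simply takes the first choice.
    object Default : ChoiceSelectionStrategy {
        override fun choose(choices: List<LLMChoice>): LLMChoice = choices.first()
    }
}

fun main() {
    val choices = listOf(listOf(Response("a")), listOf(Response("b")))
    println(ChoiceSelectionStrategy.Default.choose(choices).single().content) // prints "a"
}
```

Nesting the object inside the interface keeps the default implementation discoverable from the type itself, without adding a separate top-level class to the public API surface.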

* @property print A function responsible for displaying messages to the user, e.g., for showing prompts or feedback.
* @property read A function to capture user input.
*/
public class AskUserChoiceStrategy(
Collaborator:

Is it needed in the public API?
Maybe let's move it to tests or examples?

edge(nodeSendToolResult forwardTo nodeExecuteTool onToolCall { true })
}

val askChoiceStrategy = AskUserChoiceStrategy(promptShowToUser = { prompt ->
Collaborator:

Let's move the whole AskUserChoiceStrategy to this chess example

val nodeCallLLM by nodeLLMRequest("sendInput")
val nodeExecuteTool by nodeExecuteTool("nodeExecuteTool")
val nodeSendToolResult by nodeLLMSendResultsMultipleChoices("nodeSendToolResult")
val nodeChooseChoice by nodeChoose(askChoiceStrategy, "chooseLLMChoice")
Collaborator:

Let's rename "nodeChoose" to something more verbose like "nodeChooseLLMResponse"

Collaborator:

I mean in the public API also

@antoniibelyshev merged commit f526c96 into JetBrains:develop on Jun 19, 2025
6 of 8 checks passed