RAG with custom tools and functionalities #3678
Replies: 1 comment
-
Hey @juancastagnino, happy to hear you are trying out Agno!
If the final intention of the
Depending on the complexity of that grounding check, the Team leader should be able to handle it itself. You can clarify this in the Team's instructions. Otherwise, you can indeed create a new Agent focused on that task and add it to the Team.
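To illustrate the kind of check such a dedicated Agent would encapsulate (independent of any Agno API), here is a deliberately naive grounding heuristic: the function name, word-overlap approach, and threshold below are all hypothetical, not anything from Agno — a real implementation would likely ask a model to judge support claim by claim.

```python
def is_grounded(answer: str, sources: list[str], threshold: float = 0.5) -> bool:
    """Naive grounding check: the fraction of alphabetic words in the
    answer that also appear somewhere in the retrieved sources must
    meet `threshold`. A rough stand-in for an LLM-based check."""
    source_words: set[str] = set()
    for source in sources:
        source_words.update(source.lower().split())
    answer_words = [w for w in answer.lower().split() if w.isalpha()]
    if not answer_words:
        return False
    supported = sum(1 for w in answer_words if w in source_words)
    return supported / len(answer_words) >= threshold
```

An agent doing this job would receive the draft answer plus the retrieved chunks in its context and return a pass/fail verdict for the Team leader.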
-
We use complex RAG orchestrations built with the LangGraph or Semantic Kernel frameworks. Usually the workflow is a triage step that routes the conversation: greetings, off_topic questions, etc. get automatic answers, while questions that need retrieval of sources go to Azure Search indexes. We also have functionality to generate filter strings to narrow results in Azure Search, grounding checks, and even natural-language-to-SQL query tools.
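For context, the filter strings in question follow Azure AI Search's OData `$filter` syntax (`field eq 'value'`, clauses joined with `and`/`or`). A minimal generator for the simple equality case can be sketched with the standard library only; the function name and the example field names are made up for illustration:

```python
def build_filter_string(filters: dict[str, str]) -> str:
    """Build an Azure AI Search OData $filter expression from
    field -> value pairs, e.g. {"category": "hr"} -> "category eq 'hr'".
    Single quotes inside values are escaped by doubling, per OData rules."""
    clauses = []
    for field, value in filters.items():
        escaped = value.replace("'", "''")
        clauses.append(f"{field} eq '{escaped}'")
    return " and ".join(clauses)
```

In the LLM-based version, the filter-generator agent's job is to extract these field/value pairs from the user question before the string is assembled.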
I recently came across Agno and want to try implementing an orchestration with it. I'm a bit confused about how to handle inputs to the team agent and between the members of the team. With LangGraph and Semantic Kernel we keep the orchestration flow inside a function `get_answer` that receives certain parameters; we simply call it to get an answer dictionary to send to the frontend, and history is saved to databases outside the flow of the conversation, like: `answer_dict = get_answer(question, chat_history)`.
I was thinking of a simple team of agents, with a filter_generator_agent and an azure_search_agent (with a retrieval tool), but I need a bit of help on how to handle this workflow and the inputs at each step with Agno. Would you use a triage_agent and a final_response_agent with a grounding check, or leave this to the main coordinator agent?
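To make the triage step concrete outside of any framework: it is just intent routing. A crude keyword-based version looks like the sketch below (intents and keywords are placeholders; in practice the triage agent or Team leader would classify intent with the model, since off_topic detection in particular is not keyword-friendly):

```python
GREETINGS = {"hi", "hello", "hey", "good morning"}

def triage(question: str) -> str:
    """Route a question to 'greeting', 'about_bot' or 'retrieval'.
    Anything unrecognized falls through to the retrieval path."""
    q = question.lower().strip()
    if any(q.startswith(g) for g in GREETINGS):
        return "greeting"
    if "who are you" in q or "what can you do" in q:
        return "about_bot"
    return "retrieval"
```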
Here's my basic setup for now:
```python
import json
from textwrap import dedent

# conversation_sumary_agent, filter_generator_agent, azure_retrieval_tool,
# clear_html and get_last_messages are defined elsewhere in my project.

team_leader = Team(
    name="RAG Team",
    mode="coordinate",
    model=AzureOpenAI(id="gpt-4o"),
    members=[conversation_sumary_agent, filter_generator_agent],
    tools=[azure_retrieval_tool],
    instructions=dedent("""
        You are an experienced assistant that answers questions.
        Follow these steps when answering the questions:
        1. Summarize the last rounds of the chat history.
        2. Based on the user question and history summary, check the question
           intent. If it is a greeting, about_bot or off_topic question,
           generate a response automatically. Use the history summary to check
           if the question is a follow-up question and if previous filtering
           was used in Azure Search retrievals.
        3. Generate an optimized search_query based on the user question and
           history summary to retrieve sources from the Azure Search index.
        4. If the user question or history summary states the need to filter
           results in Azure Search by a certain category value, use the
           filter_generator agent to get the filter_string required by
           Azure Search.
        5. Always cite your sources with links.
        6. If the information needed to answer the question is not present in
           the retrieved sources, say you don't have enough information to
           answer the question.
        Your style guide:
        - Present information in a clear, professional style
        - Use bullet points for key takeaways"""),
    markdown=True,
    show_members_responses=True,
    enable_agentic_context=True,
    add_datetime_to_instructions=True,
    success_criteria="The team has successfully completed the task.",
)


async def get_answer(history, question):
    chat_rounds = clear_html(json.dumps(get_last_messages(history, 6), ensure_ascii=False))
    chat_history = str(chat_rounds)
    # passing two positional arguments here raises the TypeError below
    answer_dict = await team_leader.print_response(
        question, chat_history,
        stream=False, stream_intermediate_steps=False, show_full_reasoning=True,
    )
    return answer_dict
```
I noticed that passing both the question and the history as inputs to the team_leader throws an error, but when passing only the question it seems to work, although I haven't fully configured the Azure OpenAI model yet.
```
TypeError: Team.print_response() takes from 1 to 2 positional arguments but 3 positional arguments (and 3 keyword-only arguments) were given
```
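Judging purely from that TypeError, `print_response` accepts only a single message positionally, so one workaround (assuming nothing about Agno beyond the error message itself) is to fold the serialized history into the message string before calling the team. The helper name below is made up:

```python
def build_message(question: str, chat_history: str) -> str:
    """Combine the question and serialized chat history into the single
    message string that the team call expects."""
    return (
        f"Chat history (last rounds):\n{chat_history}\n\n"
        f"User question: {question}"
    )
```

Then `build_message(question, chat_history)` would be passed as the only positional argument. Note too that a `print_response`-style method is presumably for printed terminal output; if a return value is needed for the frontend, the docs may describe a run-style method instead — worth checking.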
Thanks