Leveraging OpenAI’s Agents to Improve My Agent

Existing Findings:

  • Rigid flow in SimpleAgent.handle_user_request(): Heavy branching between categorization, correction checks, and confirmation loops forces users into strict yes/no responses and long prompts before running tools.
  • Tool confirmation friction: The confirmation block within SimpleAgent.handle_user_request() expects precise inputs; any deviation reruns prompts rather than gracefully extracting intent.
  • LLM prompt templates: Prompts in SimpleAgent.generate_intelligent_analysis() and SimpleAgent.select_tool_for_request() include verbose, formal instructions lacking conversational tone or adaptive context.
  • Limited natural dialogue: Responses in SimpleMCPClient.interactive_session() echo system states (“[SUCCESS]…”) instead of natural acknowledgments, reducing conversational feel.
  • Underutilized conversation memory: self.conversation_memory stores rich context but isn’t surfaced to guide future responses or shorten interactions.

Recommended Plan:

  • Assess conversation entry points: Map user flows through SimpleMCPClient.interactive_session() and SimpleAgent.handle_user_request() to spot bottlenecks and redundant prompts.
  • Refine confirmation and correction loops: Allow more natural replies (e.g., “yes please”, “use entity …”) and collapse repeated explanations while still honoring corrections.
  • Lighten system messaging: Adjust prompt templates and console prints to provide concise, human-like feedback based on conversation_memory.
  • Pilot adaptive responses: Surface prior context (recent tools, entities, dates) when asking follow-up questions to build continuity.

Recommended Actions

  • Conversation Flow Audit
    Map transitions among SimpleMCPClient.interactive_session(), SimpleAgent.handle_user_request(), and _handle_tool_success() to detect repetitive prompts and opportunities for early exits.
  • Flexible Input Handling
    Introduce lightweight natural-language parsing helpers (regex + keyword heuristics) to interpret free-form confirmations, corrections, and follow-up questions without halting the flow (see the sketch after this list).
  • Prompt Refresh
    Rewrite the few-shot instruction blocks in SimpleAgent.select_tool_for_request() and SimpleAgent.generate_intelligent_analysis() with shorter, friendlier tone and context-aware cues pulled from self.conversation_memory.
  • Adaptive Responses
    Replace bracketed status prints with brief conversational feedback that references remembered context (e.g., “I’ll reuse the same entity list as before unless you change it.”).
  • Memory Integration
    Surface conversation_history, context_entities, and pending_corrections inside prompts and console output so the agent recalls prior choices and offers smarter defaults.
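As a concrete starting point for the flexible input handling and memory-backed defaults above, a minimal sketch. parse_confirmation and default_entities are illustrative helper names, and the context_entities key is assumed to match the memory fields mentioned in the last bullet; none of this is existing code.

import re
from typing import List, Optional

# Illustrative regex + keyword heuristics for free-form confirmations and corrections.
YES_PATTERN = re.compile(r"\b(yes|yep|yeah|sure|ok(ay)?|go ahead|please do)\b", re.IGNORECASE)
NO_PATTERN = re.compile(r"\b(no|nope|cancel|stop|hold off)\b", re.IGNORECASE)
CORRECTION_PATTERN = re.compile(r"\buse entity\s+(?P<entity>[\w-]+)", re.IGNORECASE)

def parse_confirmation(reply: str) -> dict:
    """Classify a free-form reply as a confirmation, decline, or correction."""
    correction = CORRECTION_PATTERN.search(reply)
    if correction:
        return {"action": "correct", "entity": correction.group("entity")}
    if YES_PATTERN.search(reply):
        return {"action": "confirm"}
    if NO_PATTERN.search(reply):
        return {"action": "decline"}
    return {"action": "unclear"}

def default_entities(conversation_memory: dict) -> Optional[List[str]]:
    """Reuse the most recent entity list so follow-up questions stay short."""
    return conversation_memory.get("context_entities") or None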

Cutting the fluff: OpenAI Agents architecture (direct functions):

User → OpenAI Agent → Direct Function Calls → Tools (NSS, RBICS, PA, etc.)

What This Means:

  1. No MCP Server Needed
  • Skip mcp_server.py entirely
  • No MCP protocol overhead
  • Direct Python function calls

Register with OpenAI Agent

agent = client.beta.assistants.create(
    model="gpt-4o",  # a model is required by the Assistants API; the name here is just an example
    tools=[
        {"type": "function", "function": {
            "name": "rbics_with_revenue",
            "description": "Get RBICS revenue data",
            "parameters": {
                "type": "object",
                "properties": {
                    "entity_ids": {"type": "array", "items": {"type": "string"}},
                    "day": {"type": "string"}
                }
            }
        }}
    ]
)
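The same pattern extends to the other tools. As a sketch, a matching spec for the get_universe_data function shown below; universe_tool is a hypothetical variable name and the description is taken from that function’s docstring.

# Hypothetical spec for the second direct function; append it to the tools list above.
universe_tool = {
    "type": "function",
    "function": {
        "name": "get_universe_data",
        "description": "Fetch primary listed data with market information",
        "parameters": {
            "type": "object",
            "properties": {
                "day": {"type": "string"},
                "is_primary": {"type": "boolean"}
            }
        }
    }
}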

Convert Your Tools to Direct Functions

Instead of MCP tool handlers, these are just regular async functions:

from typing import List

async def rbics_with_revenue(entity_ids: List[str], day: str) -> dict:
    """Get RBICS revenue data for companies"""
    # Direct database calls or API calls
    with DatabaseConnection() as conn:
        # Your existing logic from tools/rbics_query/core.py
        return get_rbics_revenue_data(entity_ids, day)

async def get_universe_data(day: str = None, is_primary: bool = True) -> dict:
    """Fetch primary listed data with market information"""
    # Direct call to your existing logic
    return get_ff_primary_listed_data(day, is_primary)
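To avoid hard-coding each tool name in the streaming handler further down, one option is a small name-to-function registry. FUNCTION_REGISTRY and dispatch_tool_call are hypothetical names, not part of the existing code.

import json

# Hypothetical dispatch table mapping the names registered with the assistant
# to the direct async functions defined above.
FUNCTION_REGISTRY = {
    "rbics_with_revenue": rbics_with_revenue,
    "get_universe_data": get_universe_data,
}

async def dispatch_tool_call(name: str, arguments: str) -> dict:
    """Look up the registered function and call it with the JSON-decoded arguments."""
    func = FUNCTION_REGISTRY.get(name)
    if func is None:
        return {"status": "error", "message": f"Unknown tool: {name}"}
    return await func(**json.loads(arguments))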

Note: OpenAI Agents use streaming for real-time responses, especially when calling functions.

OpenAI Agents Streaming Flow:

File paths: Return file locations in function results

Event Types You’ll Encounter

Example streaming events:

thread.created
run.created
run.in_progress
run.requires_action # ← Function calls happen here
tool_calls.created
tool_calls.in_progress
tool_calls.completed
run.completed
message.created
message.completed

Function Call Streaming Pattern

import json

async def handle_stream(stream):
    async for event in stream:
        if event.event == "thread.run.requires_action":
            # Agent wants to call your functions
            run = event.data  # the Run object carries the thread and run ids used below
            tool_calls = run.required_action.submit_tool_outputs.tool_calls

            # Execute your functions
            tool_outputs = []
            for call in tool_calls:
                if call.function.name == "rbics_with_revenue":
                    result = await rbics_with_revenue(**json.loads(call.function.arguments))
                    tool_outputs.append({
                        "tool_call_id": call.id,
                        "output": json.dumps(result)
                    })

            # Submit results back to the agent; this returns a new stream
            # carrying the continuation of the run
            follow_up = await client.beta.threads.runs.submit_tool_outputs(
                thread_id=run.thread_id,
                run_id=run.id,
                tool_outputs=tool_outputs,
                stream=True  # Continue streaming
            )
            await handle_stream(follow_up)

        elif event.event == "thread.message.delta":
            # Stream agent's response text
            print(event.data.delta.content[0].text.value, end="")

Your Long-Running Functions

Since your financial tools can take time (database queries, file generation), you need to handle that inside the function:

async def rbics_with_revenue(entity_ids: List[str], day: str) -> dict:
    # This might take 10-30 seconds for large datasets
    print("[EXECUTING] Fetching RBICS revenue data...")

    # Your existing logic here
    result_df = await get_rbics_data(entity_ids, day)

    # Generate file
    output_path = save_to_csv(result_df)

    return {
        "status": "success",
        "records": len(result_df),
        "file_path": str(output_path),
        "preview": result_df.head().to_dict("records")
    }

User Experience Considerations

The user sees real-time updates:

User: "Get revenue data for Apple and Microsoft"
Agent: "I'll fetch the RBICS revenue data for those companies..."
[EXECUTING] Fetching RBICS revenue data...  # ← Your function output
Agent: "Found 150 records. I've saved the data to revenue_data_20241201.csv. Here's a preview..."

Key Considerations:

  • Handle timeouts: Long database queries might time out (see the sketch below).
  • Progress updates: Consider yielding progress for long operations.
  • Error handling: Stream errors gracefully.
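A minimal sketch of the timeout and error points above, wrapping the rbics_with_revenue function from earlier with asyncio; rbics_with_revenue_safe is a hypothetical name and the 60-second limit is an arbitrary example.

import asyncio
from typing import List

async def rbics_with_revenue_safe(entity_ids: List[str], day: str, timeout_s: float = 60.0) -> dict:
    """Run the long query with a timeout and return failures as structured results."""
    try:
        return await asyncio.wait_for(rbics_with_revenue(entity_ids, day), timeout=timeout_s)
    except asyncio.TimeoutError:
        return {"status": "error", "message": f"Query exceeded {timeout_s:.0f}s; try fewer entities or a narrower date."}
    except Exception as exc:
        # Return the failure to the agent instead of crashing the streaming run.
        return {"status": "error", "message": str(exc)}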
