Cook Book Review: Agent SDK Case Parallel Agents
First create agents:

# Agent focusing on product features
features_agent = Agent(
    name="FeaturesAgent",
    instructions="Extract the key product features from the review."
)

# Agent focusing on pros & cons
pros_cons_agent = Agent(
    name="ProsConsAgent",
    instructions="List the pros and cons mentioned in the review."
)

# Agent focusing on sentiment analysis
sentiment_agent = Agent(
    name="SentimentAgent",
    instructions="Summarize the …

… Continue reading Cook Book Review: Agent SDK Case Parallel Agents
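The three agents above are meant to analyze the same review concurrently. A minimal sketch of that fan-out pattern with asyncio — the stub coroutines below stand in for the Agents SDK's Runner.run calls, so the snippet runs without an API key:

```python
import asyncio

# Stand-ins for Runner.run(agent, review): each "agent" returns its slice of the analysis.
async def run_features(review: str) -> str:
    return f"features({review})"

async def run_pros_cons(review: str) -> str:
    return f"pros_cons({review})"

async def run_sentiment(review: str) -> str:
    return f"sentiment({review})"

async def analyze(review: str) -> dict:
    # Fan out: all three agents work on the same review at the same time.
    features, pros_cons, sentiment = await asyncio.gather(
        run_features(review),
        run_pros_cons(review),
        run_sentiment(review),
    )
    return {"features": features, "pros_cons": pros_cons, "sentiment": sentiment}

result = asyncio.run(analyze("Great battery, clunky UI"))
```

In the real cookbook recipe each stub would be an awaited Runner.run call on the corresponding agent; asyncio.gather is what makes the three runs parallel rather than sequential.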
Cook Book Review: Agentic SDK Use Case
There are a dozen such reviews since OpenAI launched the Agents SDK in March 2025. Let's start with the very first one: Automating Dispute Management with Agents SDK and Stripe API by Dan Bell. First we need to define several helper function tools that support the dispute-processing workflow. close_dispute automatically closes a Stripe … Continue reading Cook Book Review: Agentic SDK Use Case
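The close_dispute helper pairs a Stripe call with a function-tool schema the model can invoke. A hypothetical sketch of that pairing — the Stripe call is stubbed out so the snippet runs offline, and the exact helper signature in the cookbook may differ:

```python
import json

# Hypothetical stand-in for the Stripe call; the real helper would invoke
# stripe.Dispute.close(dispute_id) here.
def close_dispute(dispute_id: str) -> dict:
    # Closing a dispute concedes it; Stripe reports the status as "lost".
    return {"id": dispute_id, "status": "lost"}

# Function-tool schema the agent sees: a name, description, and JSON schema
# for the arguments.
close_dispute_tool = {
    "type": "function",
    "name": "close_dispute",
    "description": "Close (concede) a Stripe dispute by its id.",
    "parameters": {
        "type": "object",
        "properties": {"dispute_id": {"type": "string"}},
        "required": ["dispute_id"],
    },
}

# Dispatch a simulated tool call as the model would emit it.
tool_call_args = json.dumps({"dispute_id": "dp_123"})
result = close_dispute(**json.loads(tool_call_args))
```

The agent only ever sees the schema; the runtime parses the model's JSON arguments and routes them to the Python function.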
Cook Book Review: Prompting etc.
To use the Codex CLI to automatically fix CI failures, the following YAML shows a GitHub Action that triggers automatically when CI fails, installs Codex, runs codex exec, and then opens a PR on the failing branch with the fix. Replace "CI" with the name of the workflow you want to monitor. The GPT-5-Codex Prompting Guide is a … Continue reading Cook Book Review: Prompting etc.
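A hedged sketch of what such a workflow could look like — the trigger and checkout steps use standard GitHub Actions syntax, but the Codex install command, the codex exec prompt, and the PR action here are assumptions that should be checked against the cookbook's own YAML:

```yaml
name: codex-autofix
on:
  workflow_run:
    workflows: ["CI"]          # replace with the workflow name to monitor
    types: [completed]

jobs:
  fix:
    if: ${{ github.event.workflow_run.conclusion == 'failure' }}
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          ref: ${{ github.event.workflow_run.head_branch }}   # the failing branch
      - name: Install Codex CLI
        run: npm install -g @openai/codex
      - name: Ask Codex to fix the failure
        run: codex exec "Investigate the CI failure on this branch and fix it."
        env:
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
      - name: Open a PR with the fix
        uses: peter-evans/create-pull-request@v6
        with:
          title: "Codex fix for failing CI"
```

The workflow_run trigger is what lets this workflow observe another workflow's conclusion rather than running on every push.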
Use OpenAI Tools: Web Search, Code Interpreter, File Search and Computer Use
Web search is available in the Responses API as the generally available version of the tool, web_search, as well as the earlier preview version, web_search_preview. To use web search in the Chat Completions API, use the specialized web-search models gpt-4o-search-preview and gpt-4o-mini-search-preview. Web search is limited to a context window of 128,000 tokens (even with the gpt-4.1 and gpt-4.1-mini models). The Code Interpreter is … Continue reading Use OpenAI Tools: Web Search, Code Interpreter, File Search and Computer Use
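The two routes described above differ only in where the search capability lives: a tool entry in the Responses API versus a specialized model in Chat Completions. A sketch of the two request payloads — only the dictionaries are built here (actually sending them needs an OpenAI client and API key), and the example model names and input strings are placeholders:

```python
# Responses API: web search is a tool attached to a general model.
# Sending it would be client.responses.create(**responses_payload).
responses_payload = {
    "model": "gpt-4.1",
    "tools": [{"type": "web_search"}],
    "input": "What is the latest OpenAI Agents SDK release?",
}

# Chat Completions API: web search comes from a specialized search model instead.
# Sending it would be client.chat.completions.create(**chat_payload).
chat_payload = {
    "model": "gpt-4o-search-preview",
    "messages": [{"role": "user", "content": "What is the latest OpenAI Agents SDK release?"}],
}
```

Note the asymmetry: in the Responses payload the model stays general-purpose and the tool list carries the capability; in the Chat Completions payload the model name itself selects it.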
Core Concepts to Use OpenAI Tools: Function Calling
Function calling and tool calling may sound the same, but we need to distinguish function tools, custom tools, and built-in tools. A function is a specific kind of tool, defined by a JSON schema. In addition to function tools, there are custom tools (described in this guide) that work with free-text inputs and outputs. There are … Continue reading Core Concepts to Use OpenAI Tools: Function Calling
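The distinction becomes concrete in a minimal function-tool round trip: a JSON-schema definition, a simulated model tool call, and local dispatch. The tool name and function below are hypothetical, and the model's call is mimicked with a plain dict so no API is involved:

```python
import json

def get_weather(city: str) -> str:
    """The actual Python function behind the tool."""
    return f"Sunny in {city}"

# The function tool as the model sees it: defined entirely by a JSON schema.
weather_tool = {
    "type": "function",
    "name": "get_weather",
    "description": "Get the current weather for a city.",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}

# A simulated tool call, shaped like what the model emits: a name plus
# JSON-encoded arguments.
model_call = {"name": "get_weather", "arguments": json.dumps({"city": "Paris"})}

# Dispatch: look up the function by name and apply the parsed arguments.
registry = {"get_weather": get_weather}
output = registry[model_call["name"]](**json.loads(model_call["arguments"]))
```

This is what separates a function tool from a custom tool: the arguments must parse against the schema, whereas a custom tool exchanges free text.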
Core Concepts to Use OpenAI Tools: Text Generation and Structured Output
Text Generation, Image and Vision, Audio and Speech, Structured Output, Function Calling, Using GPT-5, and Migrating to the Responses API. Text Generation — prompt engineering sample code:

from openai import OpenAI

client = OpenAI()
response = client.responses.create(
    model="gpt-5",
    reasoning={"effort": "low"},
    instructions="Talk like a pirate.",
    # was realized by the input array before; there are roles of developer, user …

… Continue reading Core Concepts to Use OpenAI Tools: Text Generation and Structured Output
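The Structured Output topic listed above pairs naturally with a JSON-schema response format. A sketch of what such a schema could look like — the schema itself and its field names are hypothetical, and the exact request parameter that carries it should be checked against the current API docs:

```python
# A JSON schema constraining the model's reply to a structured review summary.
# In strict mode, Structured Outputs requires additionalProperties: false and
# every property listed under "required".
review_schema = {
    "name": "review_summary",
    "strict": True,
    "schema": {
        "type": "object",
        "properties": {
            "sentiment": {"type": "string", "enum": ["positive", "neutral", "negative"]},
            "pros": {"type": "array", "items": {"type": "string"}},
            "cons": {"type": "array", "items": {"type": "string"}},
        },
        "required": ["sentiment", "pros", "cons"],
        "additionalProperties": False,
    },
}
```

With a schema like this attached to the request, the model's output is guaranteed to parse into the named fields instead of free text.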
OpenAI’s Offering to Assist Engineers to Build Agents
Here is a complete, structured list of the capabilities / "tools" OpenAI currently provides for agents (through ChatGPT and the API), as summarized by OpenAI itself: 1. Reasoning & General Models: GPT models (GPT-4o, GPT-4.1, o3, o1, Codex variants) for text, code, and multimodal reasoning (text, images, audio), used as the "core brain" of agents. 2. Code & … Continue reading OpenAI’s Offering to Assist Engineers to Build Agents
Leveraging OpenAI’s Agents to Improve My Agent
Existing Findings: Rigid flow in SimpleAgent.handle_user_request(): Heavy branching between categorization, correction checks, and confirmation loops forces users into strict yes/no responses and long prompts before running tools. Tool confirmation friction: The confirmation block within SimpleAgent.handle_user_request() expects precise inputs; any deviation reruns prompts rather than gracefully extracting intent. LLM prompt templates: Prompts in SimpleAgent.generate_intelligent_analysis() and SimpleAgent.select_tool_for_request() include verbose, formal instructions lacking conversational tone … Continue reading Leveraging OpenAI’s Agents to Improve My Agent
Existing MCP Tools
Here is a comprehensive analysis of all 19 MCP tools. 📋 Complete MCP Tool Inventory (19 Tools). NSS Tools (4 tools): nss_analyze – news-search analysis with entities, keywords, and date ranges; nss_market_data – add market data to NSS results; nss_composite_score – calculate composite keyword scores; nss_full_pipeline – complete NSS workflow. RBICS … Continue reading Existing MCP Tools
What We’ve Learned From a Year of Building with LLMs by Eugene Yan et al
The report is structured into three levels: Tactical, Operational, and Strategic. Effective LLM development relies on clear prompting strategies, structured design, and robust evaluation. Techniques such as n-shot in-context learning, chain-of-thought reasoning, and well-chosen examples improve model guidance, while schemas, specifications, and metadata ensure consistent inputs and outputs. Breaking tasks into smaller prompts helps models … Continue reading What We’ve Learned From a Year of Building with LLMs by Eugene Yan et al
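The n-shot in-context learning technique mentioned above reduces to prepending worked examples to the query. A minimal sketch of assembling such a prompt — the example pairs and labels are hypothetical:

```python
# Hypothetical labeled examples for an n-shot sentiment prompt.
examples = [
    ("Battery lasts two days.", "positive"),
    ("Screen cracked in a week.", "negative"),
]

def build_nshot_prompt(query: str) -> str:
    # Each worked example shows the model the expected input -> output mapping
    # before it sees the real query.
    shots = "\n".join(f"Review: {text}\nSentiment: {label}" for text, label in examples)
    return f"{shots}\nReview: {query}\nSentiment:"

prompt = build_nshot_prompt("Setup was painless.")
```

The report's related advice about well-chosen examples applies directly here: which pairs go into the examples list usually matters more than how many there are.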