Function calling and tool calling are often used interchangeably, but it helps to distinguish function tools, custom tools, and built-in tools. A function is a specific kind of tool, defined by a JSON schema. In addition to function tools, there are custom tools (described in this guide) that work with free-text inputs and outputs. There are also built-in tools that are part of the OpenAI platform; these enable the model to search the web, execute code, access the functionality of an MCP server, and more.
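As a rough sketch of how the three kinds differ, the declarations below show their general shape. Treat the exact type names and fields as illustrative assumptions rather than verbatim API, since they vary by tool and API version:

```python
# Illustrative tool declarations (shapes are assumptions, not verbatim API):
function_tool = {
    "type": "function",  # a function tool: inputs defined by a JSON schema
    "name": "get_weather",
    "parameters": {"type": "object", "properties": {"city": {"type": "string"}}},
}

custom_tool = {
    "type": "custom",  # a custom tool: free-text input and output
    "name": "run_query",
    "description": "Runs a raw SQL query passed as plain text.",
}

builtin_tool = {"type": "web_search"}  # a built-in tool hosted on the platform
```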
The end-to-end flow below illustrates function calling with a `get_horoscope` function that returns a daily horoscope for an astrological sign.
```python
from openai import OpenAI
import json

client = OpenAI()

# 1. Define a list of callable tools for the model
tools = [
    {
        "type": "function",
        "name": "get_horoscope",
        "description": "Get today's horoscope for an astrological sign.",
        "parameters": {
            "type": "object",
            "properties": {
                "sign": {
                    "type": "string",
                    "description": "An astrological sign like Taurus or Aquarius",
                },
            },
            "required": ["sign"],
        },
    },
]

def get_horoscope(sign):
    return f"{sign}: Next Tuesday you will befriend a baby otter."

# Create a running input list we will add to over time
input_list = [
    {"role": "user", "content": "What is my horoscope? I am an Aquarius."}
]

# 2. Prompt the model with tools defined
response = client.responses.create(
    model="gpt-5",
    tools=tools,
    input=input_list,
)

# Save function call outputs for subsequent requests
input_list += response.output

for item in response.output:
    if item.type == "function_call":
        if item.name == "get_horoscope":
            # 3. Execute the function logic for get_horoscope
            # (item.arguments is a JSON string; parse it and pass the sign)
            arguments = json.loads(item.arguments)
            horoscope = get_horoscope(arguments["sign"])

            # 4. Provide function call results to the model
            input_list.append({
                "type": "function_call_output",
                "call_id": item.call_id,
                "output": json.dumps({
                    "horoscope": horoscope
                })
            })

print("Final input:")
print(input_list)

response = client.responses.create(
    model="gpt-5",
    instructions="Respond only with a horoscope generated by a tool.",
    tools=tools,
    input=input_list,
)

# 5. The model should be able to give a response!
print("Final output:")
print(response.model_dump_json(indent=2))
print("\n" + response.output_text)
```
A function definition has the following properties:
| Field | Description |
|---|---|
| `type` | This should always be `function` |
| `name` | The function's name (e.g. `get_weather`) |
| `description` | Details on when and how to use the function |
| `parameters` | JSON schema defining the function's input arguments |
| `strict` | Whether to enforce strict mode for the function call |
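With strict mode, the model's arguments are guaranteed to match the schema, which (as with structured outputs) requires every property to appear in `required` and `additionalProperties` to be `false`. A sketch of the `get_horoscope` definition with strict mode enabled:

```python
# Sketch of a strict-mode function definition. Strict mode assumes every
# property is listed in "required" and extra keys are disallowed.
strict_tool = {
    "type": "function",
    "name": "get_horoscope",
    "description": "Get today's horoscope for an astrological sign.",
    "strict": True,  # model output must match the schema exactly
    "parameters": {
        "type": "object",
        "properties": {
            "sign": {
                "type": "string",
                "description": "An astrological sign like Taurus or Aquarius",
            },
        },
        "required": ["sign"],           # strict mode: all properties required
        "additionalProperties": False,  # strict mode: no extra keys allowed
    },
}
```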
Streaming can be used to surface progress by showing which function is called as the model fills its arguments, and even displaying the arguments in real time. Instead of waiting for the full JSON, the assistant streams partial arguments as it forms them:
```text
{ "name": "get_weather", "arguments": { "city": "N
{ "name": "get_weather", "arguments": { "city": "New
{ "name": "get_weather", "arguments": { "city": "New York" }
```
```python
# Request a streamed completion with the same tools defined
stream = client.chat.completions.create(
    model="gpt-4.1",
    messages=[{"role": "user", "content": "What's the weather like in Paris today?"}],
    tools=tools,
    stream=True,
)
```
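Because the arguments arrive as partial-JSON fragments, the client concatenates each delta into a buffer and parses it only once the JSON is complete. A minimal sketch of that accumulation, using simulated fragments in place of real stream chunks:

```python
import json

# Simulated argument fragments, standing in for the partial-JSON deltas
# a real stream would deliver for one tool call:
fragments = ['{ "city"', ': "New', ' York" }']

buffer = ""
args = None
for fragment in fragments:
    buffer += fragment  # append each partial-JSON delta as it arrives
    try:
        # parsing succeeds only once the accumulated JSON is complete
        args = json.loads(buffer)
    except json.JSONDecodeError:
        pass  # arguments still incomplete; keep accumulating

print(args)  # the fully assembled arguments dict
```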
When migrating to GPT-5, note that sampling parameters such as `temperature`, `top_p`, and `logprobs` are no longer supported. Migrating from Chat Completions to the Responses API:
The Responses API is a unified interface for building powerful, agent-like applications. It contains:
- Built-in tools like web search, file search, computer use, code interpreter, and remote MCPs.
- Seamless multi-turn interactions that allow you to pass previous responses for higher accuracy reasoning results.
- Native multimodal support for text and images.
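As a sketch of the request-shape change (the payloads below are plain dicts rather than live calls), migrating mainly means renaming `messages` to `input`, moving system-style guidance to a top-level `instructions` field, and reading `output_text` from the response instead of `choices[0].message.content`:

```python
# Chat Completions-style request payload:
chat_request = {
    "model": "gpt-5",
    "messages": [
        {"role": "user", "content": "What is my horoscope? I am an Aquarius."}
    ],
}

# Equivalent Responses API payload: `messages` becomes `input`,
# and instructions move to a dedicated top-level field.
responses_request = {
    "model": "gpt-5",
    "instructions": "Respond only with a horoscope generated by a tool.",
    "input": [
        {"role": "user", "content": "What is my horoscope? I am an Aquarius."}
    ],
}
```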