OpenAI introduces function calling
Credibility Rating
4/5
High (4): High quality. Established institution or organization with editorial oversight and accountability.
Rating inherited from publication venue: OpenAI
This announcement is relevant to AI safety as function calling is a core capability enabling agentic LLM systems that autonomously interact with external tools, raising questions about oversight, reliability, and safe deployment of AI agents.
Metadata
Importance: 55/100 · blog post · news
Summary
OpenAI announced function calling capabilities for GPT-4 and GPT-3.5-turbo, enabling models to intelligently output structured JSON to call external tools and APIs. The update also included longer context windows, cost reductions, and more steerable model versions, marking a significant expansion of LLM integration into agentic workflows.
Key Points
- Function calling allows GPT models to detect when to call a function and return structured JSON arguments, enabling reliable external tool integration.
- Models fine-tuned for function calling: gpt-4-0613 and gpt-3.5-turbo-0613, supporting chatbots, API connectors, and structured data extraction.
- New 16k-context gpt-3.5-turbo variant released alongside a 75% cost reduction on embeddings and a 25% reduction on gpt-3.5-turbo input tokens.
- API users can pin specific model versions for stability, as new versions may improve most but not all evaluation metrics.
- The announcement is foundational for agentic AI systems, where models autonomously invoke external tools based on user intent.
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| Tool Use and Computer Use | Capability | 67.0 |
Cached Content Preview
HTTP 200 · Fetched Mar 20, 2026 · 10 KB
OpenAI
June 13, 2023
[Product](https://openai.com/news/product-releases/)
# Function calling and other API updates
We’re announcing updates including more steerable API models, function calling capabilities, longer context, and lower prices.

_July 20, 2023 update:_
We previously communicated to developers that `gpt-3.5-turbo-0301`, `gpt-4-0314` and `gpt-4-32k-0314` models were scheduled for sunset on Sept 13, 2023. After reviewing feedback from customers and our community, we are extending support for those models until at least June 13, 2024.
When we release new model versions, our top priority is to make newer models smarter across the board. We are targeting improvements on a large number of axes, such as instruction following, factual accuracy, and refusal behavior. For instance, the `gpt-4-0613` model introduced last month resulted in significant improvement on calling functions.
We look at a large number of evaluation metrics to determine if a new model should be released. While the majority of metrics have improved, there may be some tasks where the performance gets worse. This is why we allow API users to pin the model version. For example, you can use `gpt-4-0314` instead of the generic `gpt-4`, which points to the latest model version. Each individually pinned model is stable, meaning that we won’t make changes that impact the outputs.
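Pinning works by naming a dated snapshot instead of the floating alias. A minimal sketch of the two request bodies (the dicts here are illustrative; the `model` and `messages` field names match the Chat Completions API):

```python
# A floating model name tracks whatever OpenAI currently serves as "gpt-4";
# a dated name is a frozen snapshot whose outputs will not change underneath you.
messages = [{"role": "user", "content": "Hello"}]

floating = {"model": "gpt-4", "messages": messages}     # points to the latest gpt-4 version
pinned = {"model": "gpt-4-0314", "messages": messages}  # pinned snapshot, stable outputs
```

Pinning trades automatic improvements for reproducibility, which is why the post pairs it with a deprecation timeline rather than keeping snapshots forever.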
We are working hard to ensure that new versions result in improvements across a comprehensive range of tasks. That said, our evaluation methodology isn’t perfect, and we’re constantly improving it. One way to help us ensure new models get better at domains you care about, is to contribute to [the OpenAI Evals library(opens in a new window)](https://github.com/openai/evals/tree/main) to report shortcomings in our models.
We understand that model upgrades and behavior changes can be disruptive to your applications. We are working on ways to give developers more stability and visibility into how we release and deprecate models.
* * *
We released `gpt-3.5-turbo` and `gpt-4` earlier this year, and in only a short few months, have seen [incredible applications](https://openai.com/stories/) built by developers on top of these models.
Today, we’re following up with some exciting updates:
- new function calling capability in the Chat Completions API
- updated and more steerable versions of `gpt-4` and `gpt-3.5-turbo`
- new 16k context version of `gpt-3.5-turbo` (vs the standard 4k version)
- 75% cost reduction on our state-of-the-art embeddings model
- 25% cost reduction on input tokens for `gpt-3.5-turbo`
- announcing the deprecation timeline for the `gpt-3.5-turbo-0301` and `gpt-4-0314` models
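The function calling item above can be sketched without a network call. The weather function below is the standard illustrative example (the function name and fields are assumptions, not part of this announcement); the schema goes in the API's `functions` parameter, and when the model decides to call it, the assistant message carries a `function_call` with JSON-encoded arguments instead of text:

```python
import json

# A function schema in the JSON Schema format the `functions` parameter expects.
# (Hypothetical example function; names are illustrative.)
get_weather_schema = {
    "name": "get_current_weather",
    "description": "Get the current weather in a given location",
    "parameters": {
        "type": "object",
        "properties": {
            "location": {"type": "string", "description": "City name, e.g. Boston"},
            "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
        },
        "required": ["location"],
    },
}

# Request body for the Chat Completions API; the model chooses whether to call.
request = {
    "model": "gpt-3.5-turbo-0613",
    "messages": [{"role": "user", "content": "What's the weather in Boston?"}],
    "functions": [get_weather_schema],
}

# Simulated shape of the response message when the model elects to call the
# function: `content` is null and `function_call.arguments` is a JSON string
# your code parses and dispatches to the real tool.
assistant_message = {
    "role": "assistant",
    "content": None,
    "function_call": {
        "name": "get_current_weather",
        "arguments": '{"location": "Boston", "unit": "fahrenheit"}',
    },
}

args = json.loads(assistant_message["function_call"]["arguments"])
print(args["location"])  # Boston
```

After executing the tool, the caller appends a `function` role message with the result and asks the model to continue, which is the loop that makes agentic tool use possible.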
All of these models come with the same data privacy and security guarantees
... (truncated, 10 KB total)
Resource ID: 461efab2a94bf7c5 | Stable ID: NmEzODg3Ym