Function calling is now available in Azure OpenAI Service, giving the latest 0613 versions of gpt-35-turbo and gpt-4 the ability to produce structured JSON outputs based on functions you describe in the request. This provides a native way for these models to formulate API calls and structure data outputs. Note that while the models can generate these calls, it's up to you to execute them, so you remain in control.
The latest versions of gpt-35-turbo and gpt-4 have been fine-tuned to work with functions. If one or more functions are specified in the request, the model determines whether any of them should be called based on the context of the prompt. When it decides a function should be called, it responds with a JSON object containing the arguments for that function.
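To make this concrete, here is a sketch of what a function definition in a chat completions request body might look like. The function name `get_current_weather` and its parameters are illustrative examples, not part of the service; the `parameters` field uses JSON Schema to describe the arguments the model may generate.

```python
import json

# Illustrative function definition; name and parameters are hypothetical.
weather_function = {
    "name": "get_current_weather",
    "description": "Get the current weather for a given location",
    "parameters": {
        "type": "object",
        "properties": {
            "location": {
                "type": "string",
                "description": "City and state, e.g. Seattle, WA",
            },
            "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
        },
        "required": ["location"],
    },
}

# The functions list travels alongside the messages in the request body.
request_body = {
    "messages": [
        {"role": "user", "content": "What's the weather in Seattle?"}
    ],
    "functions": [weather_function],
    "function_call": "auto",  # let the model decide whether to call a function
}

print(json.dumps(request_body, indent=2))
```

If the model decides the prompt warrants it, its reply will include a `function_call` object with the function name and a JSON string of arguments matching this schema.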
At a high level, working with functions breaks down into three steps:
Step #1: Call the chat completions API with your functions and the user's input
Step #2: Use the model's response to call your API or function
Step #3: Call the chat completions API again, including the response from your function, to get a final response
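The three steps above can be sketched as follows. To keep the example self-contained and runnable, the model's step-1 reply is simulated as a literal dict; in practice it would come back from the chat completions API, and the function name and result shown here are hypothetical.

```python
import json

def get_current_weather(location, unit="celsius"):
    # Hypothetical local function the model can ask us to call.
    return {"location": location, "temperature": 22, "unit": unit}

# Only dispatch to functions you explicitly allow.
AVAILABLE_FUNCTIONS = {"get_current_weather": get_current_weather}

# Step 1 (simulated): the model replies with a function_call instead of text.
assistant_message = {
    "role": "assistant",
    "content": None,
    "function_call": {
        "name": "get_current_weather",
        "arguments": json.dumps({"location": "Seattle, WA"}),
    },
}

# Step 2: it is up to your code to execute the call the model described.
call = assistant_message["function_call"]
func = AVAILABLE_FUNCTIONS[call["name"]]
args = json.loads(call["arguments"])  # arguments arrive as a JSON string
result = func(**args)

# Step 3: append the result as a "function" role message and call the
# chat completions API again so the model can write the final answer.
followup_messages = [
    assistant_message,
    {"role": "function", "name": call["name"], "content": json.dumps(result)},
]
print(followup_messages[-1]["content"])
```

Because the model only describes the call, validating the function name against an allow-list (as above) and sanity-checking the generated arguments before executing anything is a sensible safety measure.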
Check out the blog post here for examples of different scenarios where function calling could be helpful and how to use functions safely and securely.
You can also check out our samples to try out an end-to-end example of function calling.
You can get started with function calling today if you have access to the Azure OpenAI Service.
Here are some steps to help you get up and running:
Make sure you’re using the latest model version for gpt-35-turbo and gpt-4 (see this docs page on how to deploy new model versions)
Apply now for access to Azure OpenAI Service, if you don't already have it.
Review the new documentation on function calling.