Portkey's AI Gateway supports the function calling capabilities that many foundation model providers offer. In the API call you describe the functions available, and the model can choose to respond with plain text or with the name of a function to call along with its arguments.
Functions Usage
Portkey follows the OpenAI signature for defining functions as part of the API request. The tools parameter accepts function definitions and can be sent to models that support function/tool calling.
import Portkey from 'portkey-ai';

// Initialize the Portkey client
const portkey = new Portkey({
  apiKey: "PORTKEY_API_KEY", // Replace with your Portkey API key
  virtualKey: "VIRTUAL_KEY"  // Add your provider's virtual key
});

// Generate a chat completion with a function/tool definition
async function getChatCompletionFunctions() {
  const messages = [{ "role": "user", "content": "What's the weather like in Boston today?" }];
  const tools = [
    {
      "type": "function",
      "function": {
        "name": "get_current_weather",
        "description": "Get the current weather in a given location",
        "parameters": {
          "type": "object",
          "properties": {
            "location": {
              "type": "string",
              "description": "The city and state, e.g. San Francisco, CA"
            },
            "unit": { "type": "string", "enum": ["celsius", "fahrenheit"] }
          },
          "required": ["location"]
        }
      }
    }
  ];

  const response = await portkey.chat.completions.create({
    model: "gpt-3.5-turbo",
    messages: messages,
    tools: tools,
    tool_choice: "auto",
  });

  console.log(response);
}

await getChatCompletionFunctions();
from portkey_ai import Portkey

# Initialize the Portkey client
portkey = Portkey(
    api_key="PORTKEY_API_KEY",  # Replace with your Portkey API key
    virtual_key="VIRTUAL_KEY"   # Add your provider's virtual key
)

tools = [{
    "type": "function",
    "function": {
        "name": "get_current_weather",
        "description": "Get the current weather in a given location",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {
                    "type": "string",
                    "description": "The city and state, e.g. San Francisco, CA",
                },
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
            },
            "required": ["location"],
        },
    },
}]

messages = [{"role": "user", "content": "What's the weather like in Boston today?"}]

completion = portkey.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=messages,
    tools=tools,
    tool_choice="auto"
)

print(completion)
import OpenAI from 'openai'; // We're using the v4 SDK
import { PORTKEY_GATEWAY_URL, createHeaders } from 'portkey-ai';

const openai = new OpenAI({
  apiKey: 'OPENAI_API_KEY', // defaults to process.env["OPENAI_API_KEY"]
  baseURL: PORTKEY_GATEWAY_URL,
  defaultHeaders: createHeaders({
    provider: "openai",
    apiKey: "PORTKEY_API_KEY" // defaults to process.env["PORTKEY_API_KEY"]
  })
});

// Generate a chat completion with a function/tool definition
async function getChatCompletionFunctions() {
  const messages = [{ "role": "user", "content": "What's the weather like in Boston today?" }];
  const tools = [
    {
      "type": "function",
      "function": {
        "name": "get_current_weather",
        "description": "Get the current weather in a given location",
        "parameters": {
          "type": "object",
          "properties": {
            "location": {
              "type": "string",
              "description": "The city and state, e.g. San Francisco, CA"
            },
            "unit": { "type": "string", "enum": ["celsius", "fahrenheit"] }
          },
          "required": ["location"]
        }
      }
    }
  ];

  const response = await openai.chat.completions.create({
    model: "gpt-3.5-turbo",
    messages: messages,
    tools: tools,
    tool_choice: "auto",
  });

  console.log(response);
}

await getChatCompletionFunctions();
from openai import OpenAI
from portkey_ai import PORTKEY_GATEWAY_URL, createHeaders

openai = OpenAI(
    api_key='OPENAI_API_KEY',
    base_url=PORTKEY_GATEWAY_URL,
    default_headers=createHeaders(
        provider="openai",
        api_key="PORTKEY_API_KEY"
    )
)

tools = [{
    "type": "function",
    "function": {
        "name": "get_current_weather",
        "description": "Get the current weather in a given location",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {
                    "type": "string",
                    "description": "The city and state, e.g. San Francisco, CA",
                },
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
            },
            "required": ["location"],
        },
    },
}]

messages = [{"role": "user", "content": "What's the weather like in Boston today?"}]

completion = openai.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=messages,
    tools=tools,
    tool_choice="auto"
)

print(completion)
curl"https://api.portkey.ai/v1/chat/completions" \-H"Content-Type: application/json" \-H"x-portkey-api-key: $PORTKEY_API_KEY" \-H"x-portkey-provider: openai" \-H"Authorization: Bearer $OPENAI_API_KEY" \-d'{ "model": "gpt-3.5-turbo", "messages": [ { "role": "user", "content": "What is the weather like in Boston?" } ], "tools": [ { "type": "function", "function": { "name": "get_current_weather", "description": "Get the current weather in a given location", "parameters": { "type": "object", "properties": { "location": { "type": "string", "description": "The city and state, e.g. San Francisco, CA" }, "unit": { "type": "string", "enum": ["celsius", "fahrenheit"] } }, "required": ["location"] } } } ], "tool_choice": "auto"}'
On completion, the request is logged in the logs UI, where the tools and functions sent can be inspected. Portkey automatically formats the JSON blocks in the input and output, which makes for a great debugging experience.
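When the model elects to call a function, the response contains a tool_calls array instead of plain text content. The sketch below (Python, using the Portkey SDK client from the examples above) shows one way to parse that response, run the function locally, and send the result back to the model; the get_current_weather implementation is a hypothetical stand-in for your own code.

import json
from portkey_ai import Portkey

portkey = Portkey(
    api_key="PORTKEY_API_KEY",  # Replace with your Portkey API key
    virtual_key="VIRTUAL_KEY"   # Add your provider's virtual key
)

# Hypothetical local implementation of the function described to the model
def get_current_weather(location, unit="fahrenheit"):
    return json.dumps({"location": location, "temperature": "72", "unit": unit})

tools = [{
    "type": "function",
    "function": {
        "name": "get_current_weather",
        "description": "Get the current weather in a given location",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {"type": "string", "description": "The city and state, e.g. San Francisco, CA"},
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
            },
            "required": ["location"],
        },
    },
}]

messages = [{"role": "user", "content": "What's the weather like in Boston today?"}]

response = portkey.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=messages,
    tools=tools,
    tool_choice="auto",
)

message = response.choices[0].message

if message.tool_calls:
    # Echo the assistant's tool call back into the conversation history
    messages.append({
        "role": "assistant",
        "content": message.content,
        "tool_calls": [
            {
                "id": tc.id,
                "type": "function",
                "function": {"name": tc.function.name, "arguments": tc.function.arguments},
            }
            for tc in message.tool_calls
        ],
    })
    # Run each requested function and append its result as a "tool" message
    for tc in message.tool_calls:
        args = json.loads(tc.function.arguments)
        messages.append({
            "role": "tool",
            "tool_call_id": tc.id,
            "content": get_current_weather(**args),
        })
    # Second call: the model turns the tool output into a natural-language answer
    final = portkey.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
    print(final.choices[0].message.content)
else:
    print(message.content)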
Managing Functions and Tools in Prompts
Portkey's Prompt Library supports creating prompt templates with function/tool definitions, and also lets you set the tool_choice parameter. Portkey validates your tool definitions on the fly, eliminating syntax errors. A saved template can then be run from the SDK, as sketched below.
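A minimal sketch of running such a template, assuming a template saved in the Prompt Library with the ID used below and a user_query variable defined in it (both names are placeholders for illustration):

from portkey_ai import Portkey

portkey = Portkey(api_key="PORTKEY_API_KEY")  # Replace with your Portkey API key

# Render and run a saved prompt template; the template carries the tool
# definitions and tool_choice configured in the Prompt Library.
completion = portkey.prompts.completions.create(
    prompt_id="YOUR_PROMPT_ID",  # placeholder template ID
    variables={"user_query": "What's the weather like in Boston today?"}  # placeholder variable
)

print(completion)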
Supported Providers and Models
The following providers are supported for function calling, with more providers being added soon. Please raise a request or a PR to add a model or provider to the AI Gateway.