Portkey SDK is the best way to interact with Portkey and bring your LLMs to production.
pip install portkey-ai

4-Step Guide

Step 1️⃣ : Get your Portkey API Key and your Virtual Keys for AI providers

Portkey API Key: Log into Portkey here, then click the profile icon on the top left and select "Copy API Key". You can store it as an environment variable:
import os
os.environ["PORTKEY_API_KEY"] = "<YOUR_PORTKEY_API_KEY>"
Virtual Keys: Navigate to the "Virtual Keys" page on Portkey and hit the "Add Key" button. Choose your AI provider and assign a unique name to your key. Your virtual key is ready!

Step 2️⃣ : Construct your LLM, add Portkey features, provider features, and prompt

Portkey Features: You can find a comprehensive list of Portkey features here. This includes settings for caching, retries, metadata, and more.
Provider Features: Portkey is designed to be flexible. All the features you're familiar with from your LLM provider, like top_p, top_k, and temperature, can be used seamlessly. Check out the complete list of provider features here.
Setting the Prompt Input: You can set the input in two ways. Completion models like Claude and GPT-3 take prompt = "..." (a string), while chat models like GPT-3.5 & GPT-4 take messages = [...] (an array of message objects).
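As a plain-data illustration of the two input shapes (the model pairings above decide which one you use; the question text here is just an example):

```python
# Completion-style input: a single prompt string (e.g. for Claude or GPT-3)
prompt = "Who are you?"

# Chat-style input: a list of role/content message objects (e.g. for GPT-3.5 / GPT-4)
messages = [{"role": "user", "content": "Who are you?"}]
```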
Here's how you can combine everything:
from portkey import LLMOptions
# Portkey Config
provider = "openai"
virtual_key = "key_a"
trace_id = "portkey_sdk_test"
# Model Settings
model = "gpt-4"
temperature = 1
# User Prompt
messages = [{"role": "user", "content": "Who are you?"}]
# Construct LLM
llm = LLMOptions(
    provider=provider,
    virtual_key=virtual_key,
    trace_id=trace_id,
    model=model,
    temperature=temperature,
    messages=messages
)

Step 3️⃣ : Construct the Portkey Client

Portkey client's config takes 3 params: api_key, mode, llms.
  • api_key: You can set your Portkey API key here or with os.environ as done above.
  • mode: There are 3 modes - Single, Fallback, Loadbalance.
    • Single - This is the standard mode. Use it if you do not want Fallback OR Loadbalance features.
    • Fallback - Set this mode if you want to enable the Fallback feature.
    • Loadbalance - Set this mode if you want to enable the Loadbalance feature.
  • llms: This is an array where we pass our LLMs constructed using the LLMOptions constructor.
import portkey
from portkey import Config
portkey.config = Config(mode="single", llms=[llm])
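The other two modes follow the same pattern but take multiple LLMs. A minimal sketch, assuming the mode strings are lowercase as in the "single" example above (llm_b is hypothetical; construct it with LLMOptions just like llm):

```python
from portkey import Config

# Fallback: tries llms in order, moving to the next one on failure
portkey.config = Config(mode="fallback", llms=[llm, llm_b])

# Loadbalance: distributes requests across the llms
portkey.config = Config(mode="loadbalance", llms=[llm, llm_b])
```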

Step 4️⃣ : Let's Call the Portkey Client!

The Portkey client can do ChatCompletions and Completions.
Since our LLM is GPT-4, we will use ChatCompletions:
response = portkey.ChatCompletions.create()
You have integrated Portkey's Python SDK in just 4 steps!
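For completion models, the flow is the same but uses Completions with a prompt-style LLM. A hedged sketch (the model name and prompt here are illustrative, not prescribed by Portkey):

```python
import portkey
from portkey import Config, LLMOptions

# Hypothetical completion-style setup: prompt=... instead of messages=...
llm = LLMOptions(
    provider="openai",
    virtual_key="key_a",
    model="text-davinci-003",
    prompt="Who are you?",
)
portkey.config = Config(mode="single", llms=[llm])
response = portkey.Completions.create()
```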

📔 Full List of Portkey Config

Config Key                  Value                      Required
Provider Name                                          ✅ Required
Model Name                                             ✅ Required
Virtual Key OR API Key      virtual_key or api_key     ✅ Required (can be set externally)
Cache Type                  simple, semantic           ❔ Optional
Force Cache Refresh         True, False (Boolean)      ❔ Optional
Cache Age                   integer (in seconds)       ❔ Optional
Trace ID                                               ❔ Optional
Retry Count                 integer [0,5]              ❔ Optional
Metadata                    json object                ❔ Optional