Python
The Portkey SDK is the best way to interact with Portkey and bring your LLMs to production.

```bash
pip install portkey-ai
```
Portkey API Key: Log into Portkey here, then click on the profile icon in the top left and select "Copy API Key".
```python
import os
os.environ["PORTKEY_API_KEY"] = "PORTKEY_API_KEY"
```
Virtual Keys: Navigate to the "Virtual Keys" page on Portkey and hit the "Add Key" button. Choose your AI provider and assign a unique name to your key. Your virtual key is ready!
Portkey Features: You can find a comprehensive list of Portkey features here. This includes settings for caching, retries, metadata, and more.
Provider Features: Portkey is designed to be flexible. All the features you're familiar with from your LLM provider, like `top_p`, `top_k`, and `temperature`, can be used seamlessly. Check out the complete list of provider features here.

Setting the Prompt Input: You can set the input in two ways. For models like Claude and GPT-3, use `prompt` (a string), and for models like GPT-3.5 & GPT-4, use `messages` (an array). A `prompt`-style sketch follows the example below.

Here's how you can combine everything:
```python
from portkey import LLMOptions

# Portkey Config
provider = "openai"
virtual_key = "key_a"
trace_id = "portkey_sdk_test"

# Model Settings
model = "gpt-4"
temperature = 1

# User Prompt
messages = [{"role": "user", "content": "Who are you?"}]

# Construct LLM
llm = LLMOptions(provider=provider, virtual_key=virtual_key, trace_id=trace_id, model=model, temperature=temperature, messages=messages)
```
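If you are working with a completion model instead, the only change is the input field: pass `prompt` rather than `messages`. Here is a minimal sketch, assuming `text-davinci-003` as an illustrative completion model:

```python
# Sketch for a completion-style model: pass `prompt` (a string) instead of `messages`.
# The model name here is illustrative.
llm_completion = LLMOptions(
    provider="openai",
    virtual_key="key_a",
    trace_id="portkey_sdk_test",
    model="text-davinci-003",
    temperature=1,
    prompt="Who are you?"
)
```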
The Portkey client's config takes 3 params: `api_key`, `mode`, `llms`.
- `api_key`: You can set your Portkey API key here or with `os.environ` as done above.
- `mode`: There are 3 modes - Single, Fallback, Loadbalance.
  - Single - This is the standard mode. Use it if you do not want the Fallback OR Loadbalance features.
  - Fallback - Set this mode if you want to enable the Fallback feature (a sketch follows the code below).
  - Loadbalance - Set this mode if you want to enable the Loadbalance feature.
- `llms`: This is an array where we pass our LLMs constructed using the `LLMOptions` constructor.
```python
import portkey
from portkey import Config

portkey.config = Config(mode="single", llms=[llm])
```
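For Fallback, the same pattern applies, except you pass more than one LLM and change the mode. A minimal sketch, assuming `llm_primary` and `llm_backup` are hypothetical `LLMOptions` objects built exactly like `llm` above:

```python
# Sketch: fallback mode — Portkey tries llm_primary first, then falls back to llm_backup.
# llm_primary and llm_backup are hypothetical LLMOptions objects constructed as shown earlier.
portkey.config = Config(mode="fallback", llms=[llm_primary, llm_backup])
```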
The Portkey client can do `ChatCompletions` and `Completions`. Since our LLM is GPT-4, we will use `ChatCompletions`:
```python
response = portkey.ChatCompletions.create()
print(response.choices[0].message)
```
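If your LLM were configured with `prompt` (as in the completion-style sketch above), you would call `Completions` instead. A sketch following the same pattern; the exact response shape is an assumption based on standard completion responses:

```python
# Sketch: completion-style call using the same Portkey config pattern.
response = portkey.Completions.create()
print(response.choices[0].text)
```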
You have integrated Portkey's Python SDK in just 4 steps!
| Feature | Config Key | Value (Type) | Required |
| --- | --- | --- | --- |
| Provider Name | `provider` | string | ✅ Required |
| Model Name | `model` | string | ✅ Required |
| Virtual Key OR API Key | `virtual_key` or `api_key` | string | ✅ Required (can be set externally) |
| Cache Type | `cache_status` | `simple`, `semantic` | ❔ Optional |
| Force Cache Refresh | `cache_force_refresh` | `True`, `False` (Boolean) | ❔ Optional |
| Cache Age | `cache_age` | integer (in seconds) | ❔ Optional |
| Trace ID | `trace_id` | string | ❔ Optional |
| Retries | `retry` | integer [0,5] | ❔ Optional |
| Metadata | `metadata` | | ❔ Optional |
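The optional settings in the table above are passed the same way as the required ones. A sketch combining several of them, with illustrative values for the cache, retry, and metadata fields:

```python
# Sketch: LLMOptions with optional Portkey features from the table above.
# The cache, retry, and metadata values are illustrative.
llm_with_features = LLMOptions(
    provider="openai",
    virtual_key="key_a",
    model="gpt-4",
    messages=[{"role": "user", "content": "Who are you?"}],
    cache_status="semantic",   # semantic caching
    cache_age=600,             # cache age in seconds
    retry=2,                   # retries, integer in [0,5]
    trace_id="portkey_sdk_test",
    metadata={"_user": "demo-user"}  # illustrative metadata
)
```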