🚀
Getting Started
Discover the ease and simplicity of integrating Portkey.

Integrate Portkey's API within the client SDKs of OpenAI, Anthropic, etc., or invoke it through a direct cURL request. If you want to quickly experiment with Portkey and explore its value and capabilities, this is the fastest way to get started!
OpenAI (Python)
Anthropic (Python)
cURL
from openai import OpenAI

client = OpenAI(
    api_key="OPENAI_API_KEY",  # defaults to os.environ.get("OPENAI_API_KEY")
    base_url="https://api.portkey.ai/v1/proxy",
    default_headers={
        "x-portkey-api-key": "PORTKEY_API_KEY",
        "x-portkey-mode": "proxy openai",
        "Content-Type": "application/json"
    }
)

chat_complete = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Say this is a test"}],
)

print(chat_complete.choices[0].message.content)
from anthropic import Anthropic, HUMAN_PROMPT, AI_PROMPT

anthropic = Anthropic(
    api_key="ANTHROPIC_API_KEY",
    base_url="https://api.portkey.ai/v1/proxy",
    default_headers={
        "x-portkey-api-key": "PORTKEY_API_KEY",
        "x-portkey-mode": "proxy anthropic",
    }
)

r = anthropic.completions.create(
    model="claude-2",
    max_tokens_to_sample=300,
    prompt=f"{HUMAN_PROMPT} how does a court case get to the Supreme Court? {AI_PROMPT}",
)

print(r.completion)
Portkey supports Mistral & Llama 2 models through Anyscale endpoints. Here's an example call:
curl 'https://api.portkey.ai/v1/chatComplete' \
  -H 'x-portkey-api-key: PORTKEY_API_KEY' \
  -H 'Content-Type: application/json' \
  -d '{
    "config": {
      "provider": "anyscale",
      "api_key": "ANYSCALE_API_KEY"
    },
    "params": {
      "messages": [{"role": "user", "content": "What are the ten tallest buildings in India?"}],
      "model": "mistralai/Mistral-7B-Instruct-v0.1"
    }
  }'
For OpenAI:
curl 'https://api.portkey.ai/v1/chatComplete' \
  -H 'x-portkey-api-key: PORTKEY_API_KEY' \
  -H 'Content-Type: application/json' \
  -d '{
    "config": {
      "provider": "openai",
      "api_key": "OPENAI_API_KEY"
    },
    "params": {
      "messages": [{"role": "user", "content": "What are the ten tallest buildings in India?"}],
      "model": "gpt-4"
    }
  }'
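The same `/v1/chatComplete` call can be issued from Python using only the standard library. This sketch simply assembles the request shown in the cURL examples above; the endpoint URL, header names, and payload shape are taken from those examples, while the helper name and its parameters are our own for illustration:

```python
import json
import urllib.request

def build_chat_complete_request(provider, provider_api_key, portkey_api_key,
                                model, messages):
    """Assemble a POST request mirroring the cURL examples above."""
    body = {
        "config": {"provider": provider, "api_key": provider_api_key},
        "params": {"messages": messages, "model": model},
    }
    return urllib.request.Request(
        "https://api.portkey.ai/v1/chatComplete",
        data=json.dumps(body).encode("utf-8"),
        headers={
            "x-portkey-api-key": portkey_api_key,
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_chat_complete_request(
    "openai", "OPENAI_API_KEY", "PORTKEY_API_KEY",
    "gpt-4",
    [{"role": "user", "content": "What are the ten tallest buildings in India?"}],
)
# To actually send it (requires valid keys):
# response = urllib.request.urlopen(req)
```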
The best way to interact with Portkey and bring your LLMs to production. Use the same params you use for your LLM calls with OpenAI, Anthropic, etc., and make them interoperable while adding production features like fallbacks, load balancing, A/B tests, caching, and more.
Python
Node
# pip install -U portkey-ai
import portkey
from portkey import Config, LLMOptions

# Construct the Portkey Config: a single OpenAI LLM behind Portkey
portkey.config = Config(
    api_key="PORTKEY_API_KEY",
    mode="single",
    llms=LLMOptions(provider="openai", api_key="YOUR_OPENAI_API_KEY")
)

r = portkey.ChatCompletions.create(
    model="gpt-4",
    messages=[
        {"role": "user", "content": "What is the meaning of life, universe and everything?"}
    ]
)
// npm i portkey-ai
import { Portkey } from "portkey-ai";

const portkey = new Portkey({
  api_key: "PORTKEY_API_KEY",
  mode: "single",
  llms: [{ provider: "openai", virtual_key: "open-ai-xxx" }]
});

async function main() {
  const r = await portkey.chat.completions.create({
    model: "gpt-4",
    messages: [{ role: "user", content: "Say this is a test" }]
  });
  console.log(r.choices[0].message.content);
}

main();
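Modes other than "single" are what unlock the production features mentioned above. As a rough sketch of what a fallback setup looks like, here is the configuration expressed as a plain dict; the field names mirror the Config/LLMOptions arguments in the Python example, but the multi-LLM list and the per-LLM "model" field are our assumptions — check the SDK reference for the exact signature:

```python
# Sketch only: a fallback configuration that tries providers in order
# until one succeeds. Field names beyond mode="single" are assumptions
# based on the Config/LLMOptions fields shown above.
fallback_config = {
    "api_key": "PORTKEY_API_KEY",
    "mode": "fallback",  # fall through the llms list on failure
    "llms": [
        {"provider": "openai", "api_key": "OPENAI_API_KEY", "model": "gpt-4"},
        {"provider": "anthropic", "api_key": "ANTHROPIC_API_KEY", "model": "claude-2"},
    ],
}
```

The same list-of-LLMs shape would apply to load balancing, with weights instead of an ordering.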
Portkey offers deep, user-friendly integrations with platforms like LangChain and LlamaIndex, allowing you to use Portkey's capabilities within these environments effortlessly.