Teams use Portkey to improve the cost, performance, and accuracy of their Gen AI apps.
Integration takes under 2 minutes. From the first request, Portkey monitors all of your LLM calls while making your app more resilient, secure, performant, and accurate.
Here's a product walkthrough (3 mins):
Integrate in 3 Lines of Code
```python
# pip install portkey-ai
from openai import OpenAI
from portkey_ai import PORTKEY_GATEWAY_URL, createHeaders

client = OpenAI(
    base_url=PORTKEY_GATEWAY_URL,
    default_headers=createHeaders(
        provider="openai",
        api_key="PORTKEY_API_KEY"
    )
)

chat_complete = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Say this is a test"}],
)
print(chat_complete.choices[0].message.content)
```
```javascript
// npm i portkey-ai
import OpenAI from 'openai';
import { PORTKEY_GATEWAY_URL, createHeaders } from 'portkey-ai';

const openai = new OpenAI({
  baseURL: PORTKEY_GATEWAY_URL,
  defaultHeaders: createHeaders({
    provider: "openai",
    apiKey: "PORTKEY_API_KEY"
  })
});

async function main() {
  const chatCompletion = await openai.chat.completions.create({
    messages: [{ role: 'user', content: 'Say this is a test' }],
    model: 'gpt-3.5-turbo',
  });
  console.log(chatCompletion.choices);
}

main();
Languages Supported
AI Providers Supported
Portkey is multimodal by default: along with chat and text models, it also supports audio, vision, and image generation models.
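As a sketch, the same gateway client from the quickstart above can be pointed at an image generation endpoint via the OpenAI SDK's `images.generate` call (the model name here is illustrative, and the request requires valid API keys):

```python
# pip install portkey-ai openai
from openai import OpenAI
from portkey_ai import PORTKEY_GATEWAY_URL, createHeaders

# Same client setup as for chat completions above
client = OpenAI(
    base_url=PORTKEY_GATEWAY_URL,
    default_headers=createHeaders(
        provider="openai",
        api_key="PORTKEY_API_KEY",  # your Portkey API key
    ),
)

# Image generation is routed through the same gateway;
# "dall-e-3" is an illustrative model name.
image = client.images.generate(
    model="dall-e-3",
    prompt="A watercolor painting of a fox",
)
print(image.data[0].url)
```

Because the request flows through the gateway, it is logged and monitored like any chat or text request.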