Cerebras

Portkey provides a robust and secure gateway for integrating various Large Language Models (LLMs) into your applications, including models hosted on the Cerebras Inference API.

Portkey SDK Integration with Cerebras

Portkey provides a consistent API to interact with models from various providers. To integrate Cerebras with Portkey:

1. Install the Portkey SDK

npm install --save portkey-ai

2. Initialize Portkey with Cerebras

  • Authorization: Pass your Cerebras API key, prefixed with "Bearer ", via the Authorization param.

  • custom_host: Set the target Cerebras API URL to https://api.cerebras.ai/v1.

  • provider: Since Cerebras follows the OpenAI schema, set the reference provider as openai.

import Portkey from 'portkey-ai'
 
const portkey = new Portkey({
    apiKey: "PORTKEY_API_KEY", // defaults to process.env["PORTKEY_API_KEY"]
    provider: "openai", // Cerebras follows the OpenAI schema
    customHost: "https://api.cerebras.ai/v1", // Cerebras Inference API base URL
    Authorization: "Bearer CEREBRAS_API_KEY" // note the "Bearer " prefix
})

3. Invoke Chat Completions

const chatCompletion = await portkey.chat.completions.create({
    messages: [{ role: 'user', content: 'Say this is a test' }],
    model: 'llama3.1-8b',
});

console.log(chatCompletion.choices);
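Because Cerebras follows the OpenAI schema, streaming works through the same endpoint. Below is a minimal sketch (assuming the client initialized in step 2 and the OpenAI-compatible `stream: true` option) that prints tokens as they arrive:

```javascript
// Streaming sketch: pass stream: true and iterate the returned chunks.
// Assumes the same Portkey client configuration shown above.
import Portkey from 'portkey-ai'

const portkey = new Portkey({
    apiKey: "PORTKEY_API_KEY",
    provider: "openai",
    customHost: "https://api.cerebras.ai/v1",
    Authorization: "Bearer CEREBRAS_API_KEY"
})

const stream = await portkey.chat.completions.create({
    messages: [{ role: 'user', content: 'Say this is a test' }],
    model: 'llama3.1-8b',
    stream: true,
});

// Each chunk carries an incremental delta rather than a full message.
for await (const chunk of stream) {
    process.stdout.write(chunk.choices[0]?.delta?.content || '');
}
```

Running this requires valid Portkey and Cerebras API keys; the request shape is otherwise identical to the non-streaming call in step 3.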

Supported Models

Cerebras currently supports Llama-3.1-8B and Llama-3.1-70B. You can find more info in the Cerebras documentation:

Next Steps

The complete list of features supported in the SDK is available at the link below.

SDK

