Cerebras

Portkey provides a robust and secure gateway for integrating a wide range of Large Language Models (LLMs) into your applications, including the models hosted on the Cerebras Inference API.

Provider Slug: cerebras

Portkey SDK Integration with Cerebras

Portkey provides a consistent API to interact with models from various providers. To integrate Cerebras with Portkey:

1. Install the Portkey SDK

npm install --save portkey-ai

2. Initialize Portkey with Cerebras Virtual Key

To use Cerebras Inference with Portkey, obtain your Cerebras API key, then add it to Portkey to create a virtual key.

import Portkey from 'portkey-ai'
 
const portkey = new Portkey({
    apiKey: "PORTKEY_API_KEY", // defaults to process.env["PORTKEY_API_KEY"]
    virtualKey: "CEREBRAS_VIRTUAL_KEY" // Your Cerebras Inference virtual key
})

3. Invoke Chat Completions

const chatCompletion = await portkey.chat.completions.create({
    messages: [{ role: 'user', content: 'Say this is a test' }],
    model: 'llama3.1-8b',
});

console.log(chatCompletion.choices);
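Portkey returns responses in the OpenAI-compatible chat-completion format, so the assistant's reply text can be read from choices[0].message.content. As an illustration (the sampleResponse object below is hand-written, not real API output):

```typescript
// Sketch of the OpenAI-compatible response shape returned by
// portkey.chat.completions.create. This object is illustrative only.
const sampleResponse = {
    model: 'llama3.1-8b',
    choices: [
        {
            index: 0,
            message: { role: 'assistant', content: 'This is a test.' },
            finish_reason: 'stop',
        },
    ],
};

// Extract the assistant's reply from the first choice
const reply = sampleResponse.choices[0].message.content;
console.log(reply); // "This is a test."
```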

Supported Models

Cerebras currently supports Llama 3.1 8B (llama3.1-8b) and Llama 3.1 70B (llama3.1-70b). See the Cerebras documentation for the latest model list.
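Since both models share the same OpenAI-compatible interface, switching between them is just a matter of changing the model string. A minimal sketch (the pickModel helper is illustrative, not part of the Portkey SDK):

```typescript
// The two model ids currently offered by Cerebras Inference
type CerebrasModel = 'llama3.1-8b' | 'llama3.1-70b';

// Hypothetical helper: choose the larger model when quality matters more
// than latency or cost, otherwise default to the smaller one.
function pickModel(preferQuality: boolean): CerebrasModel {
    return preferQuality ? 'llama3.1-70b' : 'llama3.1-8b';
}

console.log(pickModel(false)); // "llama3.1-8b"
```

The returned id can then be passed directly as the model parameter of portkey.chat.completions.create.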

Next Steps

The complete list of features supported in the SDK is available in the Portkey SDK documentation.
