Instructor is a framework for extracting structured outputs from LLMs, available in Python & JS.
With Portkey, you can confidently take your Instructor pipelines to production: get complete observability over all of your calls and make them reliable, all with a 2-line code change!
```ts
import Instructor from "@instructor-ai/instructor";
import OpenAI from "openai";
import { z } from "zod";
import { PORTKEY_GATEWAY_URL, createHeaders } from "portkey-ai";

// Point the OpenAI client at Portkey's gateway and authenticate with Portkey headers
const portkey = new OpenAI({
  baseURL: PORTKEY_GATEWAY_URL,
  defaultHeaders: createHeaders({
    apiKey: "PORTKEY_API_KEY",
    virtualKey: "OPENAI_VIRTUAL_KEY",
  }),
});

const client = Instructor({
  client: portkey,
  mode: "TOOLS",
});

const UserSchema = z.object({
  age: z.number().describe("The age of the user"),
  name: z.string(),
});

const user = await client.chat.completions.create({
  messages: [{ role: "user", content: "Jason Liu is 30 years old" }],
  model: "gpt-4",
  // model: "claude-3-sonnet-20240229", // switch the virtual key to an Anthropic one to use Claude
  max_tokens: 512,
  response_model: {
    schema: UserSchema,
    name: "User",
  },
});

console.log(user);
```
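If your Instructor pipeline is in Python, the same two-line change (`base_url` + `default_headers`) applies. Here's a minimal sketch, assuming you have an OpenAI virtual key saved in the Portkey app (the key names are placeholders):

```python
import instructor
from pydantic import BaseModel
from openai import OpenAI
from portkey_ai import PORTKEY_GATEWAY_URL, createHeaders

# The only change from a vanilla Instructor setup: base_url + default_headers
portkey = OpenAI(
    base_url=PORTKEY_GATEWAY_URL,
    default_headers=createHeaders(
        api_key="PORTKEY_API_KEY",
        virtual_key="OPENAI_VIRTUAL_KEY",  # placeholder: your saved virtual key
    ),
)

class User(BaseModel):
    name: str
    age: int

client = instructor.from_openai(portkey)

user = client.chat.completions.create(
    model="gpt-4",
    response_model=User,
    messages=[{"role": "user", "content": "Jason Liu is 30 years old"}],
)
print(user)
```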
Caching Your Requests
Let's now bring down the cost of running your Instructor pipeline with Portkey caching. Just create a Config object that defines your cache settings:
{"cache": {"mode":"simple" }}
You can write it raw, or use Portkey's Config builder and get a corresponding Config ID. Then pass it while instantiating your OpenAI client:
```python
import instructor
from pydantic import BaseModel
from openai import OpenAI
from portkey_ai import PORTKEY_GATEWAY_URL, createHeaders

cache_config = {"cache": {"mode": "simple"}}

portkey = OpenAI(
    base_url=PORTKEY_GATEWAY_URL,
    default_headers=createHeaders(
        virtual_key="OPENAI_VIRTUAL_KEY",
        api_key="PORTKEY_API_KEY",
        config=cache_config,  # Or pass your Config ID saved from Portkey app
    ),
)

class User(BaseModel):
    name: str
    age: int

client = instructor.from_openai(portkey)

user_info = client.chat.completions.create(
    model="gpt-4-turbo",
    max_tokens=1024,
    response_model=User,
    messages=[{"role": "user", "content": "John Doe is 30 years old."}],
)

print(user_info.name)
print(user_info.age)
```
```ts
import Instructor from "@instructor-ai/instructor";
import OpenAI from "openai";
import { z } from "zod";
import { PORTKEY_GATEWAY_URL, createHeaders } from "portkey-ai";

const cache_config = { cache: { mode: "simple" } };

const portkey = new OpenAI({
  baseURL: PORTKEY_GATEWAY_URL,
  defaultHeaders: createHeaders({
    apiKey: "PORTKEY_API_KEY",
    virtualKey: "OPENAI_VIRTUAL_KEY",
    config: cache_config, // Or pass your Config ID saved from Portkey app
  }),
});

const client = Instructor({
  client: portkey,
  mode: "TOOLS",
});

const UserSchema = z.object({
  age: z.number().describe("The age of the user"),
  name: z.string(),
});

const user = await client.chat.completions.create({
  messages: [{ role: "user", content: "Jason Liu is 30 years old" }],
  model: "gpt-4",
  // model: "claude-3-sonnet-20240229", // switch the virtual key to an Anthropic one to use Claude
  max_tokens: 512,
  response_model: {
    schema: UserSchema,
    name: "User",
  },
});

console.log(user);
```
Similarly, you can add Fallback, Load Balancing, Timeout, or Retry settings to your Configs to make your Instructor requests robust and reliable.
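As an illustration, a Config like the sketch below (the virtual key names are placeholders) retries the primary OpenAI target and falls back to Anthropic if it keeps failing; pass it via `config` exactly like the cache Config above. Consult Portkey's Config reference for the authoritative schema:

```json
{
  "strategy": { "mode": "fallback" },
  "targets": [
    {
      "virtual_key": "OPENAI_VIRTUAL_KEY",
      "retry": { "attempts": 3 }
    },
    {
      "virtual_key": "ANTHROPIC_VIRTUAL_KEY",
      "override_params": { "model": "claude-3-sonnet-20240229" }
    }
  ]
}
```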