Portkey helps bring Anyscale APIs to production with its abstractions for observability, fallbacks, caching, and more. Use the Anyscale API through Portkey for:
1. Enhanced Logging: Track API usage with detailed insights.
2. Production Reliability: Automated fallbacks, load balancing, and caching (see the config sketch after this list).
3. Continuous Improvement: Collect and apply user feedback.
4. Enhanced Fine-Tuning: Combine logs & user feedback for targeted fine-tuning.
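To give a sense of how the reliability features are wired in, here is a minimal sketch of a per-request gateway config. It assumes Portkey accepts a JSON config through the `x-portkey-config` header and that the `strategy`, `targets`, and `cache` fields shown here match the current config schema; check Portkey's config documentation before using it verbatim.

```python
""" ILLUSTRATIVE SKETCH — fallbacks + caching via a Portkey config (verify schema in Portkey's docs) """
import json
import requests

# Assumption: a gateway config can be passed per request via the 'x-portkey-config' header.
PORTKEY_CONFIG = {
    "strategy": {"mode": "fallback"},           # try targets in order until one succeeds
    "targets": [
        {"provider": "anyscale", "api_key": "ANYSCALE_KEY"},
        {"provider": "openai", "api_key": "OPENAI_KEY"}   # hypothetical fallback target
    ],
    "cache": {"mode": "semantic"}               # serve semantically similar requests from cache
}

response = requests.post(
    "https://api.portkey.ai/v1/chat/completions",
    headers={
        "Content-Type": "application/json",
        "x-portkey-api-key": "PORTKEY_API_KEY",
        "x-portkey-config": json.dumps(PORTKEY_CONFIG),
    },
    json={
        "model": "mistralai/Mistral-7B-Instruct-v0.1",
        "messages": [{"role": "user", "content": "Say this is a test"}],
    },
)
print(response.text)
```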
Switch to the Portkey Gateway URL: https://api.portkey.ai/v1/proxy
See full logs of requests (latency, cost, tokens) and dig deeper into the data with Portkey's analytics suite.
""" OPENAI PYTHON SDK """import openaiPORTKEY_GATEWAY_URL ="https://api.portkey.ai/v1"PORTKEY_HEADERS ={'Authorization':'Bearer ANYSCALE_KEY','Content-Type':'application/json',# **************************************'x-portkey-api-key':'PORTKEY_API_KEY',# Get from https://app.portkey.ai/,'x-portkey-provider':'anyscale'# Tell Portkey that the request is for Anyscale# **************************************}client = openai.OpenAI(base_url=PORTKEY_GATEWAY_URL, default_headers=PORTKEY_HEADERS)response = client.chat.completions.create( model="mistralai/Mistral-7B-Instruct-v0.1", messages=[{"role": "user", "content": "Say this is a test"}])print(response.choices[0].message.content)
""" OPENAI NODE SDK """import OpenAI from'openai';constPORTKEY_GATEWAY_URL="https://api.portkey.ai/v1"constPORTKEY_HEADERS= {'Authorization':'Bearer ANYSCALE_KEY','Content-Type':'application/json',// **************************************'x-portkey-api-key':'PORTKEY_API_KEY',// Get from https://app.portkey.ai/,'x-portkey-provider':'anyscale'// Tell Portkey that the request is for Anyscale// **************************************}constopenai=newOpenAI({baseURL:PORTKEY_GATEWAY_URL, defaultHeaders:PORTKEY_HEADERS});asyncfunctionmain() {constchatCompletion=awaitopenai.chat.completions.create({ messages: [{ role:'user', content:'Say this is a test' }], model:'mistralai/Mistral-7B-Instruct-v0.1', });console.log(chatCompletion.choices[0].message.content);}main();
""" REQUESTS LIBRARY """import requestsPORTKEY_GATEWAY_URL ="https://api.portkey.ai/v1/chat/completions"PORTKEY_HEADERS ={'Authorization':'Bearer ANYSCALE_KEY','Content-Type':'application/json',# **************************************'x-portkey-api-key':'PORTKEY_API_KEY',# Get from https://app.portkey.ai/,'x-portkey-provider':'anyscale'# Tell Portkey that the request is for Anyscale# **************************************}DATA ={"messages": [{"role":"user","content":"What happens when you mix red & yellow?"}],"model":"mistralai/Mistral-7B-Instruct-v0.1"}response = requests.post(PORTKEY_GATEWAY_URL, headers=PORTKEY_HEADERS, json=DATA)print(response.text)
Once you start logging your requests and their feedback with Portkey, it becomes very easy to 1) Curate & create data for fine-tuning, 2) Schedule fine-tuning jobs, and 3) Use the fine-tuned models!
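As a rough illustration of the feedback loop, the sketch below tags a request with a trace ID and then posts a feedback score against that trace. The `x-portkey-trace-id` header and the `/v1/feedback` endpoint (with `trace_id`, `value`, and `weight` fields) are assumptions based on Portkey's feedback docs; verify the names before relying on them.

```python
""" ILLUSTRATIVE SKETCH — trace IDs + feedback (verify header/endpoint names in Portkey's docs) """
import uuid
import requests

PORTKEY_HEADERS = {
    'Authorization': 'Bearer ANYSCALE_KEY',
    'Content-Type': 'application/json',
    'x-portkey-api-key': 'PORTKEY_API_KEY',
    'x-portkey-provider': 'anyscale',
}

# 1) Tag the request with a trace ID so logs and feedback can be joined later.
trace_id = str(uuid.uuid4())
requests.post(
    "https://api.portkey.ai/v1/chat/completions",
    headers={**PORTKEY_HEADERS, "x-portkey-trace-id": trace_id},  # assumed header name
    json={
        "model": "mistralai/Mistral-7B-Instruct-v0.1",
        "messages": [{"role": "user", "content": "Say this is a test"}],
    },
)

# 2) Later, attach user feedback to the same trace (e.g. thumbs up = 1, thumbs down = -1).
requests.post(
    "https://api.portkey.ai/v1/feedback",  # assumed endpoint
    headers={
        "Content-Type": "application/json",
        "x-portkey-api-key": "PORTKEY_API_KEY",
    },
    json={"trace_id": trace_id, "value": 1, "weight": 1},
)
```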
Fine-tuning is currently enabled for select orgs - please request access on Portkey Discord and we'll get back to you ASAP.
Conclusion
Integrating Portkey with Anyscale helps you build resilient LLM apps from the get-go. With features like semantic caching, observability, load balancing, feedback, and fallbacks, you can ensure optimal performance and continuous improvement.