Optimise generation quality
Large Language Models (LLMs) have become increasingly powerful tools, capable of generating content with human-like fluency. However, ensuring that the generated content adheres to specific quality standards or conforms to a desired format can be challenging. Inconsistent output poses a significant problem, especially for businesses that rely on these models for customer interaction or content generation.
Portkey provides several advanced features and tools to ensure the quality and consistency of the output generated by LLMs.
Portkey can apply schema validation to the output of an LLM. With this feature, the generated text is checked against predefined patterns or formats, ensuring the output adheres to a certain standard or structure. This is particularly useful for use-cases where specific formatting or data structures are crucial.
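The idea behind schema validation can be sketched in a few lines. Portkey's enterprise schema validation is not a public API, so the sketch below models the concept by hand: it checks that an LLM's text output parses as JSON and contains the expected fields with the expected types. The schema format and function names are assumptions for illustration only.

```python
import json

# Hypothetical schema: each expected field mapped to its required type.
SCHEMA = {"name": str, "email": str, "age": int}

def validate_output(llm_output: str, schema: dict) -> tuple[bool, list[str]]:
    """Check that raw LLM output is valid JSON matching a simple schema."""
    try:
        data = json.loads(llm_output)
    except json.JSONDecodeError:
        return False, ["output is not valid JSON"]
    errors = []
    for field, expected_type in schema.items():
        if field not in data:
            errors.append(f"missing field: {field}")
        elif not isinstance(data[field], expected_type):
            errors.append(f"wrong type for {field}: expected {expected_type.__name__}")
    return not errors, errors

ok, errs = validate_output('{"name": "Ada", "email": "ada@example.com", "age": 36}', SCHEMA)
```

A failed validation can then trigger a retry, a fallback model, or an error response, rather than passing malformed output downstream.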
Another powerful tool at your disposal is the Rule Engine. This tool allows you to define specific rules that the LLM output must comply with. The rules can be defined based on the context of your application, ensuring that the output meets the required criteria, thus improving the quality and usefulness of the generated text.
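To make the concept concrete, a rule engine can be thought of as a list of named predicates applied to generated text. Portkey's actual Rule Engine configuration is enterprise-only and not documented here; the rule names and structure below are assumptions chosen for the example.

```python
import re

# Hypothetical rules for a customer-support reply: each rule is a
# (name, predicate) pair, where the predicate returns True when the text complies.
RULES = [
    ("max_length", lambda text: len(text) <= 280),
    ("no_placeholder", lambda text: "[TODO]" not in text),
    ("ends_with_punctuation", lambda text: bool(re.search(r"[.!?]$", text.strip()))),
]

def check_rules(text: str, rules) -> list[str]:
    """Return the names of rules the text violates; an empty list means compliant."""
    return [name for name, predicate in rules if not predicate(text)]

violations = check_rules("Thanks for reaching out! We'll reply within 24 hours.", RULES)
```

Because the rules are ordinary predicates, they can encode whatever criteria matter in your application's context, from length limits to banned phrases.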
Portkey also supports the implementation of evaluation models. In this setup, the output from one LLM is evaluated by another, acting as a sort of 'double-check' on the initial model's output. This can be a powerful tool for improving the quality and accuracy of LLM outputs, especially in complex or high-stakes use-cases.
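The evaluation-model pattern can be sketched as follows. This is not Portkey's Evals framework (which is enterprise-only); `call_llm` is a hypothetical stand-in for a real model call, stubbed here so the control flow is runnable. The prompt wording and PASS/FAIL protocol are assumptions for illustration.

```python
def call_llm(prompt: str) -> str:
    # Stub evaluator standing in for a second, real LLM call:
    # it approves any non-empty answer. Replace with an actual API call.
    answer = prompt.split("ANSWER:", 1)[1].strip()
    return "PASS" if answer else "FAIL"

def evaluate_generation(question: str, answer: str) -> bool:
    """Ask a second model to grade the first model's answer."""
    eval_prompt = (
        "You are a strict grader. Reply PASS or FAIL only.\n"
        f"QUESTION: {question}\nANSWER: {answer}"
    )
    verdict = call_llm(eval_prompt)
    return verdict.strip().upper() == "PASS"
```

In practice the evaluator's verdict gates whether the first model's output is returned to the user or regenerated.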
Please note: schema validation, the Rule Engine, and the Evals framework are currently custom enterprise features. If you're interested in deploying these advanced quality control features, reach out to our support team for a discussion.