
Outlines: Make LLM structured outputs controllable and improve the stability of LLM applications

ully
3 min read · Jul 13, 2024

When developing LLM applications, one of the big advantages of an LLM over a traditional interface service is its ability to generate natural-language output that is human-friendly. For system integration, however, this is an obstacle, because interactions between systems are usually structured. We therefore need the LLM to produce output in a specific format, such as JSON, so that it can be processed downstream. The usual way to do this is to state the formatting requirements (ideally with examples) in the prompt, as below, but this is not 100% effective, which hurts the stability of the application.

Provide 3 suggestions for specific places to go to in Seattle on a rainy day. Respond in the form of JSON. The JSON should have the following format:

[
{ "venue": "...", "description": "..." },
{ "venue": "...", "description": "..." }
]
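
This prompt-only approach leaves parsing to the application. A minimal sketch of the defensive handling it typically requires (call_llm here is a hypothetical helper standing in for any chat-completion call):

import json

def get_venues(call_llm, prompt: str):
    """Ask for JSON via the prompt and parse the reply defensively."""
    reply = call_llm(prompt)  # free-form text from the model
    try:
        # Fails whenever the model adds prose, markdown fences, or trailing commentary
        return json.loads(reply)
    except json.JSONDecodeError:
        return None  # caller must retry or fall back

Constrained-generation libraries aim to make this retry logic unnecessary by preventing malformed output in the first place.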

To handle such edge cases, I have previously introduced Microsoft technologies such as Guidance and TypeChat. Outlines is another library that tackles the problem: it constrains decoding itself, so the output is guaranteed to match the requested structure. Its generation modes include:

  • choice
import outlines

model = outlines.models.transformers("mistralai/Mistral-7B-Instruct-v0.2")

prompt = """You are a sentiment-labelling assistant.
Is the following review positive or negative?

Review: This restaurant is just awesome!
"""


generator = outlines.generate.choice(model, ["Positive", "Negative"])
answer = generator(prompt)
print(answer)  # guaranteed to be either "Positive" or "Negative"
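
The generated answer is guaranteed to be one of the given choices. Outlines also offers a JSON mode that constrains generation to a schema, which addresses the Seattle example above directly. A minimal sketch, reusing the model loaded earlier and a Venue schema defined here for illustration (outlines.generate.json accepts a Pydantic model and returns a parsed instance):

from pydantic import BaseModel

class Venue(BaseModel):
    venue: str
    description: str

# Decoding is constrained so the output always parses as a Venue
generator = outlines.generate.json(model, Venue)
venue = generator("Suggest one specific place to go in Seattle on a rainy day.")
print(venue.venue, "-", venue.description)

Because the structure is enforced during decoding rather than merely requested in the prompt, the stability problem described at the start disappears for the fields the schema covers.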

