OpenAI-compatible chat completions endpoint supporting all text models
Documentation Index
Fetch the complete documentation index at: https://docs.toapis.com/llms.txt
Use this file to discover all available pages before exploring further.
Request parameters
- model: Model ID — see the Model List for all supported models (e.g. "claude-sonnet-4-6", "gpt-5", "qwen3-max")
- stream: true streams tokens incrementally; false returns the complete response at once
- temperature: 0 to 2
- top_p: 0 to 1. Modifying both temperature and top_p at the same time is not recommended.

Response fields (object: chat.completion)
- choices[].message.role: Message role, always assistant
- choices[].message.content: Generated text content
- choices[].finish_reason: Stop reason — stop / length / content_filter
- choices[].index: Result index
- usage.prompt_tokens: Input token count
- usage.completion_tokens: Output token count
- usage.total_tokens: Total token count
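The request and response shapes above can be sketched in Python. This is a minimal illustration, not an official client: the base URL is an assumption (check the documentation index for the real one), and the helper names are hypothetical. It builds a request body with the documented parameters and extracts the assistant reply and token usage from a chat.completion object.

```python
import json

# Assumption: actual API base URL may differ -- check the docs index.
BASE_URL = "https://api.toapis.com/v1"

def build_chat_request(model, messages, stream=False, temperature=1.0):
    """Build an OpenAI-compatible chat completions request body.

    Tune temperature (0 to 2) or top_p (0 to 1), but not both at once,
    as the parameter notes above recommend.
    """
    return {
        "model": model,
        "messages": messages,
        "stream": stream,
        "temperature": temperature,
    }

def extract_reply(response_body):
    """Pull the assistant text and total token count from a chat.completion."""
    choice = response_body["choices"][0]
    return choice["message"]["content"], response_body["usage"]["total_tokens"]

# The body you would POST to f"{BASE_URL}/chat/completions" (hypothetical URL):
body = build_chat_request("gpt-5", [{"role": "user", "content": "Hello!"}])
print(json.dumps(body, indent=2))

# A sample response matching the documented fields:
sample = {
    "object": "chat.completion",
    "choices": [{
        "index": 0,
        "message": {"role": "assistant", "content": "Hi there!"},
        "finish_reason": "stop",
    }],
    "usage": {"prompt_tokens": 3, "completion_tokens": 4, "total_tokens": 7},
}
text, total_tokens = extract_reply(sample)
```

Because the endpoint is OpenAI-compatible, any client that lets you override the base URL should work against it with the same request body.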