POST /v1/messages
curl --request POST \
  --url https://toapis.com/v1/messages \
  --header 'Authorization: Bearer <token>' \
  --header 'Content-Type: application/json' \
  --data '{
    "model": "claude-sonnet-4-6",
    "max_tokens": 1024,
    "messages": [
      {
        "role": "user",
        "content": "Hello, tell me about yourself"
      }
    ]
  }'
{
  "id": "msg_01XFDUDYJgAACzvnptvVoYEL",
  "type": "message",
  "role": "assistant",
  "content": [
    {
      "type": "text",
      "text": "Hello! I'm Claude, an AI assistant made by Anthropic. I can help with questions, analysis, coding, writing, and more. How can I help you today?"
    }
  ],
  "model": "claude-sonnet-4-6",
  "stop_reason": "end_turn",
  "usage": {
    "input_tokens": 12,
    "output_tokens": 35
  }
}
  • Native Anthropic Messages API format
  • Drop-in compatible with the official Anthropic SDK (Python / JavaScript) — just change base_url
  • Supports streaming (SSE)
  • Supports multi-turn conversations, system prompts, vision input, and tool use
If you are already using the OpenAI SDK, use the OpenAI-compatible endpoint instead. If you are using the Anthropic SDK or Claude Code, this endpoint is recommended.
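The curl example above can be mirrored from Python with only the standard library. The sketch below builds the same request (headers and payload exactly as documented on this page) but stops short of sending it, since that needs a live key; to send, uncomment the `urlopen` line. With the official Anthropic SDK you would instead just pass `base_url="https://toapis.com"` when constructing the client.

```python
import json
import urllib.request

# Same payload as the curl example above.
payload = {
    "model": "claude-sonnet-4-6",
    "max_tokens": 1024,
    "messages": [
        {"role": "user", "content": "Hello, tell me about yourself"}
    ],
}

req = urllib.request.Request(
    url="https://toapis.com/v1/messages",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        # Bearer auth for direct HTTP calls; the Anthropic SDK
        # sends x-api-key instead (both are accepted, see below).
        "Authorization": "Bearer YOUR_API_KEY",
        "Content-Type": "application/json",
    },
    method="POST",
)

# To actually send the request:
# message = json.load(urllib.request.urlopen(req))
```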

Authorizations

Authorization
string
Bearer token authentication for direct HTTP calls
Authorization: Bearer YOUR_API_KEY
x-api-key
string
API key authentication, compatible with the Anthropic SDK
x-api-key: YOUR_API_KEY
anthropic-version
string
default:"2023-06-01"
Anthropic API version. The Anthropic SDK sets this automatically. Recommended: 2023-06-01

Body

model
string
required
Model name. All Claude models are supported, for example:
  • claude-opus-4-6
  • claude-sonnet-4-6
  • claude-haiku-4-5
messages
object[]
required
Conversation messages in chronological order. Only user and assistant roles are allowed here — use the top-level system field for system prompts.
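A sketch of a multi-turn body illustrating the rule above: roles alternate between user and assistant, and the system prompt sits at the top level rather than inside messages (the conversation content here is purely illustrative):

```python
import json

# System prompt lives at the top level, never as a message role.
body = {
    "model": "claude-sonnet-4-6",
    "max_tokens": 1024,
    "system": "You are a concise assistant.",
    "messages": [
        {"role": "user", "content": "What is the capital of France?"},
        {"role": "assistant", "content": "Paris."},
        {"role": "user", "content": "And roughly how large is it?"},
    ],
}

# Only user and assistant are valid roles inside messages.
assert all(m["role"] in ("user", "assistant") for m in body["messages"])
print(json.dumps(body, indent=2))
```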
max_tokens
integer
required
Maximum number of tokens to generate
  • Claude Sonnet 4-6: up to 64000
  • Claude Opus 4-6: up to 32000
system
string | object[]
System prompt, set at the top level (not inside messages). Accepts a plain string or an array of content blocks.
stream
boolean
default:false
Enable streaming output (Server-Sent Events)
  • true: tokens streamed incrementally following the Anthropic SSE event format
  • false: full response returned at once
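With stream set to true, tokens arrive as Server-Sent Events. The minimal parser below runs against a hardcoded sample written to resemble Anthropic's content_block_delta / text_delta event shape (the sample payloads are illustrative, not captured from a live stream):

```python
import json

# Hypothetical sample of raw SSE lines in Anthropic's event format.
sample_stream = """\
event: content_block_delta
data: {"type": "content_block_delta", "index": 0, "delta": {"type": "text_delta", "text": "Hello"}}

event: content_block_delta
data: {"type": "content_block_delta", "index": 0, "delta": {"type": "text_delta", "text": ", world"}}

event: message_stop
data: {"type": "message_stop"}
"""

def collect_text(sse_text: str) -> str:
    """Concatenate text_delta fragments from raw SSE lines."""
    parts = []
    for line in sse_text.splitlines():
        if not line.startswith("data: "):
            continue
        event = json.loads(line[len("data: "):])
        if event.get("type") == "content_block_delta":
            delta = event.get("delta", {})
            if delta.get("type") == "text_delta":
                parts.append(delta["text"])
    return "".join(parts)

print(collect_text(sample_stream))  # Hello, world
```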
temperature
number
default:1
Sampling temperature controlling output randomness. Range: 0 to 1
top_p
number
Nucleus sampling threshold. Range: 0 to 1. Avoid setting both temperature and top_p simultaneously.
stop_sequences
string[]
Stop sequences — generation stops when any of these strings is produced
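Pulling the optional body fields together, a sketch of a request using temperature and stop_sequences (the values are illustrative; note that temperature and top_p should not be set together, per the field descriptions above):

```python
import json

body = {
    "model": "claude-sonnet-4-6",
    "max_tokens": 256,
    "messages": [{"role": "user", "content": "List three colors."}],
    "temperature": 0.3,          # lower = more deterministic (range 0 to 1)
    "stop_sequences": ["\n\n"],  # stop at the first blank line
    "stream": False,             # full response returned at once
}

# Set only one of temperature / top_p.
assert not ("temperature" in body and "top_p" in body)
print(json.dumps(body))
```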

Response

id
string
Unique identifier for the request, prefixed with msg_
type
string
Object type, always message
role
string
Response role, always assistant
content
object[]
List of generated content blocks
  • content[].type: content type, typically text
  • content[].text: generated text
model
string
The model that handled the request
stop_reason
string
Reason generation stopped
  • end_turn: model finished naturally
  • max_tokens: max_tokens limit reached
  • stop_sequence: a stop sequence was triggered
usage
object
Token usage for this request
  • usage.input_tokens: input token count
  • usage.output_tokens: output token count
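The response fields above can be unpacked as shown below. The sample JSON is an abridged version of the response at the top of this page; the stop_reason check distinguishes a natural finish from truncation at max_tokens:

```python
import json

# Abridged sample response (same shape as the example above).
sample = """{
  "id": "msg_01XFDUDYJgAACzvnptvVoYEL",
  "type": "message",
  "role": "assistant",
  "content": [{"type": "text", "text": "Hello! I'm Claude."}],
  "model": "claude-sonnet-4-6",
  "stop_reason": "end_turn",
  "usage": {"input_tokens": 12, "output_tokens": 35}
}"""

msg = json.loads(sample)

# Join all text blocks (there is typically just one).
text = "".join(b["text"] for b in msg["content"] if b["type"] == "text")
truncated = msg["stop_reason"] == "max_tokens"
total_tokens = msg["usage"]["input_tokens"] + msg["usage"]["output_tokens"]

print(text)
print(f"truncated={truncated}, total_tokens={total_tokens}")
```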