Create a chat completion using OpenAI-compatible format

Request:

curl --request POST \
  --url https://api.example.com/v1/chat/completions \
  --header 'Authorization: <authorization>' \
  --header 'Content-Type: application/json' \
  --data '{
    "model": "<string>",
    "messages": [{}],
    "tools": [{}],
    "tool_choice": {},
    "parallel_tool_calls": true,
    "stream": true,
    "stream_options": {},
    "temperature": 123,
    "top_p": 123,
    "max_tokens": 123,
    "max_completion_tokens": 123,
    "response_format": {},
    "stop": {},
    "presence_penalty": 123,
    "frequency_penalty": 123,
    "logprobs": true,
    "top_logprobs": 123,
    "seed": 123,
    "n": 123
  }'

Response:

{
  "id": "<string>",
  "object": "<string>",
  "created": 123,
  "model": "<string>",
  "choices": [{}],
  "usage": {}
}
The /v1/chat/completions endpoint provides full OpenAI Chat Completions API compatibility. It accepts chat-formatted messages and maps them internally to the Responses API format while preserving streaming behavior and tool calling capabilities.

Requests are authenticated with an Authorization: Bearer YOUR_API_KEY header.

Request parameters:

- model (string, required): Model to use. Available models can be listed via the /v1/models endpoint. Example: "gpt-4.1", "gpt-5.2"
- messages (array, required): Conversation messages. Each message has:
  - role (string, required): One of "system", "developer", "user", "assistant", or "tool"
  - content (string | array): Message content. For system/developer roles, must be text-only.
  - tool_calls (array, optional): For assistant messages, array of tool call objects
  - tool_call_id (string, required for tool role): ID of the tool call this message responds to
- tools (array, optional): Tools the model may call. Each tool has:
  - type (string): "function" or "web_search"
  - function (object): For function tools, contains name, description, and parameters

  Supported tool types:
  - function: Custom function calls
  - web_search or web_search_preview: Web search capability

  Unsupported tool types: file_search, code_interpreter, computer_use, computer_use_preview, image_generation
- tool_choice (string | object, optional):
  - "none": Model will not call tools
  - "auto": Model decides whether to call tools
  - "required": Model must call at least one tool
  - {"type": "function", "function": {"name": "tool_name"}}: Force specific tool
- stream (boolean, optional):
  - true: Returns text/event-stream with chat.completion.chunk objects
  - false: Returns a single chat.completion object
- stream_options (object, optional):
  - include_usage (boolean): Include token usage in final chunk
  - include_obfuscation (boolean): Include obfuscation data in stream
- max_tokens (integer, optional): Deprecated; treated as max_completion_tokens.
- max_completion_tokens (integer, optional): Maximum number of tokens to generate.
- response_format (object, optional):
  - {"type": "text"}: Plain text (default)
  - {"type": "json_object"}: Valid JSON object
  - {"type": "json_schema", "json_schema": {...}}: JSON matching provided schema

  For the json_schema type:
  - json_schema.name (string): Schema name, 1-64 chars, alphanumeric/underscore/hyphen
  - json_schema.schema (object): JSON Schema definition
  - json_schema.strict (boolean): Enable strict schema adherence
- logprobs (boolean, optional) and top_logprobs (integer, optional): Return log probabilities (top_logprobs requires logprobs: true).

Response format

When stream is false or omitted, the endpoint returns a chat.completion object:

- object (string): Always "chat.completion"
- choices (array): Each choice has:
  - index (integer): Choice index (always 0)
  - message (object): The assistant's message
    - role (string): Always "assistant"
    - content (string | null): Text content of the message
    - refusal (string | null): Refusal message if model declined
    - tool_calls (array | null): Tool calls made by the model
  - finish_reason (string): Why generation stopped
    - "stop": Natural completion
    - "length": Max tokens reached
    - "tool_calls": Model called tools
    - "content_filter": Content filtered
- usage (object):
  - prompt_tokens (integer): Tokens in the prompt
  - completion_tokens (integer): Tokens in the completion
  - total_tokens (integer): Total tokens used
  - prompt_tokens_details (object | null): cached_tokens (integer): Cached prompt tokens
  - completion_tokens_details (object | null): reasoning_tokens (integer): Tokens used for reasoning

When stream is true, the endpoint returns text/event-stream with chat.completion.chunk objects:

- object (string): Always "chat.completion.chunk"
- choices (array): Each choice has:
  - index (integer): Always 0
  - delta (object): Incremental content
    - role (string | null): Role (only in first chunk)
    - content (string | null): Content delta
    - refusal (string | null): Refusal delta
    - tool_calls (array | null): Tool call deltas
  - finish_reason (string | null): Reason when complete
- usage (object | null): Present only when stream_options.include_usage is true.

Examples

curl https://api.example.com/v1/chat/completions \
-H "Authorization: Bearer YOUR_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"model": "gpt-4.1",
"messages": [
{"role": "user", "content": "What is the capital of France?"}
]
}'
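For clients not using curl, the request body can be assembled programmatically. A minimal Python sketch, assuming the placeholder endpoint and key from the curl examples; the helper name build_chat_request is illustrative, not part of the API:

```python
import json

def build_chat_request(model, user_content, **params):
    """Assemble a Chat Completions request body as a plain dict."""
    body = {
        "model": model,
        "messages": [{"role": "user", "content": user_content}],
    }
    body.update(params)  # optional fields, e.g. stream=True, temperature=0.2
    return body

body = build_chat_request("gpt-4.1", "What is the capital of France?")
# POST json.dumps(body) to https://api.example.com/v1/chat/completions with
# the Authorization and Content-Type headers shown above.
print(json.dumps(body))
```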
curl https://api.example.com/v1/chat/completions \
-H "Authorization: Bearer YOUR_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"model": "gpt-4.1",
"messages": [
{"role": "user", "content": "Write a haiku about coding"}
],
"stream": true
}'
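A streamed response arrives as text/event-stream lines, each event prefixed with "data: " and the stream terminated by "data: [DONE]". The sketch below shows how the content deltas from chat.completion.chunk events can be reassembled; the helper name and the sample events are illustrative:

```python
import json

def collect_stream_text(sse_lines):
    """Join the content deltas from chat.completion.chunk SSE events."""
    parts = []
    for line in sse_lines:
        if not line.startswith("data: "):
            continue  # skip blank keep-alive lines between events
        payload = line[len("data: "):]
        if payload == "[DONE]":
            break  # stream terminator
        chunk = json.loads(payload)
        if not chunk.get("choices"):
            continue  # e.g. a final usage-only chunk has an empty choices array
        delta = chunk["choices"][0]["delta"]
        if delta.get("content"):
            parts.append(delta["content"])
    return "".join(parts)

sample = [
    'data: {"object": "chat.completion.chunk", "choices": [{"index": 0, "delta": {"role": "assistant", "content": ""}, "finish_reason": null}]}',
    'data: {"object": "chat.completion.chunk", "choices": [{"index": 0, "delta": {"content": "Code flows"}, "finish_reason": null}]}',
    'data: {"object": "chat.completion.chunk", "choices": [{"index": 0, "delta": {"content": " like water"}, "finish_reason": "stop"}]}',
    "data: [DONE]",
]
print(collect_stream_text(sample))  # -> Code flows like water
```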
curl https://api.example.com/v1/chat/completions \
-H "Authorization: Bearer YOUR_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"model": "gpt-5.2",
"messages": [
{"role": "user", "content": "Explain quantum computing"}
],
"stream": true,
"stream_options": {
"include_usage": true
}
}'
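When stream_options.include_usage is set, the usage object arrives in a final chunk whose choices array is empty. A small sketch for picking it out, using fabricated sample events:

```python
import json

def last_usage(sse_lines):
    """Return the usage object from the final chunk, if include_usage was set."""
    usage = None
    for line in sse_lines:
        if not line.startswith("data: "):
            continue
        payload = line[len("data: "):]
        if payload == "[DONE]":
            break
        chunk = json.loads(payload)
        if chunk.get("usage"):  # null on every chunk except the last
            usage = chunk["usage"]
    return usage

sample = [
    'data: {"choices": [{"index": 0, "delta": {"content": "Hi"}, "finish_reason": "stop"}], "usage": null}',
    'data: {"choices": [], "usage": {"prompt_tokens": 12, "completion_tokens": 5, "total_tokens": 17}}',
    "data: [DONE]",
]
print(last_usage(sample))
```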
curl https://api.example.com/v1/chat/completions \
-H "Authorization: Bearer YOUR_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"model": "gpt-4.1",
"messages": [
{"role": "user", "content": "What is the weather in San Francisco?"}
],
"tools": [
{
"type": "function",
"function": {
"name": "get_weather",
"description": "Get current weather for a location",
"parameters": {
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "City name"
}
},
"required": ["location"]
}
}
}
],
"tool_choice": "auto"
}'
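When the response comes back with finish_reason "tool_calls", the client runs the requested functions and sends the results back as tool messages whose tool_call_id matches the assistant's call id. A sketch of that round trip; the helper name and the weather stub are illustrative:

```python
import json

def tool_result_messages(assistant_message, tool_impls):
    """Run each requested function and build the follow-up messages to send back."""
    followups = [assistant_message]          # the assistant turn is echoed back first
    for call in assistant_message["tool_calls"]:
        fn = call["function"]
        result = tool_impls[fn["name"]](**json.loads(fn["arguments"]))
        followups.append({
            "role": "tool",
            "tool_call_id": call["id"],      # must match the assistant's call id
            "content": json.dumps(result),
        })
    return followups

assistant_message = {
    "role": "assistant",
    "content": None,
    "tool_calls": [{
        "id": "call_1",
        "type": "function",
        "function": {"name": "get_weather",
                     "arguments": "{\"location\": \"San Francisco\"}"},
    }],
}
msgs = tool_result_messages(assistant_message,
                            {"get_weather": lambda location: {"temp_c": 18}})
```

Appending these messages to the original conversation and re-POSTing lets the model produce its final answer from the tool output.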
curl https://api.example.com/v1/chat/completions \
-H "Authorization: Bearer YOUR_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"model": "gpt-4.1",
"messages": [
{"role": "user", "content": "What are the latest news about AI?"}
],
"tools": [
{"type": "web_search"}
]
}'
curl https://api.example.com/v1/chat/completions \
-H "Authorization: Bearer YOUR_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"model": "gpt-4.1",
"messages": [
{"role": "user", "content": "Generate a person profile"}
],
"response_format": {
"type": "json_schema",
"json_schema": {
"name": "person_profile",
"schema": {
"type": "object",
"properties": {
"name": {"type": "string"},
"age": {"type": "number"},
"city": {"type": "string"}
},
"required": ["name", "age"]
},
"strict": true
}
}
}'
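With a strict json_schema response_format, choices[0].message.content is a JSON string matching the schema, but it still has to be parsed client-side. A minimal sketch; the helper name is illustrative:

```python
import json

def parse_structured(content, required):
    """Parse the model's JSON string and verify the schema's required keys."""
    data = json.loads(content)
    missing = [k for k in required if k not in data]
    if missing:
        raise ValueError(f"missing required fields: {missing}")
    return data

profile = parse_structured('{"name": "Ada", "age": 36, "city": "London"}',
                           ["name", "age"])
print(profile["name"])  # -> Ada
```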
curl https://api.example.com/v1/chat/completions \
-H "Authorization: Bearer YOUR_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"model": "gpt-4.1",
"messages": [
{"role": "system", "content": "You are a helpful math tutor."},
{"role": "user", "content": "What is 25 * 4?"}
]
}'
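The endpoint is stateless: every request must carry the full conversation, so follow-up turns are built by appending the assistant's reply and the next user message to the history. A sketch:

```python
def extend_history(messages, assistant_reply, next_user_turn):
    """Return a new message list carrying the full conversation so far."""
    return messages + [
        {"role": "assistant", "content": assistant_reply},
        {"role": "user", "content": next_user_turn},
    ]

history = [
    {"role": "system", "content": "You are a helpful math tutor."},
    {"role": "user", "content": "What is 25 * 4?"},
]
# after receiving the assistant's reply, build the next request's messages:
history = extend_history(history, "25 * 4 = 100.", "And 100 / 8?")
```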
Compatibility notes

Supported user message content parts are {"type": "text", "text": "..."} and {"type": "image_url", "image_url": {"url": "..."}}. File parts ({"type": "file", "file": {...}}) are accepted, but file_id is not supported and will return an error; the input_audio type returns a 400 error. Unsupported input generally returns a 400 with invalid_request_error. Assistant messages may carry content (text) and/or tool_calls; each tool call has an id and a function with a name. Tool messages must include a tool_call_id matching a previous assistant tool call.

Errors

Errors are returned in the following envelope:

{
"error": {
"message": "Error description",
"type": "invalid_request_error",
"code": "error_code",
"param": "field_name"
}
}
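A client can surface this envelope as an exception. A sketch; the ChatAPIError class and check_response helper are illustrative, not part of the API:

```python
class ChatAPIError(Exception):
    """Raised when a response body carries the documented error envelope."""
    def __init__(self, message, error_type, code, param=None):
        super().__init__(message)
        self.error_type = error_type
        self.code = code
        self.param = param

def check_response(body):
    """Return the body unchanged, or raise ChatAPIError for an error envelope."""
    if isinstance(body, dict) and "error" in body:
        err = body["error"]
        raise ChatAPIError(err.get("message", ""), err.get("type"),
                           err.get("code"), err.get("param"))
    return body

try:
    check_response({"error": {
        "message": "This API key does not have access to model 'gpt-5.2'",
        "type": "invalid_request_error",
        "code": "model_not_allowed",
    }})
except ChatAPIError as e:
    print(e.code)  # -> model_not_allowed
```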
Error codes:

- invalid_request_error: Invalid request parameters
- model_not_allowed: API key lacks access to requested model
- no_accounts: No upstream accounts available
- upstream_error: Upstream service error

Streaming responses are always terminated with a final data: [DONE] event.
If an API key has allowed_models configured, only those models can be used. Requests for other models return:
{
"error": {
"message": "This API key does not have access to model 'gpt-5.2'",
"type": "invalid_request_error",
"code": "model_not_allowed"
}
}
The models available to a key can be listed via /v1/models.