Chat · OpenAI
Chat Completions
Generate conversational responses using OpenAI language models. Supports streaming, function calling, and multimodal inputs.
/v1/chat/completions

Supported Models
| Model | Provider |
|---|---|
| gpt-4o | OpenAI |
| gpt-4o-mini | OpenAI |
| gpt-4.1 | OpenAI |
| gpt-4.1-mini | OpenAI |
| gpt-4.1-nano | OpenAI |
| o4-mini | OpenAI |
Request
Body Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| model | string | Yes | Model ID to use for the completion |
| messages | array | Yes | Array of message objects with role and content |
| temperature | number | No | Sampling temperature (0-2). Default: 1 |
| max_tokens | integer | No | Maximum number of tokens to generate |
| top_p | number | No | Nucleus sampling parameter. Default: 1 |
| stream | boolean | No | Enable server-sent events streaming. Default: false |
| stop | string \| string[] | No | Stop sequences that halt generation |
| frequency_penalty | number | No | Frequency penalty (-2 to 2). Default: 0 |
| presence_penalty | number | No | Presence penalty (-2 to 2). Default: 0 |
| tools | array | No | List of tools (functions) the model can call |
| tool_choice | string \| object | No | Controls tool selection. Options: auto, none, required |
| response_format | object | No | Forces the output format. Options: text, json_object |
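To illustrate how the tools and tool_choice parameters fit together, here is a sketch of a function-calling request payload. The get_weather function, its schema, and the question are illustrative examples, not part of the API.

```python
# Hypothetical tool definition following the OpenAI-style function schema.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",  # example name, not a built-in tool
            "description": "Get the current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]

# Request body combining tools with tool_choice; "auto" lets the model
# decide whether to call a tool or answer directly.
request = {
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "What's the weather in Paris?"}],
    "tools": tools,
    "tool_choice": "auto",
}
```

When the model decides to call a tool, the assistant message in the response carries a tool_calls array instead of plain text content.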
Message Object
Fields
| Field | Type | Required | Description |
|---|---|---|---|
| role | string | Yes | Message role. Options: system, user, assistant, tool |
| content | string \| array | Yes | Message text or multimodal content array |
| name | string | No | Optional participant name |
| tool_calls | array | No | Tool calls made by the assistant |
| tool_call_id | string | No | ID of the tool call this message responds to |
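When content is an array rather than a string, each element is a typed part. A multimodal user message mixing text and an image might look like the following sketch (the part shapes follow the OpenAI chat format; the URL is a placeholder):

```python
# A user message whose content is a multimodal array: one text part
# and one image part referenced by URL (placeholder URL).
message = {
    "role": "user",
    "content": [
        {"type": "text", "text": "What is in this image?"},
        {
            "type": "image_url",
            "image_url": {"url": "https://example.com/photo.png"},
        },
    ],
}
```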
```bash
curl https://api.metriqual.com/v1/chat/completions \
  -H "Authorization: Bearer mql_your_key" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o",
    "messages": [
      {"role": "system", "content": "You are a helpful assistant."},
      {"role": "user", "content": "What is quantum computing?"}
    ],
    "temperature": 0.7,
    "max_tokens": 500
  }'
```

```typescript
import Metriqual from '@metriqual/sdk';

const mql = new Metriqual({ apiKey: 'mql_your_key' });

const response = await mql.chat.create({
  model: 'gpt-4o',
  messages: [
    { role: 'system', content: 'You are a helpful assistant.' },
    { role: 'user', content: 'What is quantum computing?' }
  ],
  temperature: 0.7,
  max_tokens: 500
});

console.log(response.choices[0].message.content);
```

```python
from metriqual import MQL

mql = MQL(api_key="mql_your_key")

response = mql.chat.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is quantum computing?"}
    ],
    temperature=0.7,
    max_tokens=500,
)

print(response["choices"][0]["message"]["content"])
```

Response
Response Fields
| Field | Type | Description |
|---|---|---|
| id | string | Unique completion ID |
| object | string | Always "chat.completion" |
| created | integer | Unix timestamp of creation |
| model | string | Model used for the completion |
| choices | array | Array of completion choices |
| usage | object | Token usage statistics |
```json
{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "created": 1705320000,
  "model": "gpt-4o",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Quantum computing is a type of computation..."
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 25,
    "completion_tokens": 150,
    "total_tokens": 175
  }
}
```

Streaming
Set stream: true to receive incremental responses via Server-Sent Events. Each chunk contains a delta with partial content.
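A single streamed chunk looks roughly like this; the field shapes follow the OpenAI chat.completion.chunk format, and the values here are illustrative:

```json
{
  "id": "chatcmpl-abc123",
  "object": "chat.completion.chunk",
  "created": 1705320000,
  "model": "gpt-4o",
  "choices": [
    {
      "index": 0,
      "delta": { "content": "Once" },
      "finish_reason": null
    }
  ]
}
```

The final chunk carries an empty delta and a non-null finish_reason; concatenating the delta content of every chunk reproduces the full message.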
```bash
curl https://api.metriqual.com/v1/chat/completions \
  -H "Authorization: Bearer mql_your_key" \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-4o", "messages": [{"role": "user", "content": "Tell me a story"}], "stream": true}'
```

```typescript
// Async iterator
for await (const chunk of mql.chat.stream({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Tell me a story' }]
})) {
  process.stdout.write(chunk.choices[0]?.delta?.content || '');
}

// Or collect the full response
const { text } = await mql.chat.streamToCompletion({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Tell me a story' }]
});
console.log(text);
```

```python
# Iterator
for chunk in mql.chat.stream(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Tell me a story"}]
):
    delta = chunk["choices"][0]["delta"]
    print(delta.get("content", ""), end="")

# Or collect the full response
result = mql.chat.stream_to_completion(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Tell me a story"}]
)
print(result["text"])
```

Simple Completion Helper
Use the complete() method for a one-line call that returns just the text response.
```typescript
const text = await mql.chat.complete(
  [{ role: 'user', content: 'Explain gravity in one sentence' }],
  { model: 'gpt-4o-mini' }
);
console.log(text); // "Gravity is the force..."
```

```python
text = mql.chat.complete(
    [{"role": "user", "content": "Explain gravity in one sentence"}],
    model="gpt-4o-mini"
)
print(text)  # "Gravity is the force..."
```

SDK Method Reference
| Method | TypeScript | Python |
|---|---|---|
| Create completion | mql.chat.create(request) | mql.chat.create(**kwargs) |
| Stream chunks | mql.chat.stream(request) | mql.chat.stream(**kwargs) |
| Stream to text | mql.chat.streamToCompletion(request) | mql.chat.stream_to_completion(**kwargs) |
| Simple text | mql.chat.complete(messages, opts?) | mql.chat.complete(messages, **kwargs) |