Audio Translation
Translate audio in any language to English text using OpenAI's Whisper model.
POST
/v1/audio/translations

Supported Models
| Model | Provider |
|---|---|
| whisper-1 | OpenAI |
The translation endpoint always translates to English. For speech-to-text in the original language, use the Transcription endpoint instead.
Request
Multipart Form Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| file | file | yes | Audio file to translate (mp3, mp4, mpeg, mpga, m4a, wav, webm; max 25 MB) |
| model | string | yes | Model ID; use "whisper-1" |
| prompt | string | no | Hint text to guide the translation style |
| response_format | string | no | Output format. Default: json. Options: json, text, srt, verbose_json, vtt |
| temperature | number | no | Sampling temperature (0 to 1). Default: 0 |
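Because the endpoint rejects unsupported formats and files over 25 MB, it can be worth validating locally before uploading. A minimal sketch; the helper name is illustrative, and the limits are the ones documented in the table above:

```python
import os

# Limits as documented on this page (verify against current API docs)
ALLOWED_EXTENSIONS = {".mp3", ".mp4", ".mpeg", ".mpga", ".m4a", ".wav", ".webm"}
MAX_BYTES = 25 * 1024 * 1024  # 25 MB

def validate_audio_file(path: str) -> None:
    """Raise ValueError if the file would be rejected by the endpoint."""
    ext = os.path.splitext(path)[1].lower()
    if ext not in ALLOWED_EXTENSIONS:
        raise ValueError(f"unsupported format: {ext}")
    if os.path.getsize(path) > MAX_BYTES:
        raise ValueError("file exceeds the 25 MB limit")
```

Checking the extension before the size means a wrongly named file fails fast without touching the filesystem.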
cURL
```bash
curl https://api.metriqual.com/v1/audio/translations \
  -H "Authorization: Bearer mql_your_key" \
  -F file=@german_audio.mp3 \
  -F model=whisper-1
```

TypeScript SDK
```typescript
const result = await mql.audio.translate({
  file: audioFile,
  model: 'whisper-1',
});
console.log(result.text); // English translation
```

Python SDK
```python
with open("german_audio.mp3", "rb") as f:
    result = mql.audio.translate(file=f, model="whisper-1")
print(result["text"])  # English translation
```

Response
Returns the translated English text. Use response_format=verbose_json to also receive the detected source language, audio duration, and timestamped segments.
200
json (default)
```json
{
  "text": "Hello, how are you? I am doing well, thank you."
}
```

200
verbose_json
```json
{
  "text": "Hello, how are you?",
  "language": "german",
  "duration": 4.21,
  "segments": [
    {
      "id": 0,
      "start": 0.0,
      "end": 4.21,
      "text": " Hello, how are you?",
      "temperature": 0
    }
  ]
}
```
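The segments in a verbose_json response carry start and end times in seconds, so subtitle-style lines can be produced locally without another request. A minimal sketch using the sample payload above; `fmt_timestamp` and `segments_to_lines` are illustrative helpers, not part of the SDK:

```python
def fmt_timestamp(seconds: float) -> str:
    """Format seconds as an SRT-style HH:MM:SS,mmm timestamp."""
    ms = round(seconds * 1000)
    h, rem = divmod(ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def segments_to_lines(response: dict) -> list[str]:
    """Render each verbose_json segment as 'start --> end  text'."""
    return [
        f"{fmt_timestamp(seg['start'])} --> {fmt_timestamp(seg['end'])}  {seg['text'].strip()}"
        for seg in response["segments"]
    ]

# Sample verbose_json payload, as shown above
sample = {
    "text": "Hello, how are you?",
    "segments": [
        {"id": 0, "start": 0.0, "end": 4.21, "text": " Hello, how are you?"}
    ],
}
print(segments_to_lines(sample)[0])
# 00:00:00,000 --> 00:00:04,210  Hello, how are you?
```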