Embeddings

Create Embeddings

Generate vector embeddings from text for semantic search, clustering, and similarity tasks.

POST /v1/embeddings

Supported Models

Model                    Provider
text-embedding-3-small   OpenAI
text-embedding-3-large   OpenAI
text-embedding-ada-002   OpenAI

Request

Body Parameters

model (string, required)

ID of the embedding model to use.

input (string | string[], required)

Text to embed. Can be a single string or array of strings.

encoding_format (string, optional)

Format in which the embedding vectors are returned.

Default: float

Options: float, base64

dimensions (integer, optional)

Number of dimensions for the output embeddings. Supported only by text-embedding-3 models.
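When encoding_format is base64, the embedding field arrives as a base64 string instead of a float array. A minimal decoding sketch, assuming the bytes are packed as little-endian 32-bit floats (the convention used by OpenAI-compatible embedding APIs; verify against an actual response):

```python
import base64
import struct

def decode_embedding(b64: str) -> list[float]:
    """Decode a base64-encoded embedding into a list of floats.

    Assumes little-endian float32 packing.
    """
    raw = base64.b64decode(b64)
    count = len(raw) // 4  # 4 bytes per float32
    return list(struct.unpack(f"<{count}f", raw))

# Round-trip with a small stand-in vector (values chosen to be
# exactly representable in float32, so the round-trip is exact)
vec = [0.25, -0.5, 1.0]
encoded = base64.b64encode(struct.pack("<3f", *vec)).decode()
print(decode_embedding(encoded))  # [0.25, -0.5, 1.0]
```

base64 encoding roughly halves payload size for large batches compared to a JSON float array.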

cURL
curl https://api.metriqual.com/v1/embeddings \
  -H "Authorization: Bearer mql_your_key" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "text-embedding-3-small",
    "input": "The quick brown fox jumps over the lazy dog"
  }'
SDK
const result = await mql.embeddings.create({
  model: 'text-embedding-3-small',
  input: 'The quick brown fox jumps over the lazy dog'
});

console.log(result.data[0].embedding);
// [0.0023064255, -0.009327292, ...]
Batch Embeddings
const result = await mql.embeddings.create({
  model: 'text-embedding-3-small',
  input: [
    'First document text',
    'Second document text',
    'Third document text'
  ]
});

// result.data contains one embedding per input
result.data.forEach((item, i) => {
  console.log(`Doc ${i}: ${item.embedding.length} dimensions`);
});
Python SDK
result = mql.embeddings.create(
    input="The quick brown fox jumps over the lazy dog",
    model="text-embedding-3-small",
)
print(result["data"][0]["embedding"][:5])

# Batch embedding
result = mql.embeddings.create(
    input=["First doc", "Second doc", "Third doc"],
    model="text-embedding-3-small",
)
for item in result["data"]:
    print(f"Doc {item['index']}: {len(item['embedding'])} dims")

# With custom dimensions (passes the documented `dimensions` body parameter;
# text-embedding-3 models only)
result = mql.embeddings.create(
    input="Hello",
    model="text-embedding-3-large",
    dimensions=256,
)
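For the semantic-search and similarity tasks mentioned above, embeddings are typically compared with cosine similarity: values close to 1 mean the texts point in nearly the same direction in embedding space. A minimal pure-Python sketch using toy vectors in place of real API responses (in practice you would use numpy or a vector database):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy vectors standing in for embeddings returned by the API
query = [0.1, 0.3, 0.5]
docs = {
    "doc_a": [0.1, 0.29, 0.51],  # nearly the same direction as the query
    "doc_b": [-0.5, 0.2, -0.1],  # points away from the query
}

# Rank documents by similarity to the query, most similar first
ranked = sorted(docs, key=lambda d: cosine_similarity(query, docs[d]), reverse=True)
print(ranked)  # ['doc_a', 'doc_b']
```

To search a corpus, embed each document once with a batch request, store the vectors, then embed each incoming query and rank stored vectors by similarity as above.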

Response

200
{
  "object": "list",
  "data": [
    {
      "object": "embedding",
      "index": 0,
      "embedding": [0.0023064255, -0.009327292, ...]
    }
  ],
  "model": "text-embedding-3-small",
  "usage": {
    "prompt_tokens": 8,
    "total_tokens": 8
  }
}