Model Management
Discover and query available AI models and their capabilities across all providers.
List Models
Returns the complete list of currently enabled AI models with their capabilities and limits.
Endpoint: https://apis.threatwinds.com/api/ai/v1/models
Method: GET
Parameters
Headers
| Header | Type | Required | Description |
|---|---|---|---|
| Authorization | string | Optional* | Bearer token for session authentication |
| api-key | string | Optional* | API key for key-based authentication |
| api-secret | string | Optional* | API secret for key-based authentication |
Note: Use either the Authorization header or the api-key/api-secret combination; each field is optional on its own, but one of the two authentication methods is required.
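The two authentication modes can be captured in a small helper. This is a minimal sketch (the header names come from the table above; `auth_headers` and its arguments are illustrative, not part of the API):

```python
def auth_headers(token=None, api_key=None, api_secret=None):
    """Build request headers for either session- or key-based authentication."""
    headers = {"accept": "application/json"}
    if token:
        headers["Authorization"] = f"Bearer {token}"
    elif api_key and api_secret:
        headers["api-key"] = api_key
        headers["api-secret"] = api_secret
    else:
        raise ValueError("Provide a bearer token or an api-key/api-secret pair")
    return headers
```

Pass the resulting dict as the headers of your HTTP client of choice.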
Request
curl -X 'GET' \
'https://apis.threatwinds.com/api/ai/v1/models' \
-H 'accept: application/json' \
-H 'Authorization: Bearer <token>'
Response
Success Response (200 OK)
{
"object": "list",
"data": [
{
"id": "gpt-5",
"object": "model",
"name": "GPT-5",
"provider": "openai",
"owned_by": "OpenAI",
"created": 2024,
"capabilities": [
"chat",
"tools-use",
"reasoning",
"code-generation",
"image"
],
"limits": {
"max_input_tokens": 1050000,
"max_total_tokens": 1280000
},
"params": {
"temperature": 1.0,
"top_p": 1.0,
"seed": null,
"frequency_penalty": 0.0,
"presence_penalty": 0.0,
"max_completion_tokens": 128000
}
}
]
}
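The list payload is straightforward to consume client-side. A minimal Python sketch, using an abbreviated copy of the sample response above, indexes models by id and checks capabilities with exact matching:

```python
import json

# Abbreviated payload shaped like the /models response above.
payload = json.loads("""
{
  "object": "list",
  "data": [
    {"id": "gpt-5", "provider": "openai",
     "capabilities": ["chat", "tools-use", "reasoning", "code-generation", "image"],
     "limits": {"max_input_tokens": 1050000, "max_total_tokens": 1280000}}
  ]
}
""")

# Index models by id for O(1) lookup.
models = {m["id"]: m for m in payload["data"]}

def supports(model_id, capability):
    """Exact-match capability check (avoids substring pitfalls)."""
    return capability in models[model_id]["capabilities"]
```

Exact membership (`in` on the list) is deliberate: substring-style matching would conflate related capability names such as "embeddings" and "vision-embeddings".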
Response Schema
| Field | Type | Description |
|---|---|---|
| object | string | Response type, always "list" |
| data | array | Array of model objects |
| data[].id | string | Unique model identifier |
| data[].name | string | Human-readable model name |
| data[].provider | string | Provider ID: openai, gemini, claude, threatwinds (collapses all self-hosted backends) |
| data[].owned_by | string | Organization or developer |
| data[].created | integer | Year of release |
| data[].capabilities | array | Supported features |
| data[].limits | object | Token constraints |
| data[].limits.max_input_tokens | integer | Maximum input context size in tokens |
| data[].limits.max_total_tokens | integer | Combined input and output token limit |
| data[].params | object | Default model parameters (optional) |
Capability Values
| Value | Description |
|---|---|
| chat | Conversational text interaction |
| text-generation | General-purpose text generation |
| code-generation | Programming code creation and completion |
| tools-use | Function calling capability |
| reasoning | Step-by-step reasoning support |
| image | Vision/image understanding |
| transcription | Speech-to-text |
| speech | Text-to-speech synthesis |
| embeddings | Vector embedding generation |
| vision-embeddings | Multimodal embeddings |
Error Codes
| Status Code | Description | Cause |
|---|---|---|
| 200 | OK | Success |
| 400 | Bad Request | Invalid parameters |
| 401 | Unauthorized | Missing/invalid credentials |
| 403 | Forbidden | Insufficient permissions |
Get Model Details
Retrieve detailed information about a specific model by ID.
Endpoint: https://apis.threatwinds.com/api/ai/v1/models/{id}
Method: GET
Path Parameters
| Parameter | Type | Required | Description | Example |
|---|---|---|---|---|
| id | string | Yes | Model identifier | gpt-5 |
Request
curl -X 'GET' \
'https://apis.threatwinds.com/api/ai/v1/models/silas-1.0' \
-H 'Authorization: Bearer <token>'
Response
{
"id": "silas-1.0",
"object": "model",
"name": "Silas 1.0",
"provider": "threatwinds",
"owned_by": "ThreatWinds",
"created": 2025,
"capabilities": ["chat", "tools-use", "text-generation", "code-generation"],
"limits": {
"max_input_tokens": 121072,
"max_total_tokens": 131072
},
"params": {
"temperature": 0.7,
"top_p": 0.8,
"seed": null,
"max_completion_tokens": 10000
}
}
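A model's limits object can be used to validate a request before sending it. This is a sketch that assumes token counts are computed elsewhere (tokenization is provider-specific); `fits_limits` is an illustrative helper, not part of the API:

```python
def fits_limits(limits, input_tokens, completion_tokens):
    """Check a planned request against a model's token limits.

    `limits` is the model's "limits" object from the API response;
    token counts are assumed to be computed by a separate tokenizer.
    """
    if input_tokens > limits["max_input_tokens"]:
        return False
    return input_tokens + completion_tokens <= limits["max_total_tokens"]

# Using the silas-1.0 limits from the response above:
silas_limits = {"max_input_tokens": 121072, "max_total_tokens": 131072}
```

Note that both checks matter: a prompt can fit the input limit yet still exceed the combined input+output budget once the requested completion is added.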
Error Codes
| Status Code | Description | Cause |
|---|---|---|
| 200 | OK | Model details returned successfully |
| 404 | Not Found | The specified model ID does not exist in any provider configuration |
Filtering Models Programmatically
Use jq or similar tools to filter the /models response for your needs.
Filter by Provider
Get all models from a specific provider:
# OpenAI models only
curl -s 'https://apis.threatwinds.com/api/ai/v1/models' \
-H 'Authorization: Bearer <token>' | \
jq '.data[] | select(.provider == "openai")'
# Self-hosted ThreatWinds models
curl -s 'https://apis.threatwinds.com/api/ai/v1/models' \
-H 'Authorization: Bearer <token>' | \
jq '.data[] | select(.provider == "threatwinds")'
Filter by Capability
Find models supporting specific features. Use jq's index() for exact matches; contains() performs substring matching on strings, so contains(["embeddings"]) would also match "vision-embeddings":
# Models with vision/image capability
curl -s 'https://apis.threatwinds.com/api/ai/v1/models' \
-H 'Authorization: Bearer <token>' | \
jq '.data[] | select(.capabilities | index("image"))'
# Models supporting tool calls
curl -s 'https://apis.threatwinds.com/api/ai/v1/models' \
-H 'Authorization: Bearer <token>' | \
jq '.data[] | select(.capabilities | index("tools-use"))'
# Embedding models only
curl -s 'https://apis.threatwinds.com/api/ai/v1/models' \
-H 'Authorization: Bearer <token>' | \
jq '.data[] | select(.capabilities | index("embeddings"))'
# Audio transcription models
curl -s 'https://apis.threatwinds.com/api/ai/v1/models' \
-H 'Authorization: Bearer <token>' | \
jq '.data[] | select(.capabilities | index("transcription"))'
Filter by Context Window
Models with large context windows:
# Models with > 100K input tokens
curl -s 'https://apis.threatwinds.com/api/ai/v1/models' \
-H 'Authorization: Bearer <token>' | \
jq '.data[] | select(.limits.max_input_tokens > 100000)'
# Sort by context size (largest first)
curl -s 'https://apis.threatwinds.com/api/ai/v1/models' \
-H 'Authorization: Bearer <token>' | \
jq '.data | sort_by(-.limits.max_input_tokens) | .[0:5]'
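The same filtering works client-side without jq. A sketch over an already-fetched model list (sample values taken from the responses above; `top_by_context` is an illustrative helper):

```python
# `models` stands in for the "data" array of a /models response.
models = [
    {"id": "gpt-5", "limits": {"max_input_tokens": 1050000}},
    {"id": "silas-1.0", "limits": {"max_input_tokens": 121072}},
]

def top_by_context(models, n=5):
    """Return the n models with the largest input context, biggest first."""
    return sorted(models,
                  key=lambda m: m["limits"]["max_input_tokens"],
                  reverse=True)[:n]

# Models with more than 100K input tokens.
large = [m for m in models if m["limits"]["max_input_tokens"] > 100000]
```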
Extract Useful Fields
Create clean lists for UI or CLI usage:
# Just model IDs
curl -s 'https://apis.threatwinds.com/api/ai/v1/models' \
-H 'Authorization: Bearer <token>' | \
jq -r '.data[].id'
# Display name and provider
curl -s 'https://apis.threatwinds.com/api/ai/v1/models' \
-H 'Authorization: Bearer <token>' | \
jq -r '.data[] | "\(.name) (\(.provider))"'
# Chat-capable models with context info
curl -s 'https://apis.threatwinds.com/api/ai/v1/models' \
-H 'Authorization: Bearer <token>' | \
jq -r '.data[] | select(.capabilities | index("chat")) | "\(.id): \(.limits.max_input_tokens) tokens"'
Model Discovery Best Practices
- Query at Runtime: Call /models during application initialization to get the current catalog
- Cache Appropriately: Cache model listings, but refresh periodically as providers add or deprecate models
- Check Capabilities First: Always verify the .capabilities array before using a model for a specific task
- Respect Token Limits: Read .limits to ensure your use case fits within the model's constraints
- Handle Unknown Errors: Providers may have temporary outages; implement retry logic
- Provider-Specific Behavior: Check .params for each model's default parameter values (e.g., temperature)
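The retry advice above can be sketched with simple exponential backoff. This is a generic pattern, not a documented client feature; the flaky callable is whatever function performs the actual HTTP request:

```python
import time

def with_retries(fn, attempts=3, base_delay=0.5):
    """Call fn(), retrying on any exception with exponential backoff.

    Delays grow as base_delay * 2**attempt; the last failure is re-raised.
    """
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))
```

In production you would typically retry only on transient errors (timeouts, 5xx responses) rather than on every exception.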
Provider Overview
| Provider | Type | Typical Use Cases |
|---|---|---|
| openai | External API | General chat, complex reasoning, vision tasks |
| gemini | External API | Long-context processing, cost-effective inference |
| claude | External API | Anthropic Claude Opus/Sonnet for reasoning and coding (no token counting) |
| threatwinds | Self-hosted | All ThreatWinds-hosted models: chat (Silas and open-weight chat models), text and vision embeddings, and audio (stt/tts). Use the owned_by field to identify the original model maintainer. |
Note: Active providers vary based on deployment configuration. Always check /models for real-time availability.