Update model configs #103 (2026-02-20)
Conversation
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 042d61f027
ℹ️ About Codex in GitHub
Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you
- Open a pull request for review
- Mark a draft as ready
- Comment "@codex review".
If Codex has suggestions, it will comment; otherwise it will react with 👍.
Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".
max_input_tokens: 128000
max_output_tokens: 16384
mode: chat
model: gpt-4o
Remove duplicated azure-openai provider namespace
This commit adds a full providers/azure-openai tree while the existing providers/azure-open-ai tree is still present, and both now contain the same 78 model IDs (for example gpt-4o here), so scripts/build-unified-json.ts will export two provider names for the same Azure OpenAI models instead of one canonical namespace. That duplication creates conflicting provider IDs in downstream consumers and makes model discovery and filtering unstable, because clients can see two entries for the same model under different provider slugs.
Useful? React with 👍 / 👎.
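One way to guard against this class of regression is a pre-build check that collapses provider directory names to a canonical form and flags collisions. The sketch below is a hypothetical helper, not part of scripts/build-unified-json.ts; the function names and the separator-stripping rule are assumptions.

```typescript
// Hypothetical duplicate-namespace check. Collapses separators so that
// "azure-openai" and "azure-open-ai" map to the same canonical slug.
function canonicalSlug(provider: string): string {
  // Lowercase and drop "-", "_", "." so spelling variants collide.
  return provider.toLowerCase().replace(/[-_.]/g, "");
}

// Returns groups of provider names that share one canonical slug.
function findDuplicateProviders(providers: string[]): string[][] {
  const bySlug = new Map<string, string[]>();
  for (const p of providers) {
    const key = canonicalSlug(p);
    const group = bySlug.get(key) ?? [];
    group.push(p);
    bySlug.set(key, group);
  }
  return [...bySlug.values()].filter((group) => group.length > 1);
}
```

Run against the directory listing of the providers tree, this would report `["azure-openai", "azure-open-ai"]` as one group, letting CI fail the sync job before the duplicate namespace reaches downstream consumers.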
costs:
  - input_cost_per_token: 2.e-7
    output_cost_per_token: 2.e-7
    region: "*"
features:
Restore Together Turbo pricing entries
The pricing block was removed from this model (and similarly from Qwen2.5-7B-Instruct-Turbo, Llama-3.2-3B-Instruct-Turbo, and Mistral-Small-24B-Instruct-2501) without replacement cost fields, so these active chat configs lose the input_cost_per_token/output_cost_per_token metadata that existed before this commit. Any downstream cost estimation or budget guardrail that reads this registry will regress to unknown or zero pricing for these models.
Useful? React with 👍 / 👎.
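This regression could also be caught mechanically by validating that every active chat config carries pricing. The sketch below is a hypothetical validator; the `ModelConfig` shape is an assumption inferred from the YAML quoted above, not the registry's actual schema.

```typescript
// Hypothetical config shape, based on the fields visible in this PR.
interface CostEntry {
  input_cost_per_token?: number;
  output_cost_per_token?: number;
}

interface ModelConfig {
  model: string;
  mode: string;
  costs?: CostEntry[];
}

// Returns the model IDs of chat configs that lack a complete cost entry.
function missingPricing(configs: ModelConfig[]): string[] {
  return configs
    .filter((c) => c.mode === "chat")
    .filter(
      (c) =>
        !c.costs?.some(
          (cost) =>
            cost.input_cost_per_token !== undefined &&
            cost.output_cost_per_token !== undefined
        )
    )
    .map((c) => c.model);
}
```

Wiring a check like this into the model-sync job would make the build fail when a sync drops pricing from an active config, instead of silently shipping zero-cost models.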
Summary
This PR updates the AI model configurations with the latest data.
Generated automatically by the model-sync job.