
Update model configs [2026-02-20T00:00:27.403Z] #103

Open

nikp1172 wants to merge 2 commits into main from model-sync-update-2026-02-20T00-00-27-450Z

Conversation

@nikp1172
Contributor

Summary

This PR updates the AI model configurations with the latest data.

  • Models exported: 1336
  • Files in commit: 1351
  • Generated: 2026-02-20T00:00:27.403Z

Generated automatically by the model-sync job.


@chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 042d61f027

ℹ️ About Codex in GitHub

Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".

max_input_tokens: 128000
max_output_tokens: 16384
mode: chat
model: gpt-4o


P2: Remove duplicated azure-openai provider namespace

This commit adds a full providers/azure-openai tree while the existing providers/azure-open-ai tree is still present, and both now contain the same 78 model IDs (for example gpt-4o here). As a result, scripts/build-unified-json.ts will export two provider names for the same Azure OpenAI models instead of one canonical namespace. That duplication creates conflicting provider IDs in downstream consumers and makes model discovery and filtering unstable, because clients can see two entries for the same model under different provider slugs.
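A minimal sketch of a CI guard that would catch this class of regression, assuming the providers/<slug>/<model>.yaml layout implied above (the script name and exact check are hypothetical, not part of this repo):

```ts
// check-duplicate-providers.ts (hypothetical) -- flag two provider slugs that
// declare exactly the same set of model IDs, e.g. azure-openai vs azure-open-ai.
import { readdirSync } from "node:fs";
import { join } from "node:path";

const PROVIDERS_DIR = "providers";

// Map each provider slug to the set of model IDs it declares
// (assumes one YAML file per model, named after the model ID).
const slugs = readdirSync(PROVIDERS_DIR, { withFileTypes: true })
  .filter((entry) => entry.isDirectory())
  .map((entry) => entry.name);

const modelsBySlug = new Map<string, Set<string>>();
for (const slug of slugs) {
  const ids = readdirSync(join(PROVIDERS_DIR, slug))
    .filter((file) => file.endsWith(".yaml"))
    .map((file) => file.replace(/\.yaml$/, ""));
  modelsBySlug.set(slug, new Set(ids));
}

// Any pair of slugs with identical, non-empty model sets is almost certainly a
// duplicated namespace and should fail the build before the unified JSON is generated.
for (let i = 0; i < slugs.length; i++) {
  for (let j = i + 1; j < slugs.length; j++) {
    const a = modelsBySlug.get(slugs[i])!;
    const b = modelsBySlug.get(slugs[j])!;
    if (a.size > 0 && a.size === b.size && [...a].every((id) => b.has(id))) {
      console.error(
        `Duplicate provider namespace: ${slugs[i]} and ${slugs[j]} declare the same ${a.size} model IDs`,
      );
      process.exitCode = 1;
    }
  }
}
```

Run as a pre-build step; once one slug is picked as canonical, a check like this keeps the other from reappearing in a future sync.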

Useful? React with 👍 / 👎.

costs:
  - input_cost_per_token: 2.e-7
    output_cost_per_token: 2.e-7
    region: "*"
features:


P2: Restore Together Turbo pricing entries

The pricing block was removed from this model (and similarly from Qwen2.5-7B-Instruct-Turbo, Llama-3.2-3B-Instruct-Turbo, and Mistral-Small-24B-Instruct-2501) without adding replacement cost fields, so these active chat configs lose the input_cost_per_token/output_cost_per_token metadata that existed before this commit. Any downstream cost estimation or budget guardrails that read this registry will regress to unknown or zero pricing for these models.
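One way to keep this from regressing silently is a small validation pass over the parsed configs. This sketch is based only on the field names visible in this diff (mode, costs with input_cost_per_token/output_cost_per_token); the ModelConfig shape and function names are assumptions, not this repo's actual types:

```ts
// validate-pricing.ts (hypothetical) -- require active chat configs to keep at
// least one fully priced costs entry.
interface CostEntry {
  input_cost_per_token?: number;
  output_cost_per_token?: number;
  region?: string;
}

interface ModelConfig {
  model: string;
  mode?: string;
  costs?: CostEntry[];
}

// Return a problem message when a chat model has no costs entry with both
// per-token prices, so cost estimation cannot silently fall back to zero.
function missingPricing(cfg: ModelConfig): string[] {
  if (cfg.mode !== "chat") return [];
  const priced = (cfg.costs ?? []).some(
    (c) =>
      typeof c.input_cost_per_token === "number" &&
      typeof c.output_cost_per_token === "number",
  );
  return priced ? [] : [`${cfg.model}: missing input/output cost per token`];
}

// Example: a config like the ones named above fails once its costs block is dropped.
const problems: string[] = [
  { model: "Qwen2.5-7B-Instruct-Turbo", mode: "chat" },
].flatMap(missingPricing);

if (problems.length > 0) {
  console.error(problems.join("\n"));
  process.exitCode = 1;
}
```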

Useful? React with 👍 / 👎.

