
feat(zai): update GLM-5 model specs with correct pricing and token limits #11479

Draft
roomote[bot] wants to merge 1 commit into main from feature/glm-5-correct-specs

Conversation

roomote bot commented Feb 15, 2026

Related GitHub Issue

Closes: #11438

Description

This PR attempts to address Issue #11438 by updating the GLM-5 model entries in the Z.ai provider with correct specs from the official docs, as provided by contributors in the issue comments.

The previous PR #11443 had merge conflicts, so this is a fresh implementation on current main.

International (z.ai) GLM-5 changes:

  • maxTokens: 16,384 -> 128,000 (128k max output per docs)
  • contextWindow: 202,752 -> 200,000 (200k per docs)
  • inputPrice: $0.60 -> $1.00 per 1M tokens
  • outputPrice: $2.20 -> $3.20 per 1M tokens
  • cacheReadsPrice: $0.11 -> $0.20 per 1M tokens
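
To make the international values concrete, here is a minimal sketch of the updated entry, with prices in USD per 1M tokens; the field names are the ones listed above, but the surrounding object shape is an assumption rather than a copy of the actual diff:

    // Updated international GLM-5 values from this PR (entry shape assumed;
    // prices are USD per 1M tokens).
    const glm5International = {
        maxTokens: 128_000, // was 16_384
        contextWindow: 200_000, // was 202_752
        inputPrice: 1.0, // was 0.6
        outputPrice: 3.2, // was 2.2
        cacheReadsPrice: 0.2, // was 0.11
    }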

Mainland China (bigmodel.cn) GLM-5 changes:

  • maxTokens: 16,384 -> 128,000
  • contextWindow: 202,752 -> 200,000
  • Pricing scaled proportionally using the same international/mainland ratio as GLM-4.7
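
Because the mainland prices are derived rather than quoted, a minimal sketch of the ratio transfer may help; the function and parameter names below are illustrative, and the GLM-4.7 prices are the ones already defined in the model tables rather than values hard-coded here:

    // Apply the mainland/international price ratio observed for GLM-4.7 to a
    // GLM-5 international price (sketch only; names are illustrative).
    function scaleToMainland(
        glm5InternationalPrice: number,
        glm47InternationalPrice: number,
        glm47MainlandPrice: number,
    ): number {
        const ratio = glm47MainlandPrice / glm47InternationalPrice
        return glm5InternationalPrice * ratio
    }

    // Example: the mainland GLM-5 input price derived from the $1.00 international price.
    // scaleToMainland(1.0, glm47IntlInputPrice, glm47MainlandInputPrice)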

Features (already correctly configured, unchanged):

  • Thinking/reasoning mode: Yes (supportsReasoningEffort)
  • Prompt caching: Yes
  • Image support: No (text only)
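
In the same sketch notation, those unchanged flags amount to the following; supportsReasoningEffort is named in this PR, while the other two field names are assumed to match the existing model table:

    // Feature flags already present on the GLM-5 entry (unchanged by this PR).
    const glm5Features = {
        supportsReasoningEffort: true, // thinking/reasoning mode
        supportsPromptCache: true, // prompt caching
        supportsImages: false, // text only, no image input
    }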

Feedback and guidance are welcome.

Test Procedure

  • Ran npx vitest run api/providers/__tests__/zai.spec.ts from the src/ directory -- all 33 tests pass
  • Full lint passed via pre-commit hooks (turbo lint, 14/14 packages)
  • Full type-check passed via pre-push hooks (turbo check-types, 14/14 packages)
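
For illustration, an assertion over the new international specs could look like the sketch below; the import path, export name, and model key are placeholders, not the identifiers actually used in zai.spec.ts:

    // Illustrative vitest-style check of the new GLM-5 specs (names are hypothetical).
    import { describe, expect, it } from "vitest"
    import { internationalZAiModels } from "../models" // hypothetical export and path

    describe("GLM-5 international specs", () => {
        it("uses the documented token limits and pricing", () => {
            const glm5 = internationalZAiModels["glm-5"] // model key assumed
            expect(glm5.maxTokens).toBe(128_000)
            expect(glm5.contextWindow).toBe(200_000)
            expect(glm5.inputPrice).toBe(1.0)
            expect(glm5.outputPrice).toBe(3.2)
            expect(glm5.cacheReadsPrice).toBe(0.2)
        })
    })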

Pre-Submission Checklist

  • Issue Linked: This PR is linked to an approved GitHub Issue (see "Related GitHub Issue" above).
  • Scope: My changes are focused on the linked issue (one major feature/fix per PR).
  • Self-Review: I have performed a thorough self-review of my code.
  • Testing: New and/or updated tests have been added to cover my changes (if applicable).
  • Documentation Impact: I have considered if my changes require documentation updates (see "Documentation Updates" section below).
  • Contribution Guidelines: I have read and agree to the Contributor Guidelines.

Documentation Updates

  • No documentation updates are required.

Additional Notes

The GLM-5 model entry already existed in the codebase but with placeholder values copied from GLM-4.7. This PR corrects those values using the specs provided by @damianar1984 in the issue comments, sourced from the official z.ai documentation. This replaces the conflicting PR #11443 with a clean implementation on current main.

roomote bot commented Feb 15, 2026

Review complete -- no issues found. International GLM-5 pricing and token limits match the official docs. Mainland China pricing is proportionally scaled from GLM-4.7 ratios, which is consistent with the existing pattern. All 33 tests pass.

Mention @roomote in a comment to request specific changes to this pull request or fix all unresolved issues.

damianar1984 commented Feb 15, 2026

Something is wrong with this roomote bot. Two days for such an easy adjustment and it still hasn't been promoted.

Development

Successfully merging this pull request may close these issues.

[ENHANCEMENT] please add glm-5 on z.ai
