
feat: add Avian as a native LLM provider #4631

Open

avianion wants to merge 2 commits into crewAIInc:main from avianion:feat/add-avian-provider

Conversation


@avianion avianion commented Feb 27, 2026

Summary

Adds Avian as a native LLM provider in CrewAI. Avian provides an OpenAI-compatible API for accessing high-performance language models at competitive prices, with no additional dependencies required.

What is Avian?

Avian is an LLM API provider offering access to a curated set of high-performance models:

| Model | Context Window | Max Output | Input / Output (per 1M tokens) |
|---|---|---|---|
| deepseek/deepseek-v3.2 | 164K tokens | 65K tokens | $0.26 / $0.38 |
| moonshotai/kimi-k2.5 | 131K tokens | 8K tokens | $0.45 / $2.20 |
| z-ai/glm-5 | 131K tokens | 16K tokens | $0.30 / $2.55 |
| minimax/minimax-m2.5 | 1M tokens | 1M tokens | $0.30 / $1.10 |
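For a rough sense of the per-1M-token pricing above, a small worked example (the helper function is illustrative only, not part of this PR):

```python
# Illustrative helper: estimate a request's USD cost from per-1M-token rates.
def estimate_cost(input_tokens: int, output_tokens: int,
                  input_rate: float, output_rate: float) -> float:
    """Return the USD cost for a single request given per-1M-token rates."""
    return (input_tokens / 1_000_000) * input_rate \
         + (output_tokens / 1_000_000) * output_rate

# 100K input + 10K output tokens on deepseek/deepseek-v3.2 ($0.26 / $0.38):
# 0.1 * 0.26 + 0.01 * 0.38 = 0.026 + 0.0038 = $0.0298
cost = estimate_cost(100_000, 10_000, 0.26, 0.38)
```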

Changes

  • Provider class: AvianCompletion in lib/crewai/src/crewai/llms/providers/avian/completion.py — thin subclass of OpenAICompletion that defaults to AVIAN_API_KEY and https://api.avian.io/v1
  • LLM routing: Added avian to SUPPORTED_NATIVE_PROVIDERS, provider_mapping, _get_native_provider(), and _validate_model_in_constants() in llm.py
  • Constants: Added AVIAN_MODELS to llms/constants.py and Avian entries to CLI constants.py (ENV_VARS, PROVIDERS, MODELS)
  • Tests: 13 unit tests covering provider routing, authentication, configuration, model validation, and parameter pass-through
  • Documentation: Added Avian section to docs/en/concepts/llms.mdx and docs/en/learn/llm-connections.mdx
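The routing change listed above boils down to a prefix-based provider lookup. A minimal self-contained sketch of that pattern follows; the structure names track the PR description, and the actual internals of CrewAI's llm.py may differ:

```python
# Sketch of the registration/routing pattern described above. Names follow the
# PR description; the real llm.py structures and fallback behavior may differ.
SUPPORTED_NATIVE_PROVIDERS = [
    "openai", "anthropic", "gemini", "azure", "bedrock",
    "avian",  # added by this PR
]

# Models added to llms/constants.py per this PR.
AVIAN_MODELS = [
    "deepseek/deepseek-v3.2",
    "moonshotai/kimi-k2.5",
    "z-ai/glm-5",
    "minimax/minimax-m2.5",
]

def resolve_provider(model: str) -> tuple[str, str]:
    """Split 'provider/model' into (provider, model) when the prefix is native."""
    prefix, _, rest = model.partition("/")
    if prefix in SUPPORTED_NATIVE_PROVIDERS and rest:
        return prefix, rest
    # Fallback shown for illustration; the real routing logic is more involved.
    return "openai", model

provider, model_id = resolve_provider("avian/deepseek/deepseek-v3.2")
```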

Usage

from crewai import LLM

# Set AVIAN_API_KEY environment variable, then:
llm = LLM(model="avian/deepseek/deepseek-v3.2", temperature=0.7)

Design decisions

  • Subclasses OpenAICompletion: Since Avian's API is fully OpenAI-compatible (chat completions, streaming, function calling), the implementation reuses the existing OpenAI provider with minimal overrides — just the default API key env var and base URL.
  • No new dependencies: Works with the existing openai SDK that CrewAI already depends on.
  • Follows existing patterns: Registration in all the same places as other native providers (OpenAI, Anthropic, Gemini, Azure, Bedrock).

Test plan

  • All 13 Avian unit tests pass (pytest tests/llms/avian/test_avian.py)
  • Existing OpenAI provider tests unaffected
  • CI pipeline passes

Note

Medium Risk
Moderate risk because it changes core LLM provider routing/validation logic and adds a new native provider path, which could affect model/provider selection. Changes are largely additive and covered by new unit tests.

Overview
Adds native Avian provider support by introducing AvianCompletion (an OpenAICompletion subclass) that targets Avian’s default base URL, requires AVIAN_API_KEY, and reports correct large context-window sizes.

Updates LLM routing/validation to recognize the avian/ prefix and provider="avian", adds Avian models to constants and CLI provider/model selections, and expands docs to include Avian setup/examples.

Includes a dedicated Avian test suite covering routing, auth/env/base URL overrides, model validation, and context window sizing.

Written by Cursor Bugbot for commit 3aa3f45.

@avianion (Author)

cc @joaomdmoura @lorenzejay @greysonlalonde for review

@cursor cursor bot left a comment
Cursor Bugbot has reviewed your changes and found 1 potential issue.


@avianion (Author)

Good catch — fixed in the latest push. Added get_context_window_size() override with correct context windows for all Avian models.

@avianion (Author)

Fixed in the latest commit — added AVIAN_CONTEXT_WINDOWS dict and overrode get_context_window_size() with correct values (164K-1M) for all Avian models. Also added 6 tests covering the context window logic.

Add Avian (https://avian.io) as a native LLM provider in CrewAI.
Avian provides an OpenAI-compatible API for accessing high-performance
language models at competitive prices.

Changes:
- Add AvianCompletion provider class (subclasses OpenAICompletion)
- Register Avian in SUPPORTED_NATIVE_PROVIDERS and provider routing
- Add Avian models to constants (deepseek-v3.2, kimi-k2.5, glm-5, minimax-m2.5)
- Add Avian to CLI provider setup (ENV_VARS, PROVIDERS, MODELS)
- Add 13 unit tests covering routing, auth, config, and model validation
- Add documentation in LLM concepts and connections guides

Usage:
  export AVIAN_API_KEY=your-key
  llm = LLM(model="avian/deepseek/deepseek-v3.2")

The inherited OpenAICompletion.get_context_window_size() only recognizes
GPT-prefixed models and returns ~6,963 tokens for Avian models instead
of their actual context windows. Add provider-specific override with
correct sizes: deepseek-v3.2 (164K), kimi-k2.5 (131K), glm-5 (131K),
minimax-m2.5 (1M).
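The fix this commit message describes can be sketched as follows. The AVIAN_CONTEXT_WINDOWS values follow the PR's model table (K/M values rounded as listed there), and the class below is a reduced stand-in for the real AvianCompletion:

```python
# Sketch of the context-window override described in this commit message.
# Keys/values follow the model table in the PR description.
AVIAN_CONTEXT_WINDOWS = {
    "deepseek/deepseek-v3.2": 164_000,
    "moonshotai/kimi-k2.5": 131_000,
    "z-ai/glm-5": 131_000,
    "minimax/minimax-m2.5": 1_000_000,
}

# What the inherited method falls back to for unrecognized (non-GPT) models.
DEFAULT_CONTEXT_WINDOW = 6_963

class AvianCompletion:
    """Reduced stand-in for the real provider class; only the override is shown."""
    def __init__(self, model: str):
        self.model = model

    def get_context_window_size(self) -> int:
        # Strip an optional "avian/" routing prefix before the lookup.
        model = self.model.removeprefix("avian/")
        return AVIAN_CONTEXT_WINDOWS.get(model, DEFAULT_CONTEXT_WINDOW)

window = AvianCompletion("avian/z-ai/glm-5").get_context_window_size()
```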
@avianion avianion force-pushed the feat/add-avian-provider branch from eb12a2b to 3aa3f45 on February 27, 2026 at 21:08

avianion commented Mar 5, 2026

Friendly follow-up — this PR is still active and ready for review. Would appreciate a look when you get a chance! cc @joaomdmoura @greysonlalonde


avianion commented Mar 5, 2026

Friendly follow-up — this PR is still active and ready for review. All feedback has been addressed. Would appreciate a look when you get a chance! cc @lorenzejay @greysonlalonde
