feat: add Avian as a native LLM provider #4631
avianion wants to merge 2 commits into crewAIInc:main
Conversation
cc @joaomdmoura @lorenzejay @greysonlalonde for review
Cursor Bugbot has reviewed your changes and found 1 potential issue.
Good catch — fixed in the latest push. Added

Fixed in the latest commit — added
Add Avian (https://avian.io) as a native LLM provider in CrewAI. Avian provides an OpenAI-compatible API for accessing high-performance language models at competitive prices.

Changes:
- Add `AvianCompletion` provider class (subclasses `OpenAICompletion`)
- Register Avian in `SUPPORTED_NATIVE_PROVIDERS` and provider routing
- Add Avian models to constants (deepseek-v3.2, kimi-k2.5, glm-5, minimax-m2.5)
- Add Avian to CLI provider setup (`ENV_VARS`, `PROVIDERS`, `MODELS`)
- Add 13 unit tests covering routing, auth, config, and model validation
- Add documentation in LLM concepts and connections guides

Usage:

```
export AVIAN_API_KEY=your-key
llm = LLM(model="avian/deepseek/deepseek-v3.2")
```
The inherited OpenAICompletion.get_context_window_size() only recognizes GPT-prefixed models and returns ~6,963 tokens for Avian models instead of their actual context windows. Add provider-specific override with correct sizes: deepseek-v3.2 (164K), kimi-k2.5 (131K), glm-5 (131K), minimax-m2.5 (1M).
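The override the review suggests can be sketched as a small lookup with a fallback. The model names, context sizes, and the ~6,963-token default come from the comment above; the substring-lookup structure and function shape are assumptions about how the fix might be written, not the actual CrewAI code:

```python
# Sketch of a provider-specific context-window override, per the review
# comment above. The lookup-by-substring approach is an assumption.
AVIAN_CONTEXT_WINDOWS = {
    "deepseek-v3.2": 164_000,
    "kimi-k2.5": 131_000,
    "glm-5": 131_000,
    "minimax-m2.5": 1_000_000,
}

DEFAULT_CONTEXT_WINDOW = 6_963  # what the inherited method falls back to


def get_context_window_size(model: str) -> int:
    """Return the context window for a model string like
    'avian/deepseek/deepseek-v3.2', falling back to the inherited default."""
    for name, size in AVIAN_CONTEXT_WINDOWS.items():
        if name in model:
            return size
    return DEFAULT_CONTEXT_WINDOW
```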
Force-pushed from eb12a2b to 3aa3f45
Friendly follow-up — this PR is still active and ready for review. Would appreciate a look when you get a chance! cc @joaomdmoura @greysonlalonde

Friendly follow-up — this PR is still active and ready for review. All feedback has been addressed. Would appreciate a look when you get a chance! cc @lorenzejay @greysonlalonde
Summary
Adds Avian as a native LLM provider in CrewAI. Avian provides an OpenAI-compatible API for accessing high-performance language models at competitive prices, with no additional dependencies required.
What is Avian?
Avian is an LLM API provider offering access to a curated set of high-performance models: deepseek-v3.2, kimi-k2.5, glm-5, and minimax-m2.5.
Changes
- `AvianCompletion` in `lib/crewai/src/crewai/llms/providers/avian/completion.py` — thin subclass of `OpenAICompletion` that defaults to `AVIAN_API_KEY` and `https://api.avian.io/v1`
- `avian` added to `SUPPORTED_NATIVE_PROVIDERS`, `provider_mapping`, `_get_native_provider()`, and `_validate_model_in_constants()` in `llm.py`
- `AVIAN_MODELS` added to `llms/constants.py`, and Avian entries added to CLI `constants.py` (`ENV_VARS`, `PROVIDERS`, `MODELS`)
- Documentation added in `docs/en/concepts/llms.mdx` and `docs/en/learn/llm-connections.mdx`

Usage
Design decisions
- Subclass `OpenAICompletion`: since Avian's API is fully OpenAI-compatible (chat completions, streaming, function calling), the implementation reuses the existing OpenAI provider with minimal overrides — just the default API key env var and base URL.
- No new dependencies: uses the `openai` SDK that CrewAI already depends on.

Test plan
- 13 unit tests (`pytest tests/llms/avian/test_avian.py`)

Note
Medium Risk
Moderate risk because it changes core `LLM` provider routing/validation logic and adds a new native provider path, which could affect model/provider selection. Changes are largely additive and covered by new unit tests.

Overview
Adds native Avian provider support by introducing `AvianCompletion` (an `OpenAICompletion` subclass) that targets Avian's default base URL, requires `AVIAN_API_KEY`, and reports correct large context-window sizes.

Updates `LLM` routing/validation to recognize the `avian/` prefix and `provider="avian"`, adds Avian models to constants and CLI provider/model selections, and expands docs to include Avian setup/examples.

Includes a dedicated Avian test suite covering routing, auth/env/base URL overrides, model validation, and context window sizing.
Written by Cursor Bugbot for commit 3aa3f45.