Labels
area:integration (Integrations: context providers, model providers, etc.), ide:vscode (Relates specifically to VS Code extension), kind:bug (Indicates an unexpected problem or unintended behavior), os:linux (Happening specifically on Linux)
Description
Before submitting your bug report
- I've tried using the "Ask AI" feature on the Continue docs site to see if the docs have an answer
- I'm not able to find a related conversation on GitHub discussions that reports the same bug
- I'm not able to find an open issue that reports the same bug
- I've seen the troubleshooting guide on the Continue Docs
Relevant environment info
- OS: Linux
- Continue version: v1.3.30
- IDE version: VSCode 1.109.0
- Model: Qwen/Qwen3-Coder-Next:novita
- config:
name: Local Config
version: 1.0.0
schema: v1
models:
  - name: huggingfacembe
    provider: huggingface-inference-providers
    model: Qwen/Qwen3-VL-Embedding-2B
    apiKey: hf_XXXXXX
    apiBase: https://router.huggingface.co/v1
    roles:
      - embed
  - name: huggingfaceAutocomplete
    provider: huggingface-inference-providers
    model: Qwen/Qwen2.5-Coder-7B:fastest
    apiKey: hf_XXXXXX
    apiBase: https://router.huggingface.co/v1
    roles:
      - autocomplete
  - name: huggingface
    provider: huggingface-inference-providers
    model: Qwen/Qwen3-Coder-Next:cheapest
    apiKey: hf_XXXXXXX
    apiBase: https://router.huggingface.co/v1
    roles:
      - chat
      - edit
      - apply
Description
I’m trying to use Hugging Face models in Continue (VS Code extension) by following the documented configuration format. However, the configured model never shows up in the model picker UI.
If I modify the provider to:
provider: huggingface-inference-api
then the model does show up in the picker, but using that provider results in 404 errors from Hugging Face.
This makes it impossible to use Hugging Face inference providers the way the docs describe: Continue doesn't show models for huggingface-inference-providers, and the only provider it does show, huggingface-inference-api, fails in practice.
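For context, the apiBase in the config points at Hugging Face's OpenAI-compatible router, which can be exercised directly to confirm the endpoint and credentials work outside of Continue. A minimal sketch (HF_TOKEN stands in for the hf_... API key from the config; the model name is the one from the config above):

```shell
# Hedged sketch: call the Hugging Face router's OpenAI-compatible
# chat completions endpoint directly, bypassing Continue, to check
# whether the 404 comes from the endpoint itself or from the extension.
curl -s https://router.huggingface.co/v1/chat/completions \
  -H "Authorization: Bearer $HF_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "Qwen/Qwen3-Coder-Next:cheapest",
    "messages": [{"role": "user", "content": "hello"}]
  }'
```

If this request succeeds while the huggingface-inference-api provider returns 404, that would suggest Continue is building a different (non-router) URL for that provider.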
To reproduce
Add a Hugging Face inference model definition in config.json or VS Code config, e.g.:
- name: huggingface
  provider: huggingface-inference-providers
  model: Qwen/Qwen3-Coder-Next:cheapest
  apiKey: hf_XXXXX
  apiBase: https://router.huggingface.co/v1
  roles:
    - chat
Log output