
feat: detect credentials in interactive model setup menu #273

Closed
bussyjd wants to merge 8 commits into main from feat/model-setup-credential-detection

Conversation

bussyjd (Collaborator) commented Mar 17, 2026

Summary

  • When obol model setup runs without --provider, the interactive menu now detects existing API keys and Ollama availability, showing badges next to each provider (e.g., "Anthropic (anthropic) — detected: ANTHROPIC_API_KEY")
  • If the user selects a provider with a detected credential, they are offered the choice to reuse it instead of being prompted for a new key
  • Detected sources: ANTHROPIC_API_KEY, CLAUDE_CODE_OAUTH_TOKEN (Anthropic), OPENAI_API_KEY (OpenAI), Ollama reachability + model count

Test plan

  • `go build ./cmd/obol/` compiles cleanly
  • `go vet ./cmd/obol/` passes
  • `go test ./cmd/obol/ ./internal/model/` passes
  • Manual: run `ANTHROPIC_API_KEY=test-key obol model setup` — verify Anthropic shows the detection badge and prompts to reuse the key
  • Manual: run `obol model setup` without env vars — verify no badges and the normal key prompt
  • Manual: run `obol model setup --provider anthropic --api-key sk-ant-xxx` — verify the flag-based path is unchanged

Closes #272 (Unit 2)

OisinKyne and others added 8 commits March 17, 2026 14:43

The frontend catch-all `/` HTTPRoute had no hostname restriction,
meaning the entire UI (dashboard, sell modal, settings) was publicly
accessible through the Cloudflare tunnel. Add `hostnames: ["obol.stack"]`
to match the eRPC route pattern already in this branch.

Also add CLAUDE.md guardrails documenting the local-only vs public route
split and explicit NEVER rules to prevent future regressions.

Four fixes for the sell-inference cluster routing introduced in #267:

1. Security: bind gateway to 127.0.0.1 when NoPaymentGate=true so
   only cluster traffic (via K8s Service+Endpoints bridge) can reach
   the unpaid listener — no host/LAN exposure.

2. Critical: use parsed --listen port in Service, Endpoints, and
   ServiceOffer spec instead of hardcoded 8402. Non-default ports
   now work correctly.

3. k3s support: resolveHostIP() now checks DetectExistingBackend()
   for k3s and returns 127.0.0.1, matching the existing
   ollamaHostIPForBackend() strategy in internal/stack.

4. Migration: keep "obol-agent" as default instance ID to preserve
   existing openclaw-obol-agent namespaces on upgrade. Avoids
   orphaned deployments when upgrading from pre-#267 installs.

Also bumps frontend to v0.1.13-rc.1.
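
Fixes 1 and 2 above can be sketched together: the bind host depends on the payment-gate setting, and the parsed `--listen` port flows into the address instead of a hardcoded 8402. A minimal illustration; `listenAddr` is a hypothetical helper, not the actual gateway code:

```go
package main

import (
	"net"
	"strconv"
)

// listenAddr picks the gateway bind address. When the payment gate is
// disabled, bind to loopback so only in-cluster traffic (via the K8s
// Service+Endpoints bridge) can reach the unpaid listener; otherwise
// listen on all interfaces. The port comes from the parsed --listen
// flag rather than being hardcoded.
func listenAddr(port int, noPaymentGate bool) string {
	host := "0.0.0.0"
	if noPaymentGate {
		host = "127.0.0.1"
	}
	return net.JoinHostPort(host, strconv.Itoa(port))
}
```
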

When ~/.openclaw/openclaw.json specifies a cloud model as the agent's
primary (e.g. anthropic/claude-sonnet-4-6), autoConfigureLLM() now
detects the provider and API key from the environment (or .env in dev
mode) and configures LiteLLM before OpenClaw setup runs. This makes
agent chat work out of the box without a separate `obol model setup`.

Changes:
- internal/stack: add autoConfigureCloudProviders() with env + .env
  key resolution (dev-mode only for .env)
- internal/model: export ProviderFromModelName(), ProviderEnvVar();
  add HasProviderConfigured(), LoadDotEnv()
- cmd/obol/model: update defaults — claude-sonnet-4-6, gpt-4.1
- internal/model: update WellKnownModels with current flagship models
  (claude-opus-4-6, gpt-5.4, gpt-4.1, o4-mini)
- obolup.sh: add check_agent_model_api_key() to warn users before
  cluster start if a required API key is missing

Split ConfigureLiteLLM into PatchLiteLLMProvider (config-only) and
RestartLiteLLM (restart+wait). autoConfigureLLM now patches Ollama
and cloud providers first, then does one restart — halving startup
time when both are configured.
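
The patch-then-restart sequencing can be illustrated abstractly; `configureAll` is hypothetical, with `patch` and `restart` standing in for `PatchLiteLLMProvider` and `RestartLiteLLM`:

```go
package main

// configureAll applies all config patches first, then performs a single
// restart — instead of one restart per provider, which is what halves
// startup time when both Ollama and a cloud provider are configured.
func configureAll(patch func(provider string) error, restart func() error, providers []string) error {
	for _, p := range providers {
		if err := patch(p); err != nil {
			return err
		}
	}
	// One restart+wait after all patches.
	return restart()
}
```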

Instead of printing a warning that users miss, prompt for the API key
during setup when a cloud model is detected in ~/.openclaw config.
The key is exported so the subsequent obol bootstrap → stack up →
autoConfigureLLM picks it up automatically. Falls back to a warning
in non-interactive mode.

Inspired by hermes-agent's interactive setup wizard pattern.

When `obol model setup` runs without --provider, the interactive menu
now checks the environment for existing API keys and Ollama availability,
showing detection badges next to each provider. If the user picks a
provider with a detected credential, they are offered the option to
reuse it instead of being prompted for a new key.

Detected sources:
- Anthropic: ANTHROPIC_API_KEY, CLAUDE_CODE_OAUTH_TOKEN
- OpenAI: OPENAI_API_KEY
- Ollama: reachable with N model(s) available

The flag-based path (--provider, --api-key) is unchanged.
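
The Ollama check presumably probes the local API's `GET /api/tags` endpoint, which lists installed models. A sketch of counting models from that response body; `countModels` and the exact detection flow here are assumptions, not the PR's code:

```go
package main

import "encoding/json"

// tagsResponse mirrors the shape of Ollama's GET /api/tags reply:
// {"models":[{"name":"..."}, ...]}
type tagsResponse struct {
	Models []struct {
		Name string `json:"name"`
	} `json:"models"`
}

// countModels parses an /api/tags response body and returns how many
// models are available, for the "reachable with N model(s)" badge.
func countModels(body []byte) (int, error) {
	var out tagsResponse
	if err := json.Unmarshal(body, &out); err != nil {
		return 0, err
	}
	return len(out.Models), nil
}
```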

Closes #272 (Unit 2)
@bussyjd bussyjd requested review from OisinKyne March 17, 2026 10:47
Base automatically changed from integration/pr267-reviewed to main March 18, 2026 00:25
bussyjd (Collaborator, Author) commented Mar 19, 2026

Superseded by #283 (combined with #274)



Development

Successfully merging this pull request may close these issues.

feat: interactive provider setup with OAuth support in obol model setup
