feat: add OAuth support for OpenAI and Anthropic subscriptions#72

Open
Shriiii01 wants to merge 106 commits into ekailabs:main from Shriiii01:feature/oauth-support

Conversation

@Shriiii01
Contributor

@Shriiii01 Shriiii01 commented Feb 10, 2026

Adds OAuth support so users can use their existing OpenAI and Anthropic (Claude) subscriptions instead of API keys.

  • New routes: GET /oauth/:provider/authorize, GET /oauth/:provider/callback, GET /oauth/status, DELETE /oauth/:provider
  • PKCE-based flow; tokens stored under .ekai/ and refreshed when expired
  • Chat completions, messages, and responses passthroughs use OAuth tokens when available (fallback to API keys)
  • Config: OPENAI_OAUTH_CLIENT_ID for OpenAI; Anthropic currently uses placeholder config
  • .env.example and .gitignore updated

Addresses the OAuth/subscription support you mentioned in the Ollama PR.
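The "refreshed when expired" behavior in the bullets above can be sketched as a small expiry check. This is a hedged sketch, not the actual gateway code; the token shape, field names, and storage layout under .ekai/ are assumptions for illustration.

```typescript
// Hypothetical token shape; the real store under .ekai/ may differ.
interface StoredToken {
  accessToken: string;
  refreshToken: string;
  expiresAt: number; // unix epoch millis
}

// Treat a token as expired slightly early (clock skew margin) so an
// in-flight request never races the real expiry.
function isExpired(token: StoredToken, skewMs = 30_000, now = Date.now()): boolean {
  return now >= token.expiresAt - skewMs;
}
```

A passthrough would call something like `isExpired(token)` before each upstream request and run the refresh-token exchange only when it returns true.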

Shriiii01 and others added 5 commits February 7, 2026 16:54
- Add OllamaProvider class with OpenAI-compatible API support
- Register Ollama in ProviderRegistry with model selection rules
- Add Ollama configuration to AppConfig (baseUrl, apiKey, enabled)
- Add Ollama to chat_completions_providers_v1.json catalog with 16 popular models
- Add ollama.yaml pricing file (free/local models)
- Update ProviderName type to include 'ollama'
- Add OLLAMA_BASE_URL and OLLAMA_API_KEY to .env.example

Ollama runs models locally and exposes an OpenAI-compatible API at
http://localhost:11434/v1 by default. Users can configure a custom
base URL via OLLAMA_BASE_URL environment variable.
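The base-URL resolution described above reduces to an env-with-default lookup; a minimal sketch (function name hypothetical, not the actual AppConfig code):

```typescript
// Default documented by Ollama's OpenAI-compatible endpoint.
const DEFAULT_OLLAMA_BASE_URL = "http://localhost:11434/v1";

// Prefer the OLLAMA_BASE_URL override, falling back to the default.
function resolveOllamaBaseUrl(env: Record<string, string | undefined>): string {
  return env.OLLAMA_BASE_URL?.trim() || DEFAULT_OLLAMA_BASE_URL;
}
```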
- Added Ollama to responses_providers_v1.json catalog
- Created OllamaResponsesPassthrough class implementing Responses API
- Registered Ollama in responses-passthrough-registry.ts

Ollama supports the OpenResponses API specification at the /v1/responses endpoint, providing future-proof support in case /chat/completions is eventually deprecated.
@Shriiii01
Contributor Author

1) OpenAI OAuth Support:
This is fully implemented according to the public documentation (https://platform.openai.com/docs/guides/oauth). The endpoints (auth.openai.com), scopes (openai.public), and PKCE flow are standard. It is ready to use immediately by setting the OPENAI_OAUTH_CLIENT_ID environment variable.

2) Anthropic OAuth Support:
The underlying infrastructure (token exchange logic, PKCE security, refresh token handling) is fully implemented and shared with the OpenAI flow, as both follow the OAuth 2.0 standard. However, since public documentation for Anthropic's OAuth endpoints is limited/beta, the specific URLs (e.g., https://claude.ai/oauth/authorize) and the client_id used in gateway/src/infrastructure/auth/oauth-service.ts are placeholders based on common conventions.

Action Required: You will likely need to update the ANTHROPIC_AUTH_URL, ANTHROPIC_TOKEN_URL, and ANTHROPIC_CLIENT_ID constants with the correct values from your specific Anthropic integration.

Summary: The feature is complete and working for OpenAI. For Anthropic, the heavy lifting (logic/security) is done, but the specific configuration details may need adjustment based on your credentials.
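The shared PKCE machinery both flows rely on reduces to generating a random code_verifier and deriving its S256 code_challenge per RFC 7636. A stdlib-only sketch (not the actual oauth-service.ts code):

```typescript
import { randomBytes, createHash } from "node:crypto";

// base64url without padding, as RFC 7636 requires for PKCE values.
function base64url(buf: Buffer): string {
  return buf.toString("base64").replace(/\+/g, "-").replace(/\//g, "_").replace(/=+$/, "");
}

// Generate a PKCE pair: random code_verifier plus its S256 code_challenge.
// 32 random bytes encode to 43 chars, within RFC 7636's 43-128 range.
function makePkcePair(): { verifier: string; challenge: string } {
  const verifier = base64url(randomBytes(32));
  const challenge = base64url(createHash("sha256").update(verifier).digest());
  return { verifier, challenge };
}
```

The authorize redirect sends the challenge (with code_challenge_method=S256); the callback's token exchange sends the stored verifier, which the provider re-hashes to confirm both requests came from the same client.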

sm86 and others added 21 commits February 12, 2026 18:47
# Conflicts:
#	gateway/src/infrastructure/config/app-config.ts
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…ntegration

Add OpenRouter integration with unified launcher and Docker fixes
Changed 'wait -n' to 'wait' to keep the container running indefinitely
instead of exiting when the first service exits. This allows all
services (gateway, dashboard, memory, openrouter) to continue running.

Fixes issue where container would restart repeatedly with exit code 0.
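The difference is easy to demonstrate: 'wait' with no arguments blocks until every background job has exited, while 'wait -n' returns as soon as the first one does. A minimal sketch (stand-in sleeps, not the actual entrypoint script):

```shell
#!/bin/sh
# Two stand-ins for the container's long-running services.
start=$(date +%s)
sleep 1 &
sleep 2 &
# 'wait' (no args) blocks until ALL jobs exit; 'wait -n' would have
# returned after the first 'sleep', taking the container down with it.
wait
elapsed=$(( $(date +%s) - start ))
echo "elapsed=${elapsed}s"
```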
- Document all 4 services and their ports (gateway, dashboard, memory, openrouter)
- Add Docker service control via ENABLE_* environment variables
- Clarify OpenRouter integration service runs on port 4010 (not 4006)
- Add Docker Compose section with service management instructions
- Document Docker restart behavior and service lifecycle
sm86 and others added 26 commits February 25, 2026 15:48
Pass userId to getRecent() in graph visualization overview branch so
the knowledge graph respects user selection. Tighten user_scope filters
to match exactly (no null fallthrough). Remove unnecessary back arrow
from memory page header since memory is the main page.
Add user scope filtering for semantic memories on dashboard
No caller ever passed deduplicate=true, making the opt-in flag dead code.
Ingestion was disabled in the OpenRouter proxy due to runaway memory growth
from lack of dedup. Now dedup runs unconditionally for all sectors.
With dedup always-on, strengthenFact() is never called. strength is
always 1.0, so strengthScore = log(1.0) = 0 — contributing nothing
to scoring. Remove from types, scoring, store, and DDL.
Agents can now set a relevancePrompt that gates ingest — an LLM checks
if incoming content matches the agent's scope before extraction/embedding.
Irrelevant content is rejected early with a reason. Adds GET/PUT single
agent endpoints and updates README with new API docs and flow diagram.
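The shape of that gate can be sketched as follows. The interfaces and function names here are hypothetical, and a stub stands in for the LLM relevance check; the real pipeline calls a model before extraction/embedding.

```typescript
// Hypothetical shapes; the actual ingest pipeline differs.
interface Agent { id: string; relevancePrompt?: string }
type RelevanceChecker = (prompt: string, content: string) => Promise<{ relevant: boolean; reason: string }>;

// Gate ingest before extraction/embedding: agents without a
// relevancePrompt accept everything; otherwise ask the checker and
// reject early with a reason.
async function gateIngest(
  agent: Agent,
  content: string,
  check: RelevanceChecker,
): Promise<{ accepted: boolean; reason?: string }> {
  if (!agent.relevancePrompt) return { accepted: true };
  const verdict = await check(agent.relevancePrompt, content);
  return verdict.relevant ? { accepted: true } : { accepted: false, reason: verdict.reason };
}
```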
Replace silent INSERT OR IGNORE with a strict existence check that throws
agent_not_found. Only the default agent is auto-created at init via a
private upsertDefaultAgent(). Add routeError helper to router so all
catch blocks return 404 on agent_not_found instead of 500.
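The error mapping described above can be sketched like this (names taken from the message, bodies hypothetical; the real helper presumably also shapes the JSON response):

```typescript
// Domain error for the strict existence check described above.
class AgentNotFoundError extends Error {
  constructor(agentId: string) {
    super(`agent_not_found: ${agentId}`);
    this.name = "agent_not_found";
  }
}

// Map domain errors to HTTP status codes so router catch blocks stay
// uniform: unknown agents become 404 instead of a generic 500.
function routeError(err: unknown): { status: number; message: string } {
  if (err instanceof AgentNotFoundError) {
    return { status: 404, message: err.message };
  }
  return { status: 500, message: "internal_error" };
}
```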
- New /agents page: agent cards with soul/relevance prompts, per-agent
  stats (users, episodic, semantic, procedural), edit/create/delete modals
- api.ts: add soulMd + relevancePrompt to getAgents() type; add
  createAgent() and updateAgent() calls
- layout.tsx: add global top nav with Memory Vault and Agents links
- memory/page.tsx: offset sticky header to top-11 to clear global nav
Dedup, relevance gate, and agents dashboard
Generate standalone package-lock.json files for memory and
integrations/openrouter, and switch all runtime stages from
npm install --omit=dev to npm ci --omit=dev. This eliminates
npm registry calls during Docker builds, fixing intermittent
403 rate-limit failures in CI.
Add sqlite-vec extension to @ekai/memory, replacing the 200-row
linear scan + JS cosine similarity with proper ANN indexing via
vec0 virtual tables with cosine distance metric.

- Add sqlite-vec dependency, load extension on construction
- Create vec0 virtual tables (memory_vec, procedural_vec,
  semantic_vec, reflective_vec) lazily on first embedding
- Insert into vec tables alongside main tables on write path
- Replace getCandidatesForSector 200-row scan with two-step
  KNN: ANN query on vec table, then filter via main table
- Replace findDuplicate linear scans with vec KNN queries
- Update scoreRowPBWM to accept precomputed similarity
- Make embedding optional on record types (query results
  no longer carry full embedding arrays)
- Stop selecting embedding in semantic graph traversal queries
- Clean up vec tables on delete operations
Embedding is always present on write and never read on query path
(similarity is precomputed by sqlite-vec). Graph traversal methods
now return Omit<SemanticMemoryRecord, 'embedding'> since they are
structural queries that don't select the embedding column.
Add sqlite-vec ANN vector search to @ekai/memory
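The two-step retrieval in the bullets above (ANN query on the vec table, then filter via the main table) can be illustrated with an in-memory stand-in. This is NOT sqlite-vec: the real path issues a KNN MATCH query against a vec0 virtual table and joins back to the main table, while here plain arrays and a predicate play those roles.

```typescript
// In-memory illustration of the two-step KNN shape only.
interface VecRow { id: number; embedding: number[] }

function cosineDistance(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return 1 - dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Step 1: k-nearest neighbors over the "vector index".
// Step 2: filter hits against "main table" metadata (here a predicate).
function knnThenFilter(
  rows: VecRow[],
  query: number[],
  k: number,
  keep: (id: number) => boolean,
): { id: number; distance: number }[] {
  return rows
    .map((r) => ({ id: r.id, distance: cosineDistance(r.embedding, query) }))
    .sort((x, y) => x.distance - y.distance)
    .slice(0, k)
    .filter((hit) => keep(hit.id));
}
```

The payoff of the real change is that step 1 runs inside SQLite's ANN index instead of this O(n) scan, so the 200-row candidate cap disappears.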
The standalone memory/package-lock.json was stale after sqlite-vec was
added to package.json, causing npm ci to fail in the Docker build.
Regenerate memory lockfile to include sqlite-vec dependency
Lifecycle event logger that registers all 13 OpenClaw hooks via
api.registerHook() and appends JSONL entries with safe serialization.
Published to npm as @ekai/contexto.
Extract event storage from openclaw plugin into @ekai/store workspace
with EventWriter (normalization, safe serialization, per-session JSONL
files) and EventReader (session listing, reconstruction with tool call
pairing and userId attribution). Includes path-traversal protection,
chronological event ordering, and 48 tests.
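"Safe serialization" for JSONL event lines typically means circular references and odd values must not crash the writer. A hedged sketch (the actual EventWriter implementation may differ):

```typescript
// Serialize one event to a single JSONL-safe line: break cycles,
// stringify bigints, and fall back to an _error record on failure
// (mirroring the _error field mentioned in the commit notes).
function safeSerialize(value: unknown): string {
  const seen = new WeakSet<object>();
  try {
    return JSON.stringify(value, (_key, v) => {
      if (typeof v === "object" && v !== null) {
        if (seen.has(v)) return "[Circular]";
        seen.add(v);
      }
      return typeof v === "bigint" ? v.toString() : v;
    });
  } catch {
    return JSON.stringify({ _error: "serialization_failed" });
  }
}
```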
- Fix durationMs: 0 being overwritten by computed value (falsy check → else-if)
- Sync runtime configSchema with manifest (declare dataDir property)
- Reorder root build: install first for clean-env safety, store before dependents
- Remove redundant double-resolve in reconstructSession
- Fix misleading test name for raw ID storage behavior
- Simplify rawAgentId/rawSessionId storage: remove dead !== check since
  sanitizeId always appends a hash suffix, just check if input is present
- Add _error optional field to StoreEvent and AppendInput interfaces to
  reflect the serialization-failure fallback that appears in JSONL output
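The durationMs bug in the first bullet above is the classic falsy-zero trap; a before/after sketch (field names from the bullet, surrounding shapes hypothetical):

```typescript
// Hypothetical event shape for illustration.
interface TimedEvent { durationMs?: number; startedAt: number; endedAt: number }

// Buggy: a legitimate durationMs of 0 is falsy, so `||` silently
// overwrites it with the computed value.
function durationBuggy(e: TimedEvent): number {
  return e.durationMs || e.endedAt - e.startedAt;
}

// Fixed: only compute when the field is actually absent.
function durationFixed(e: TimedEvent): number {
  if (e.durationMs !== undefined) return e.durationMs;
  return e.endedAt - e.startedAt;
}
```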
Add OpenClaw plugin and JSONL event store
- Resolve .env.example: keep OAuth vars and add upstream service toggles
- Keep gateway/ and OAuth implementation (upstream removed gateway)
- Resolve README.md: keep archived notice and project description
- Resolve README.md: archived notice and project description
- Resolve .env.example: keep ports and OAuth vars, add PORT
- Resolve app-config: keep OAuth validation
- Accept upstream model_catalog and shared/types
@Shriiii01 Shriiii01 force-pushed the feature/oauth-support branch 2 times, most recently from 634c425 to cb7ed17 Compare March 7, 2026 20:46
