**Note: Reviews paused.** It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the CodeRabbit settings.
**Walkthrough**

Adds a new QuickSilver database implementation (`core/quick-silver`) with CID storage, index service, cloud WebSocket API/CLI, Cloudflare Durable Object routing, encoding utilities (Ende), tests and packaging, plus a one-line debug log in the IndexedDB gateway `put` method.
**Sequence Diagram**

```mermaid
sequenceDiagram
  actor Client
  participant Fireproof as fireproof(name)
  participant QS as QuickSilver
  participant CID as CIDStorageService
  participant Idx as IdxService
  participant Gateway as IndexedDBGateway
  participant DB as Dexie/OPFS
  Client->>Fireproof: fireproof(name, opts)
  Fireproof->>QS: return/create QuickSilver instance
  Client->>QS: put(doc)
  QS->>CID: store envelope (CBOR)
  CID->>DB: persist bytes
  DB-->>CID: ack
  CID-->>QS: { cid, url, size }
  QS->>Idx: addToIdx(id, cid, url, meta)
  Idx->>Gateway: write index entries
  Gateway->>DB: indexeddb write
  DB-->>Gateway: ack
  Gateway-->>Idx: result
  Idx-->>QS: result
  QS-->>Client: DocResponse (id, clock)
```
**Estimated code review effort:** 🎯 5 (Critical) | ⏱️ ~120 minutes
🚥 Pre-merge checks: 1 passed, 2 failed (1 warning, 1 inconclusive)
Actionable comments posted: 10
🧹 Nitpick comments (2)
core/types/base/types.ts (1)
158-166: `SuperThisOpts` is missing `ende` — injection/override is impossible.

`SuperThis` exposes `ende: Ende`, but `SuperThisOpts` has no corresponding field, so neither the `clone()` method nor any factory accepting `Partial<SuperThisOpts>` can inject a custom `Ende` implementation. This makes unit-testing code that relies on `sthis.ende` harder, as callers cannot substitute a test double.

If this is intentional (e.g., `ende` is always derived from `txt`), consider at minimum adding an optional `readonly ende?: Ende` to `SuperThisOpts` so downstream tests can override encoding behaviour without monkey-patching.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@core/types/base/types.ts` around lines 158 - 166, SuperThisOpts lacks an optional ende field so callers cannot inject/override Ende (used by SuperThis and referenced as sthis.ende), breaking clone() and factories that accept Partial<SuperThisOpts>; add readonly ende?: Ende to SuperThisOpts (keeping it optional) and update any factories/clone logic that construct SuperThis to prefer opts.ende over deriving from txt so tests can pass a test double without monkey-patching.

`core/quick-silver/envelope.ts` (1)
25-35: Prefer idiomatic `!(... instanceof type.errors)` over `=== false`

While `QCDoc(x) instanceof type.errors === false` is logically correct (JS operator precedence makes `instanceof` bind tighter than `===`), the arktype docs consistently use the `!` negation form as the idiom for the "valid" branch. The `=== false` formulation forces readers to mentally verify operator precedence, adding unnecessary cognitive overhead.

♻️ Proposed refactor

```diff
 export function isQCDoc(x: unknown): x is QCDoc {
-  return QCDoc(x) instanceof type.errors === false;
+  return !(QCDoc(x) instanceof type.errors);
 }
 export function isQCFile(x: unknown): x is QCFile {
-  return QCFile(x) instanceof type.errors === false;
+  return !(QCFile(x) instanceof type.errors);
 }
 export function isQCEnvelope(x: unknown): x is QCEnvelope {
-  return QCEnvelope(x) instanceof type.errors === false;
+  return !(QCEnvelope(x) instanceof type.errors);
 }
```

Per the arktype docs, the canonical pattern for checking a valid result is `if (out instanceof type.errors)` for the error branch, which means the valid-branch negation should be `!(out instanceof type.errors)`.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@core/quick-silver/envelope.ts` around lines 25 - 35, The three type-guard functions (isQCDoc, isQCFile, isQCEnvelope) use `... instanceof type.errors === false`; change each to use the idiomatic negation `!(... instanceof type.errors)` so the valid branch reads `return !(QCDoc(x) instanceof type.errors)` (and similarly for QCFile and QCEnvelope) to match arktype docs and improve readability.
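The precedence point can be verified with a self-contained sketch; `Errors` and `check` below are stand-ins for arktype's `type.errors` machinery, not the real library API.

```typescript
// Both forms are equivalent because `x instanceof C === false` parses as
// `(x instanceof C) === false`; the negated form just avoids the precedence puzzle.
class Errors {}
const check = (x: unknown): unknown => (typeof x === "string" ? x : new Errors());

function isValidVerbose(x: unknown): boolean {
  return check(x) instanceof Errors === false;
}

function isValidIdiomatic(x: unknown): boolean {
  return !(check(x) instanceof Errors);
}

console.log(isValidVerbose("hi"), isValidIdiomatic("hi")); // true true
console.log(isValidVerbose(42), isValidIdiomatic(42)); // false false
```

Both guards always agree; the only difference is readability.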
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@core/gateways/indexeddb/gateway-impl.ts`:
- Line 170: Remove the leftover debug statement console.log(">>>>>>",
url.toString()); — locate that exact call in
core/gateways/indexeddb/gateway-impl.ts (near where the structured logger is
used to log the URL) and delete it so only the structured logger call remains;
ensure you do not alter the subsequent logger invocation that records
url.toString().
In `@core/quick-silver/fireproof.test.ts`:
- Around line 6-10: The test creates a singleton via fireproof("db") but never
cleans it up, leaving the "db" entry in databasesByName and breaking isolation;
fix it by declaring db1 (and db2) in the outer scope of the test suite like
other suites, add beforeEach to initialize them (const db1 = fireproof("db"))
and add afterEach that calls the appropriate cleanup—preferably db1.destroy() if
available, otherwise call databasesByName.unget("db")—so the singleton is
removed between tests and the identity assertion in the it("one instance per
dbname") remains isolated.
- Around line 139-149: The test creates a singleton via fireproof("close-test")
but never cleans it up, so on reruns its onClosed listeners accumulate; fix by
adding cleanup that calls destroy() for that singleton after the test(s). For
example, add an afterEach async hook that retrieves the instance via
fireproof("close-test") and awaits its destroy() (or calls destroy() on the
local db used in the test) so db.onClosed(...) listeners are removed before the
next run; ensure you reference the same identifier (fireproof("close-test"),
db.onClosed, db.close, destroy) when adding the cleanup.
In `@core/quick-silver/fireproof.ts`:
- Around line 9-11: The fireproof function currently ignores opts.sthis on cache
hits because databasesByName.get(name).once(...) returns the first-created
instance; update fireproof (or its JSDoc) to surface this behavior: either add a
clear JSDoc on the fireproof function stating that opts (including opts.sthis)
are only applied the first time a given name is created, or implement a guard in
fireproof that checks the existing cached Database instance (from
databasesByName.get(name)) and if opts?.sthis is provided and differs from the
instance’s stored sthis, throw an error or log a clear warning; reference the
fireproof function, databasesByName.get(name).once, and the QuickSilver
constructor/ensureSuperThis to locate the relevant code paths.
- Line 7: The cache stored in databasesByName (KeyedResolvOnce) retains closed
QuickSilver instances so fireproof(name) can return a closed DB; update the
QuickSilver creation/return path to unregister the entry when the DB is closed
by wiring a callback from the QuickSilver instance (e.g., add an onClosed or
close hook on the returned Database/QuickSilver) that calls
databasesByName.remove(name) or the appropriate KeyedResolvOnce unregister
method; ensure this hook is installed where the Database/QuickSilver is created
and returned so any db.close() triggers removal from databasesByName allowing
subsequent fireproof(name) calls to create a fresh instance.
In `@core/quick-silver/quick-silver.ts`:
- Around line 127-134: The synchronous call to
this.sthis.ende.cbor.encodeToUint8 inside envelopes.map can throw and escape
Promise.allSettled; wrap the encode and subsequent _gateway.put call so any
synchronous encode errors are converted into rejected promises (e.g., use
Promise.resolve().then(() =>
this.sthis.ende.cbor.encodeToUint8(envelope)).then(bytes =>
this._gateway.put(url, bytes, this.sthis))) ensuring every mapper returns a
promise; update the mapper that builds key/URL (uses isQCDoc,
BuildURI.from(...).setParam(PARAM.KEY, key).URI()) to follow this pattern so
Promise.allSettled receives only promises and aggregates encode and network
errors consistently.
- Around line 24-28: Remove the unused URI import from the cement import line
and consolidate the two separate imports from "@fireproof/core-types-base" into
a single import that includes both PARAM and NotFoundError; specifically, update
the import that currently reads "import { BuildURI, Lazy, Logger, OnFunc, URI }
from '@adviser/cement';" to drop URI, and merge the separate "import { PARAM }
..." and "import { NotFoundError } ..." into one line "import { PARAM,
NotFoundError } from '@fireproof/core-types-base';" so there are no duplicate
imports and no unused symbols.
- Around line 82-93: In get<T extends DocTypes>, the code currently re-throws
raw gateway errors from this._gateway.get (result.Err()) so callers can't
reliably detect NotFoundError; update the result.Err() handling to detect the
gateway's "not found" condition (e.g., inspect the error type/HTTP status/code
returned by this._gateway.get or use any existing isNotFound helper) and throw a
NotFoundError(id) in that case, otherwise re-throw the original error; keep the
rest of the flow (decoding via this.sthis.ende.cbor.decodeUint8 and isQCDoc
check) unchanged so only the gateway error branch maps missing-key errors to
NotFoundError.
In `@core/quick-silver/SessionLog.md`:
- Around line 1-37: CI failed due to Prettier formatting on
core/quick-silver/SessionLog.md; run the formatter on that file (prettier
--write core/quick-silver/SessionLog.md), stage the updated file, and
commit/push the change so the PR contains the formatted file and the Prettier
check will pass.
In `@core/quick-silver/vitest.config.ts`:
- Line 6: The vitest project in this config uses the duplicate name "indexeddb"
which conflicts with core/tests/vitest.indexeddb.config.ts; update the project's
name property (the "name" field in this vitest config) to a unique identifier
such as "quick-silver" so it no longer collides and --project filtering works
correctly.
---
Nitpick comments:
In `@core/quick-silver/envelope.ts`:
- Around line 25-35: The three type-guard functions (isQCDoc, isQCFile,
isQCEnvelope) use `... instanceof type.errors === false`; change each to use the
idiomatic negation `!(... instanceof type.errors)` so the valid branch reads
`return !(QCDoc(x) instanceof type.errors)` (and similarly for QCFile and
QCEnvelope) to match arktype docs and improve readability.
In `@core/types/base/types.ts`:
- Around line 158-166: SuperThisOpts lacks an optional ende field so callers
cannot inject/override Ende (used by SuperThis and referenced as sthis.ende),
breaking clone() and factories that accept Partial<SuperThisOpts>; add readonly
ende?: Ende to SuperThisOpts (keeping it optional) and update any
factories/clone logic that construct SuperThis to prefer opts.ende over deriving
from txt so tests can pass a test double without monkey-patching.
```ts
return exception2Result(async () => {
  const key = getKey(url, sthis.logger);
  const store = getStore(url, sthis, joinDBName).name;
  console.log(">>>>>>", url.toString());
```
Remove debug console.log before merging.
This is a leftover debugging artifact. The structured logger call on the next line already provides the same URL information through the proper logging channel.
🧹 Proposed fix

```diff
-    console.log(">>>>>>", url.toString());
     sthis.logger.Debug().Url(url).Str("key", key).Str("store", store).Msg("putting");
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
sthis.logger.Debug().Url(url).Str("key", key).Str("store", store).Msg("putting");
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@core/gateways/indexeddb/gateway-impl.ts` at line 170, Remove the leftover
debug statement console.log(">>>>>>", url.toString()); — locate that exact call
in core/gateways/indexeddb/gateway-impl.ts (near where the structured logger is
used to log the URL) and delete it so only the structured logger call remains;
ensure you do not alter the subsequent logger invocation that records
url.toString().
| it("one instance per dbname", () => { | ||
| const db1 = fireproof("db"); | ||
| const db2 = fireproof("db"); | ||
| expect(db1).toBe(db2); | ||
| }); |
No cleanup for the singleton test — "db" entry persists in databasesByName
Since there is no afterEach / destroy() here, the "db" entry remains in the singleton map for the lifetime of the test process. Currently harmless because no other describe block reuses the name "db", but it defeats the point of testing singleton identity in isolation and will interact badly once databasesByName.unget(name) is implemented (see SessionLog TODO).
🛠️ Proposed fix

```diff
+  afterEach(async () => {
+    await db1.destroy();
+  });
+
   it("one instance per dbname", () => {
     const db1 = fireproof("db");
     const db2 = fireproof("db");
     expect(db1).toBe(db2);
   });
```

(Declare `db1` in the outer scope like the other suites, and add `beforeEach`/`afterEach` accordingly.)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@core/quick-silver/fireproof.test.ts` around lines 6 - 10, The test creates a
singleton via fireproof("db") but never cleans it up, leaving the "db" entry in
databasesByName and breaking isolation; fix it by declaring db1 (and db2) in the
outer scope of the test suite like other suites, add beforeEach to initialize
them (const db1 = fireproof("db")) and add afterEach that calls the appropriate
cleanup—preferably db1.destroy() if available, otherwise call
databasesByName.unget("db")—so the singleton is removed between tests and the
identity assertion in the it("one instance per dbname") remains isolated.
| describe("close", () => { | ||
| it("should invoke onClosed listeners", async () => { | ||
| const db = fireproof("close-test"); | ||
| let called = 0; | ||
| db.onClosed(() => { | ||
| called++; | ||
| }); | ||
| await db.close(); | ||
| expect(called).toBe(1); | ||
| }); | ||
| }); |
close test risks accumulating onClosed listeners on re-runs
The test never calls destroy() and has no afterEach. The "close-test" singleton therefore survives across test re-runs (watch mode). On a second run fireproof("close-test") returns the same already-closed instance; calling db.onClosed(...) pushes a second listener onto the same OnFunc. If OnFunc.invoke() does not clear listeners after firing, close() on the next run triggers both the old and the new listener, causing expect(called).toBe(1) to fail.
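The accumulation failure mode is easy to reproduce with a plain listener list; this is a hypothetical stand-in for `OnFunc`, under the assumption that `invoke()` does not clear its listeners after firing.

```typescript
// A bare listener list standing in for OnFunc: if invoke() never clears
// registered listeners, a rerun that registers again will fire both.
type Listener = () => void;
const listeners: Listener[] = [];
const onClosed = (fn: Listener) => listeners.push(fn);
const invoke = () => listeners.forEach((fn) => fn());

let called = 0;
onClosed(() => called++); // first run registers its listener
invoke(); // close() fires it once: called === 1

called = 0; // second (watch-mode) run starts its count fresh
onClosed(() => called++); // ...but registers another listener on the same instance
invoke(); // close() now fires both listeners
console.log(called); // 2 — the rerun's expect(called).toBe(1) would fail
```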
🛠️ Proposed fix

```diff
 describe("close", () => {
+  let db: Database;
+  beforeEach(() => {
+    db = fireproof("close-test");
+  });
+  afterEach(async () => {
+    await db.destroy();
+  });
   it("should invoke onClosed listeners", async () => {
-    const db = fireproof("close-test");
     let called = 0;
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| describe("close", () => { | |
| it("should invoke onClosed listeners", async () => { | |
| const db = fireproof("close-test"); | |
| let called = 0; | |
| db.onClosed(() => { | |
| called++; | |
| }); | |
| await db.close(); | |
| expect(called).toBe(1); | |
| }); | |
| }); | |
| describe("close", () => { | |
| let db: Database; | |
| beforeEach(() => { | |
| db = fireproof("close-test"); | |
| }); | |
| afterEach(async () => { | |
| await db.destroy(); | |
| }); | |
| it("should invoke onClosed listeners", async () => { | |
| let called = 0; | |
| db.onClosed(() => { | |
| called++; | |
| }); | |
| await db.close(); | |
| expect(called).toBe(1); | |
| }); | |
| }); |
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@core/quick-silver/fireproof.test.ts` around lines 139 - 149, The test creates
a singleton via fireproof("close-test") but never cleans it up, so on reruns its
onClosed listeners accumulate; fix by adding cleanup that calls destroy() for
that singleton after the test(s). For example, add an afterEach async hook that
retrieves the instance via fireproof("close-test") and awaits its destroy() (or
calls destroy() on the local db used in the test) so db.onClosed(...) listeners
are removed before the next run; ensure you reference the same identifier
(fireproof("close-test"), db.onClosed, db.close, destroy) when adding the
cleanup.
```ts
import type { QSConfigOpts } from "./types.js";
import { QuickSilver } from "./quick-silver.js";

const databasesByName = new KeyedResolvOnce<Database>();
```
Closed instances are never evicted from databasesByName.
After db.close() is called on a returned QuickSilver, the singleton cache still holds the now-closed instance. Any subsequent call to fireproof(sameName) will return the closed instance, making it impossible to re-open the database without a process restart or test isolation workaround.
Consider unregistering the entry when close() is invoked, e.g. by wiring an onClosed callback back to the cache:
💡 Suggested approach
export function fireproof(name: string, opts?: QSConfigOpts): Database {
- return databasesByName.get(name).once(() => new QuickSilver({ sthis: opts?.sthis ?? ensureSuperThis(), name }));
+ return databasesByName.get(name).once(() => {
+ const db = new QuickSilver({ sthis: opts?.sthis ?? ensureSuperThis(), name });
+ db.onClosed(() => databasesByName.unget(name));
+ return db;
+ });
}🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@core/quick-silver/fireproof.ts` at line 7, The cache stored in
databasesByName (KeyedResolvOnce) retains closed QuickSilver instances so
fireproof(name) can return a closed DB; update the QuickSilver creation/return
path to unregister the entry when the DB is closed by wiring a callback from the
QuickSilver instance (e.g., add an onClosed or close hook on the returned
Database/QuickSilver) that calls databasesByName.remove(name) or the appropriate
KeyedResolvOnce unregister method; ensure this hook is installed where the
Database/QuickSilver is created and returned so any db.close() triggers removal
from databasesByName allowing subsequent fireproof(name) calls to create a fresh
instance.
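The eviction pattern can be sketched with a plain `Map`-based cache; `Db` and `open` are illustrative stand-ins, not the real `KeyedResolvOnce`/QuickSilver API.

```typescript
// Map-based sketch of a singleton cache that evicts on close, so a
// subsequent open(name) creates a fresh instance instead of returning
// the closed one.
class Db {
  closed = false;
  private listeners: Array<() => void> = [];
  onClosed(fn: () => void) {
    this.listeners.push(fn);
  }
  close() {
    this.closed = true;
    this.listeners.forEach((fn) => fn());
  }
}

const cache = new Map<string, Db>();
function open(name: string): Db {
  let db = cache.get(name);
  if (!db) {
    db = new Db();
    db.onClosed(() => cache.delete(name)); // evict so the name can be reopened
    cache.set(name, db);
  }
  return db;
}

const a = open("x");
a.close(); // eviction hook removes "x" from the cache
const b = open("x"); // fresh, open instance rather than the closed one
console.log(a === b, b.closed); // false false
```

Without the `onClosed` eviction hook, `b` would be the same closed instance as `a`.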
`core/quick-silver/quick-silver.ts` (Outdated)
```ts
async get<T extends DocTypes>(id: string): Promise<DocWithId<T>> {
  await this.ready();
  const url = BuildURI.from(this._baseURL()).setParam(PARAM.KEY, id).URI();
  const result = await this._gateway.get(url, this.sthis);
  if (result.isErr()) throw result.Err();
  const decoded = this.sthis.ende.cbor.decodeUint8<QCDoc>(result.Ok());
  if (decoded.isErr()) throw decoded.Err();
  const envelope = decoded.Ok();
  if (!isQCDoc(envelope)) {
    throw new NotFoundError(`not a doc: ${id}`);
  }
  return { _id: envelope.id, ...(envelope.payload as T) } as DocWithId<T>;
```
get() doesn't surface NotFoundError for missing keys from the gateway
NotFoundError is only thrown on line 91 when the stored bytes decode to something other than a QCDoc. If the gateway itself returns an error for a missing key (the common not-found path), line 87 re-throws the raw gateway error. Callers (and future tests) that catch (e) and check e instanceof NotFoundError will miss it.
🛠️ Proposed fix

```diff
   const result = await this._gateway.get(url, this.sthis);
-  if (result.isErr()) throw result.Err();
+  if (result.isErr()) throw new NotFoundError(`not found: ${id}`);
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
async get<T extends DocTypes>(id: string): Promise<DocWithId<T>> {
  await this.ready();
  const url = BuildURI.from(this._baseURL()).setParam(PARAM.KEY, id).URI();
  const result = await this._gateway.get(url, this.sthis);
  if (result.isErr()) throw new NotFoundError(`not found: ${id}`);
  const decoded = this.sthis.ende.cbor.decodeUint8<QCDoc>(result.Ok());
  if (decoded.isErr()) throw decoded.Err();
  const envelope = decoded.Ok();
  if (!isQCDoc(envelope)) {
    throw new NotFoundError(`not a doc: ${id}`);
  }
  return { _id: envelope.id, ...(envelope.payload as T) } as DocWithId<T>;
}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@core/quick-silver/quick-silver.ts` around lines 82 - 93, In get<T extends
DocTypes>, the code currently re-throws raw gateway errors from
this._gateway.get (result.Err()) so callers can't reliably detect NotFoundError;
update the result.Err() handling to detect the gateway's "not found" condition
(e.g., inspect the error type/HTTP status/code returned by this._gateway.get or
use any existing isNotFound helper) and throw a NotFoundError(id) in that case,
otherwise re-throw the original error; keep the rest of the flow (decoding via
this.sthis.ende.cbor.decodeUint8 and isQCDoc check) unchanged so only the
gateway error branch maps missing-key errors to NotFoundError.
`core/quick-silver/quick-silver.ts` (Outdated)
```ts
const results = await Promise.allSettled(
  envelopes.map((envelope) => {
    const key = isQCDoc(envelope) ? envelope.id : envelope.cid;
    const url = BuildURI.from(this._baseURL()).setParam(PARAM.KEY, key).URI();
    const bytes = this.sthis.ende.cbor.encodeToUint8(envelope);
    return this._gateway.put(url, bytes, this.sthis);
  }),
);
```
Synchronous throw from encodeToUint8 escapes Promise.allSettled error aggregation
encodeToUint8(envelope) is called synchronously inside the .map() callback. If it throws (e.g. a payload: unknown containing a Symbol, circular reference, or anything else that CBOR can't serialize), the exception propagates out of .map() before Promise.allSettled ever receives the array. The structured error aggregation at lines 136–143 is bypassed and bulk rejects with a raw error.
🛠️ Proposed fix — wrap encode in a returned rejected promise so `allSettled` catches it
```diff
 const results = await Promise.allSettled(
   envelopes.map((envelope) => {
     const key = isQCDoc(envelope) ? envelope.id : envelope.cid;
     const url = BuildURI.from(this._baseURL()).setParam(PARAM.KEY, key).URI();
-    const bytes = this.sthis.ende.cbor.encodeToUint8(envelope);
-    return this._gateway.put(url, bytes, this.sthis);
+    let bytes: Uint8Array;
+    try {
+      bytes = this.sthis.ende.cbor.encodeToUint8(envelope);
+    } catch (e) {
+      return Promise.reject(e);
+    }
+    return this._gateway.put(url, bytes, this.sthis);
   }),
 );
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
const results = await Promise.allSettled(
  envelopes.map((envelope) => {
    const key = isQCDoc(envelope) ? envelope.id : envelope.cid;
    const url = BuildURI.from(this._baseURL()).setParam(PARAM.KEY, key).URI();
    let bytes: Uint8Array;
    try {
      bytes = this.sthis.ende.cbor.encodeToUint8(envelope);
    } catch (e) {
      return Promise.reject(e);
    }
    return this._gateway.put(url, bytes, this.sthis);
  }),
);
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@core/quick-silver/quick-silver.ts` around lines 127 - 134, The synchronous
call to this.sthis.ende.cbor.encodeToUint8 inside envelopes.map can throw and
escape Promise.allSettled; wrap the encode and subsequent _gateway.put call so
any synchronous encode errors are converted into rejected promises (e.g., use
Promise.resolve().then(() =>
this.sthis.ende.cbor.encodeToUint8(envelope)).then(bytes =>
this._gateway.put(url, bytes, this.sthis))) ensuring every mapper returns a
promise; update the mapper that builds key/URL (uses isQCDoc,
BuildURI.from(...).setParam(PARAM.KEY, key).URI()) to follow this pattern so
Promise.allSettled receives only promises and aggregates encode and network
errors consistently.
```markdown
# Quick-Silver Session Log

## Session 2026-02-21

### What was built

New offline database in `core/quick-silver/` implementing the Fireproof `Database` interface from scratch.

### Files created

- `fireproof.ts` — `fireproof(name, opts?)` factory, singleton per name via `KeyedResolvOnce<Database>`
- `quick-silver.ts` — `QuickSilver` class implementing `Database`
- `types.ts` — `QSConfigOpts { sthis?: SuperThis }`
- `envelope.ts` — arktype schemas `QCDoc`, `QCFile`, `QCEnvelope` + type guards `isQCDoc/isQCFile/isQCEnvelope`
- `fireproof.test.ts` — 12 passing tests
- `package.json` — dependencies: `@adviser/cement`, `@fireproof/core-runtime`, `@fireproof/core-gateways-indexeddb`, `@fireproof/core-types-base`, `arktype`

### Key decisions

- `_baseURL` bakes in `PARAM.STORE = "file"` and `PARAM.NAME` — do NOT add at call sites
- `ready = Lazy(...)` — memoized start, called internally, not required by user
- `onClosed = OnFunc<() => void>()` — field IS callable to register listeners; `close()` calls `.invoke()`
- `bulk` builds `QCDoc` + `QCFile` envelopes, puts via `IndexedDBGateway`, fires `_updateListeners` / `_noUpdateListeners`
- `put` delegates to `bulk([doc])`
- `del` calls `gateway.delete(url)`, `remove` is alias
- `compact` is no-op

### Changes to core packages

- `core/types/base/types.ts` — added `Ende`, `EndeJson`, `EndeCbor` interfaces; added `readonly ende: Ende` to `SuperThis`
- `core/runtime/utils.ts` — implemented `ende` in `SuperThisImpl` using `cborg` + `this.txt`; `decode*` wrapped in `exception2Result`

### Status: 12 tests passing

### TODO next session

- `changes`
- `allDocs` / `allDocuments`
- `query`
- Real file bytes in `QCFile` (currently `new Uint8Array()` placeholder)
- `databasesByName.unget(name)` on close/destroy so instances can be re-created
- `clone()` in `SuperThisImpl` needs `ende` wired in
```
Prettier formatting failure blocks CI
The pipeline reports:
Prettier formatting check failed for core/quick-silver/SessionLog.md. Run 'prettier --write' to fix code style issues.
Run prettier --write core/quick-silver/SessionLog.md before merging.
🧰 Tools
🪛 GitHub Actions: `@fireproof/core`
[error] 1-1: Prettier formatting check failed for core/quick-silver/SessionLog.md. Run 'prettier --write' to fix code style issues.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@core/quick-silver/SessionLog.md` around lines 1 - 37, CI failed due to
Prettier formatting on core/quick-silver/SessionLog.md; run the formatter on
that file (prettier --write core/quick-silver/SessionLog.md), stage the updated
file, and commit/push the change so the PR contains the formatted file and the
Prettier check will pass.
Actionable comments posted: 11
🧹 Nitpick comments (15)
cloud/quick-silver/api/package.json (1)
1-12: Missing `scripts` and `exports` fields.

As a CLI/API package in a monorepo, consider adding at minimum a `build` script and an `exports` (or `bin`) entry point so workspace tooling and consumers can locate the compiled output. Without `scripts`, CI pipelines that iterate over workspace packages will silently skip building this package.

♻️ Suggested additions

```diff
 {
   "name": "@fireproof/cloud-quick-silver-api",
   "version": "0.0.0",
   "type": "module",
   "private": true,
+  "exports": {
+    ".": "./dist/index.js"
+  },
+  "bin": {
+    "qs": "./dist/cli.js"
+  },
+  "scripts": {
+    "build": "tsc --noEmit false --outDir dist",
+    "dev": "tsc --watch"
+  },
   "dependencies": {
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@cloud/quick-silver/api/package.json` around lines 1 - 12, Add a minimal build script and an exports entry to package.json so workspace tooling and consumers can find the compiled output: update the package.json to include a "scripts" object with at least a "build" command (e.g., invoking your TypeScript or bundler step) and add an "exports" (or "bin" for a CLI) field that points to the compiled entry (e.g., main/dist file) so tools that iterate workspaces and consumers can locate the package entrypoint; target the package.json keys in this file (the "scripts" and "exports"/"bin" fields) when making the change.

`cloud/quick-silver/cf/qs-encoder.ts` (1)
19-21: `JSON.stringify` in `decode` can throw on non-serializable values.

`JSON.stringify` throws on `BigInt` values and can throw on circular references. Since `data` is `unknown`, wrapping this in error handling would be safer.

Proposed fix

```diff
 decode(data: unknown): Promise<Result<string>> {
-  return Promise.resolve(Result.Ok(JSON.stringify(data)));
+  try {
+    return Promise.resolve(Result.Ok(JSON.stringify(data)));
+  } catch (e) {
+    return Promise.resolve(Result.Err(e as Error));
+  }
 }
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@cloud/quick-silver/cf/qs-encoder.ts` around lines 19 - 21, The decode method currently calls JSON.stringify(data) directly which can throw on BigInt or circular refs; update qs-encoder.ts's decode(data: unknown): Promise<Result<string>> to wrap JSON.stringify in try/catch, returning Result.Ok(string) on success and Result.Err with a descriptive error (or error.message) on failure; ensure you catch thrown values and convert them to a string for the Result.Err so callers of decode can handle serialization failures safely.

`core/quick-silver/quick-silver.ts` (2)
189-199: Several methods throw "not implemented" — consider documenting the plan or using a shared sentinel.
changes,allDocs,allDocuments, andqueryall thrownew Error("not implemented"). This is fine for the initial implementation per the PR description, but callers that discover this at runtime get an unhelpful stack. A shared helper (e.g.,notImplemented(methodName)) or a custom error class would aid debugging.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@core/quick-silver/quick-silver.ts` around lines 189 - 199, Replace the ad-hoc throws in methods changes, allDocs, allDocuments (and any other stubs like query) with a shared sentinel so runtime errors include context: add a helper function notImplemented(methodName: string) or a NotImplementedError class and call it from each stub (e.g., throw notImplemented("changes")) so stack traces and logs indicate which method is unimplemented; update all occurrences of new Error("not implemented") in QuickSilver class to use this helper and include the method name and optional guidance text.
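The shared-sentinel idea can be sketched in a few lines; `NotImplementedError`, `notImplemented`, and `StubDb` are illustrative names, not part of the PR:

```typescript
// Hypothetical sketch of a shared "not implemented" sentinel.
class NotImplementedError extends Error {
  constructor(readonly method: string) {
    super(`not implemented: ${method}`);
    this.name = "NotImplementedError";
  }
}

function notImplemented(method: string): never {
  throw new NotImplementedError(method);
}

// A stub method then names itself in the error it raises:
class StubDb {
  allDocs(): never {
    return notImplemented("allDocs");
  }
}
```

Callers now see which method is missing in the message, and can detect the case with `instanceof NotImplementedError`.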
109-109: Object spread in `get()` return could theoretically shadow `_id` if data unexpectedly contains it.

Line 109: `{ _id: envelope._.id, ...(envelope.data as T), _: ... }` — the spread of `envelope.data` is placed after `_id`, so any `_id` in `data` would overwrite the explicit one. Currently `bulk()` strips `_id` via `stripper(...)` before storing, so this is safe in practice. However, data could come from an external sync source in the future.

Consider placing the spread before `_id` to make the explicit value authoritative:

Proposed reorder

```diff
-return { _id: envelope._.id, ...(envelope.data as T), _: { ...envelope._, fileRefs: docFile } } as DocWithId<T>;
+return { ...(envelope.data as T), _id: envelope._.id, _: { ...envelope._, fileRefs: docFile } } as DocWithId<T>;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@core/quick-silver/quick-silver.ts` at line 109, In get(), the object spread of envelope.data currently follows the explicit _id which allows a _id inside data to overwrite envelope._.id; change the return construction so the spread of (envelope.data as T) comes first and the explicit _id: envelope._.id is placed after it (keeping _: { ...envelope._, fileRefs: docFile } as-is) so the explicit envelope id is authoritative; also ensure this aligns with existing bulk()/stripper behavior but protects against future external data sources providing a conflicting _id.cloud/quick-silver/cf/qs-db-evento.ts (1)
79-81: Full table scan in query handler — no pagination or filtering.
SELECT id, cid, data, synced FROM docsreturns all rows without aLIMITorWHEREclause. For large databases this could be slow and memory-intensive, especially since each row'sdata(binary payload) is streamed through WebSocket messages. The comment in the types file notes "filter / pred: coming later" — consider adding at minimum aLIMITas a safety net.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cloud/quick-silver/cf/qs-db-evento.ts` around lines 79 - 81, The query handler currently does a full table scan via sql.exec(...) selecting all rows from docs (SELECT id, cid, data, synced FROM docs) which can OOM; modify the handler around the cursor creation to use a safe paginated/batched query (add a LIMIT and OFFSET or a WHERE filter param) and iterate in chunks so large binary data is streamed in manageable batches; ensure sql.exec is called with parameterized LIMIT/OFFSET or a cursor-based predicate (e.g., last_id) and update the loop that processes the returned cursor to fetch successive pages rather than loading all rows at once.core/quick-silver/fireproof.test.ts (1)
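One way to batch the scan is keyset pagination: fetch rows ordered by `id`, remember the last `id` seen, and use it as the predicate for the next page. The sketch below simulates the SQL (`WHERE id > ? ORDER BY id LIMIT ?`) against an in-memory array; the row shape and helper names are assumptions, not the Durable Object API:

```typescript
// Illustrative keyset pagination. `fetchPage` stands in for a parameterized
// query like `SELECT id, cid FROM docs WHERE id > ? ORDER BY id LIMIT ?`.
interface DocRow {
  id: string;
  cid: string;
}

function fetchPage(rows: DocRow[], afterId: string, limit: number): DocRow[] {
  return rows
    .filter((r) => r.id > afterId)
    .sort((a, b) => (a.id < b.id ? -1 : 1))
    .slice(0, limit);
}

// Yields successive pages until the table is exhausted, so only one page of
// binary payloads is in memory at a time.
function* paginate(rows: DocRow[], pageSize: number): Generator<DocRow[]> {
  let lastId = "";
  for (;;) {
    const page = fetchPage(rows, lastId, pageSize);
    if (page.length === 0) return;
    yield page;
    lastId = page[page.length - 1].id;
  }
}
```

Keyset pagination avoids the drift that `OFFSET` suffers when rows are inserted between pages, which matters for a live sync stream.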
142-144:NotFoundErrorassertion is too loose — only checks that something is thrown.
rejects.toThrow()without an argument accepts any error. This won't catch regressions ifget()starts throwing a different error type for missing keys. Consider asserting the specific error:Proposed stricter assertion
- await expect(db.get("missing")).rejects.toThrow(); + await expect(db.get("missing")).rejects.toThrow(/not found|NotFound/i);🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@core/quick-silver/fireproof.test.ts` around lines 142 - 144, The test currently only asserts that something is thrown when calling db.get("missing"); tighten this to assert the specific NotFoundError type to prevent regressions: update the assertion on db.get("missing") to expect a NotFoundError (for example using Jest's rejects.toThrowError(NotFoundError) or rejects.toBeInstanceOf(NotFoundError)) so the test verifies db.get(...) throws the correct NotFoundError class rather than any error.cloud/quick-silver/types/index.ts (1)
35-37:QSQueryis an empty schema —isQSQuerymatches any object.Since
QSQueryistype({}), the guardisQSQuery(line 187) will returntruefor every non-null object. While the comment on line 36 notes filters are "coming later," be aware that any validation or routing logic usingisQSQueryprovides no discrimination until the schema is fleshed out.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cloud/quick-silver/types/index.ts` around lines 35 - 37, QSQuery is defined as an empty schema (type({})), so the generated type guard isQSQuery will match any non-null object and provide no validation; update QSQuery (and its declaration) to include the intended properties/structure for queries (e.g., optional filter/predicate fields or a discriminant like { kind: 'query', ... } ) or replace it with a more specific union/record type so isQSQuery can meaningfully discriminate inputs; locate the QSQuery declaration and any uses of isQSQuery to add the proper fields/optional flags or a discriminant property and adjust dependent validation/routing accordingly.core/quick-silver/envelope.ts (1)
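Until the real filter fields land, even a single discriminant property makes the guard meaningful. The sketch below is plain TypeScript rather than the schema library used in the PR, and the `kind`/`limit` fields are invented for illustration:

```typescript
// Illustrative discriminated query shape; field names are assumptions.
interface QSQueryShape {
  kind: "query";
  limit?: number;
}

// Rejects arbitrary objects by requiring the discriminant.
function isQSQueryShape(v: unknown): v is QSQueryShape {
  return (
    typeof v === "object" &&
    v !== null &&
    (v as { kind?: unknown }).kind === "query"
  );
}
```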
38-42:type.instanceOf(Uint8Array)may fail after CBOR round-trip through certain transports.If envelopes are ever decoded from CBOR that came over a network boundary (e.g., from a Cloudflare Durable Object response), the
payloadbytes could arrive as a plainArrayBufferor a typed-array subclass, failing theinstanceof Uint8Arraycheck. In the current local IndexedDB path this should be fine sincecborgreturns properUint8Arrayvalues, but keep this in mind as the sync layer matures.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@core/quick-silver/envelope.ts` around lines 38 - 42, The QCFile schema uses type.instanceOf(Uint8Array) for payload which can fail when CBOR/transport yields an ArrayBuffer or other ArrayBufferView; update the QCFile payload validator to accept ArrayBuffer and ArrayBufferView (or a union of ArrayBuffer and Uint8Array) and ensure the decoded value is normalized into a Uint8Array (coerce via new Uint8Array(value) or similar) before further processing; target the QCFile declaration and its payload validator (and any deserialization/normalization path that consumes QCFile) so consumers always get a Uint8Array even if the input was an ArrayBuffer or other typed-array subclass.cloud/quick-silver/cf/qs-send-provider.ts (1)
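If the transport concern ever materializes, a small normalizer at the decode boundary keeps downstream code on `Uint8Array`. A minimal sketch (the function name `toUint8` is an assumption):

```typescript
// Coerce an ArrayBuffer or any typed-array view into a Uint8Array,
// without copying when the input already is one.
function toUint8(value: ArrayBuffer | ArrayBufferView): Uint8Array {
  if (value instanceof Uint8Array) return value;
  if (ArrayBuffer.isView(value)) {
    // Reuse the underlying buffer, respecting the view's offset and length.
    return new Uint8Array(value.buffer, value.byteOffset, value.byteLength);
  }
  return new Uint8Array(value);
}
```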
16-19:ws.sendcan throw on a closed WebSocket — no local guard.If the WebSocket is already closed when
send()is called,ws.send()will throw synchronously. The caller chain relies on the Evento error handler to catch this. This works but is fragile — atry/catchhere would make the provider more robust and return aResult.Errinstead.Optional hardening
async send<T>(_ctx: HandleTriggerCtx<ArrayBuffer, unknown, unknown>, res: unknown): Promise<Result<T>> { - this.ws.send(this.sthis.ende.cbor.encodeToUint8(res as QSOpRes)); - return Result.Ok(res as T); + try { + this.ws.send(this.sthis.ende.cbor.encodeToUint8(res as QSOpRes)); + return Result.Ok(res as T); + } catch (e) { + return Result.Err(e as Error); + } }🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cloud/quick-silver/cf/qs-send-provider.ts` around lines 16 - 19, The send method currently calls this.ws.send(this.sthis.ende.cbor.encodeToUint8(res as QSOpRes)) without guarding for a closed socket; wrap that call (including the encode step) in a try/catch inside async send<T>(_ctx: HandleTriggerCtx<ArrayBuffer, unknown, unknown>, res: unknown): Promise<Result<T>> so synchronous throws from ws.send are caught and you return Result.Err with the error, otherwise return Result.Ok(res as T); reference the send method, this.ws.send, this.sthis.ende.cbor.encodeToUint8, QSOpRes and the Result.Ok/Result.Err paths when making the change.cloud/quick-silver/api/cli.ts (2)
88-93:JSON.parsefailure produces a raw stack trace — consider a user-friendly error.If the user passes malformed JSON in
--pkg, theJSON.parseat line 92 throws a genericSyntaxError. Wrapping with a try-catch and re-throwing with context would improve the CLI experience.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cloud/quick-silver/api/cli.ts` around lines 88 - 93, The JSON.parse call inside the mapping that builds ops from args.pkg can throw a raw SyntaxError for malformed JSON; wrap the JSON.parse(pair.slice(comma + 1)) in a try-catch inside the map (the block creating const ops: QSPut[]), and on error throw a new Error that includes the offending pair or key and a user-friendly message (e.g., `Invalid JSON for --pkg "<pair>"`), preserving the original error message for debugging; ensure you still pass the parsed object to sthis.ende.cbor.encodeToUint8 when successful.
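A thin wrapper that re-throws with the offending input keeps the CLI output readable; `parsePkgValue` is a hypothetical helper name, not code from the PR:

```typescript
// Hypothetical helper: parse one --pkg value, failing with context
// instead of a bare SyntaxError stack trace.
function parsePkgValue(pair: string, json: string): unknown {
  try {
    return JSON.parse(json);
  } catch (e) {
    throw new Error(`Invalid JSON for --pkg "${pair}": ${(e as Error).message}`);
  }
}
```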
128-147: Double `api.close()` in subscribe — signal handler and post-loop both close.

When a signal fires, `cleanup()` closes the handle and api, then calls `process.exit()`. If the stream closes before `process.exit()` runs, the `for await` loop exits and line 145 calls `api.close()` again. This is benign (close is idempotent) but the logic is slightly tangled.

A cleaner approach: have `cleanup` only close the handle (which terminates the stream), and let the normal post-loop code handle `api.close()` and exit.

Suggested simplification

```diff
 const cleanup = () => {
   handle.close();
-  api
-    .close()
-    .then(() => process.exit(0))
-    .catch(() => process.exit(1));
 };
 process.on("SIGINT", cleanup);
 process.on("SIGTERM", cleanup);
 for await (const r of handle.events) {
   console.log(JSON.stringify(r));
 }
 await api.close();
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cloud/quick-silver/api/cli.ts` around lines 128 - 147, The cleanup function currently closes both handle and api and calls process.exit(), causing api.close() to be invoked twice; change cleanup (used by process.on handlers) so it only closes the subscription handle (call handle.close() only) and does not call api.close() or process.exit(); keep the existing for await (const r of handle.events) loop and the post-loop await api.close() + process.exit handling so normal shutdown after the stream ends performs the API close and exit once.cloud/quick-silver/api/qs-api.ts (3)
71-78: CBOR decode failures are silently swallowed — consider logging.When
decoded.isErr()at line 73, the message is dropped with no diagnostic output. A log or metric here would help debug protocol mismatches or corrupt messages.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cloud/quick-silver/api/qs-api.ts` around lines 71 - 78, The CBOR decode failure in the ws.onmessage handler currently drops messages when decoded.isErr() is true; update the handler to log the decode error and some context before returning. Specifically, after calling sthis.ende.cbor.decodeUint8<QSMsg>(...) and when decoded.isErr() is true, capture the error (decoded.Err() or equivalent), and emit a diagnostic via your logging facility (e.g., this.logger.error or console.error) including the error, the raw evt.data size/preview, and a short note that the message was not processed; keep the existing early return and leave the successful path (decoded.Ok() → isQSMsg → this.onMessage.invoke(msg)) unchanged.
84-91: Potential double-abort on writer when connect fails and timeout fires.

If `this.connect()` rejects, the `.catch` block (e.g., line 133-136) calls `writer.abort(e)`, and `done.reject(e)` causes `withTimeout` to also call `writer.abort(...)`. Both paths race to abort the same writer. While the second abort likely throws and is ignored, the control flow is fragile.

Consider guarding with an `aborted` flag, or resolving `done` in the catch path so the timeout branch becomes a no-op.
Verify each finding against the current code and only fix it if needed. In `@cloud/quick-silver/api/qs-api.ts` around lines 84 - 91, The withTimeout function can race with the connect() rejection and call writer.abort twice; fix by adding a shared abort guard (e.g., an aborted boolean or an AbortController) that's closed when any path aborts so further abort calls are no-ops: update the connect() catch path that calls writer.abort(e) and done.reject(e) to set aborted = true (or call controller.abort()) before aborting, and modify withTimeout (inside timeouted.then) to check aborted before calling writer.abort(...) and to avoid calling unreg() twice; reference symbols: withTimeout, connect, done, writer, unreg, timeouted, DEFAULT_TIMEOUT_MS.
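The `aborted`-flag idea reduces to a tiny idempotent wrapper shared by both failure paths; the helper name `abortOnce` is an assumption:

```typescript
// Wrap an abort function so only the first caller wins; later calls are no-ops.
function abortOnce(abort: (reason: unknown) => void): (reason: unknown) => void {
  let aborted = false;
  return (reason: unknown) => {
    if (aborted) return;
    aborted = true;
    abort(reason);
  };
}
```

Both the `connect()` catch path and the timeout branch would then call the same wrapped function, so whichever fires second does nothing.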
107-107: `writer.write(msg)` promises are not awaited — backpressure and errors are ignored.

All `onMessage` callbacks call `writer.write(msg)` without `await`. If the writable side is under backpressure or errors, the rejection goes unhandled. Consider awaiting or chaining `.catch(...)` on each write.

Also applies to: 152-152, 198-198, 240-240
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cloud/quick-silver/api/qs-api.ts` at line 107, The calls to writer.write(msg) in the onMessage callbacks are not awaited, so backpressure/errors produce unhandled rejections; update each onMessage handler (where writer.write(msg) is invoked) to await the promise and wrap it in try/catch (or chain .catch()) to handle write failures and log/cleanup appropriately — e.g., replace bare writer.write(msg) with await writer.write(msg) inside an async handler or await writer.ready then try { await writer.write(msg) } catch (err) { /* log/close writer */ } and apply the same change to all occurrences of writer.write(msg).cloud/quick-silver/cf/qs-room-evento.ts (1)
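One way to make the writes uniform is a small helper that awaits each write and funnels failures to a single handler; `safeWrite` and the minimal writer shape are illustrative:

```typescript
// Await a stream write and report (rather than leak) any rejection.
async function safeWrite<T>(
  writer: { write(value: T): Promise<void> },
  msg: T,
  onError: (e: unknown) => void,
): Promise<void> {
  try {
    await writer.write(msg);
  } catch (e) {
    onError(e);
  }
}
```

Awaiting `write` also respects the stream's backpressure signal instead of queueing unboundedly.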
75-79: Silent error swallowing in the forward handler leaves the client without feedback.When
storeWs.sendfails, the error is logged but noQSResErris sent back to the client. The client's pending request will hang until its timeout fires. Consider sending an error response so the client can fail fast.Suggested improvement
try { storeWs.send(ctx.request); } catch (e) { console.log("[QSRoom] send to store failed (client may have disconnected):", e); + await ctx.send.send(ctx, { + type: "QSResErr", + tid: (ctx.validated as { tid: string }).tid, + arg: 0, + error: `store send failed: ${String(e)}`, + } satisfies QSResErr); }🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cloud/quick-silver/cf/qs-room-evento.ts` around lines 75 - 79, The try/catch around storeWs.send currently only logs the error and leaves the requester hanging; update the catch to send a QSResErr back to the original requester so they receive immediate failure. In the catch block for storeWs.send in qs-room-evento.ts, build and send a QSResErr response (using the same request id/context) to the client-facing socket or response helper used elsewhere (e.g., the same path that normally sends successful responses), include the error message and an appropriate error code, and then rethrow or return to stop further processing; reference storeWs.send and the QSResErr response type when locating where to add the send.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@cloud/quick-silver/api/qs-api.ts`:
- Around line 290-298: The close() method currently only closes the WebSocket
but leaves the resolved QSApiImpl cached in KeyedResolvOnce, causing future
QSApi() calls to return a dead instance; update close() to also evict the
corresponding entry from the connections cache (the shared Map/KeyedResolvOnce
that stores resolved QSApiImpls) so subsequent QSApi(opts) will recreate a fresh
QSApiImpl, or alternatively modify connect() to detect a closed ws (check
ws.readyState) and recreate/replace the QSApiImpl in the connections cache;
target the close() implementation in QSApiImpl.close() and the
connections/KeyedResolvOnce cache handling to remove or replace the cached value
when closing/reconnecting.
- Around line 50-54: The connection cache uses KeyedResolvOnce keyed only by
opts.url so QSApi(QSApiOpts) can return an instance with the wrong db/auth;
change the key used in connections.get(...) to a composite fingerprint of the
full options (e.g., include opts.url, opts.db and a stable representation of
opts.auth) so each distinct (url, db, auth) tuple maps to its own QSApiImpl
instance; update the call site in QSApi (the connections.get(...) invocation) to
compute and use that composite key (or serialize the relevant fields of
QSApiOpts) rather than only opts.url.
In `@cloud/quick-silver/cf/qs-db-evento.ts`:
- Around line 32-38: The PUT handler currently calls sql.exec with arguments
(req.key, req.key, req.data) which writes the same value into both id and cid;
change the handler so cid is correct by either (A) adding a cid property to the
request type (QSReqPut) and using req.cid as the second argument to sql.exec, or
(B) computing the content identifier from req.data (e.g., hash the serialized
req.data into a CID) and passing that computed cid as the second argument;
update the insert call in the PUT handler (the sql.exec invocation) and any
related types so QSQueryRowMeta continues to surface distinct id and cid values.
- Around line 96-121: The wildcard (EventoType.WildCard) and error
(EventoType.Error) handlers currently hardcode tid:"unknown" and arg:0; update
both handlers to extract tid and arg from ctx.enRequest (e.g., const { tid, arg
} = ctx.enRequest ?? {}) and use those values when building the QSResErr sent
via ctx.send.send so the client can correlate responses; keep the fallback
values (e.g., "unknown" and 0) only if ctx.enRequest.tid/arg are missing, and
ensure the object still satisfies QSResErr.
In `@cloud/quick-silver/cf/qs-db-store.ts`:
- Around line 47-49: The webSocketClose method passes the raw close code to
ws.close which will throw if code is 1006 (reserved); normalize 1006 to 1000
before calling ws.close. Update webSocketClose(ws: WebSocket, code: number,
reason: string) to map if (code === 1006) code = 1000 (same behavior as QSRoom's
mapping) and then call ws.close(code, reason) so abnormal disconnects don't
cause an invalid close frame to be sent.
In `@cloud/quick-silver/cf/qs-encoder.ts`:
- Around line 13-17: The encode method currently swallows CBOR decode failures
by returning Result.Ok() when decoded.isErr() is true; update the error path in
async encode(args: ArrayBuffer): Promise<Result<unknown>> to propagate the
decode error instead of returning a successful empty result — use the error from
this.sthis.ende.cbor.decodeUint8 (the decoded variable) and return a
Result.Err(...) (or at minimum log the error then return
Result.Err(decoded.Err())) so callers receive the failure; ensure the successful
path still returns Result.Ok(decoded.Ok()).
In `@cloud/quick-silver/cf/qs-room.ts`:
- Around line 97-106: The webSocketClose function currently closes and clears
this.stores for all clients; change it to only tear down resources tied to the
disconnecting client (webSocketClose). Instead of iterating this.stores and
calling storeWs.close for every entry then this.stores.clear(), remove only the
client's subscriptions and pending tids (e.g., pendingTids, subscriptions or
client-specific store refs) and decrement reference counts for shared store
WebSockets; only call storeWs.close when its reference count reaches zero.
Update or add a mapping such as client→Set(storeId) or a refCount map for
this.stores and modify webSocketClose to iterate that client-specific set to
decrement/close accordingly, then remove the client from the client→stores map
and finally close the given ws with safeCode.
- Around line 74-95: getStoreWs currently installs a single message handler that
always uses the first call's clientWs, causing responses to be routed to the
wrong client; change the design to maintain a per-db store WebSocket plus a
tid→clientWs Map (or WeakMap) so the store message handler (installed in
getStoreWs) looks up the transaction id (tid) from each incoming message and
forwards non-subscribe responses to the correct client WebSocket instead of the
captured clientWs; ensure the forward path in qs-room-evento.ts registers
tid→clientWs before sending requests so replies can be routed, and remove the
unsafe non-null assertions in getStoreWs (e.g., the uses of
this.env.QS_DB_STORE.get(...) result and the final `!` return) by adding runtime
guards that log and throw/return on failure.
- Line 20: The LRUMap 'stores' currently evicts entries without closing their
WebSocket objects, leaking connections; register an eviction handler on the
LRUMap instance by calling its onDelete callback (on the stores field used in
qs-room) and close the evicted WebSocket inside that callback (wrap close in a
try/catch to ignore already-closed errors) so evicted connections are properly
cleaned up.
In `@core/quick-silver/quick-silver.ts`:
- Around line 128-138: The mapped destructured parameter "filename" in the
Promise.all mapping over Object.entries(raw._files ?? {}) is unused; rename it
to indicate intentional non-use (e.g., "_filename" or "_") in the mapping
callback so linters and readers know it's unused—update the map callback
signature (the function that returns QCFile objects and uses file.name, payload,
hashBlobAsync, etc.) to use the new underscore-prefixed parameter while leaving
all other logic (creating QCFile, computing cid, payload) unchanged.
In `@core/runtime/utils.ts`:
- Around line 618-622: The type annotation for the parameter of hashBlobAsync is
using the wrong identifier ToUint8; change it to the correctly imported ToUInt8
to match other usages in this file (and the import from
`@fireproof/core-types-base`). Update the function signature export async function
hashBlobAsync(uint8: ToUInt8): Promise<string> so the parameter type aligns with
top_uint8 and other helpers like coerceIntoUint8; no other changes to the
implementation using top_uint8, sha256, CID.create, or json.code are needed.
---
Duplicate comments:
In `@core/quick-silver/fireproof.test.ts`:
- Around line 170-180: The test creates a singleton via fireproof("close-test")
but never calls destroy(), so the singleton and its onClosed listener remain
registered; update the "close" test (the describe block containing it and the it
block using fireproof("close-test")) to call db.destroy() after awaiting
db.close() (or in a finally/afterEach) to ensure the singleton is cleaned up and
listeners are removed; reference the fireproof function, the "close-test"
identifier and the db.destroy() method when applying the change.
- Around line 7-12: The test creates a singleton via fireproof("db") but never
cleans it up; add an afterEach that destroys or removes the singleton so it
doesn't leak into other tests — e.g. keep references to the created instance(s)
(db1/db2) in the test scope and add afterEach(() => db1?.destroy?.() ?? /*
fallback remove from internal map if no destroy */ (fireproof as
any)._instances?.delete("db")); ensure you call the instance destroy method if
available (destroy) or otherwise remove the "db" entry from the singleton
storage after each test.
In `@core/quick-silver/quick-silver.ts`:
- Around line 24-29: Remove the unused URI import and consolidate duplicate
imports from `@fireproof/core-types-base`: replace the separate PARAM and
NotFoundError imports with a single import that includes both symbols, and
remove URI from the import list at the top of quick-silver.ts so only used
symbols (BuildURI, Lazy, Logger, OnFunc, stripper, PARAM, NotFoundError, etc.)
remain.
- Around line 151-158: The map passed to Promise.allSettled can throw
synchronously because this.sthis.ende.cbor.encodeToUint8(envelope) may throw;
wrap the encode and put call in a Promise-returning block so any synchronous
errors become rejected promises instead of escaping (e.g., inside
flatEnvelopes.map produce Promise.resolve().then(() => { const key =
isQCDoc(envelope)? envelope._.id : envelope._.cid; const url =
BuildURI.from(this._baseURL()).setParam(PARAM.KEY, key).URI(); const bytes =
this.sthis.ende.cbor.encodeToUint8(envelope); return this._gateway.put(url,
bytes, this.sthis); })), ensuring all references (flatEnvelopes, isQCDoc,
BuildURI.from, PARAM.KEY, this.sthis.ende.cbor.encodeToUint8, this._gateway.put)
are used unchanged so Promise.allSettled receives only promises.
- Around line 83-110: The get<T extends DocTypes> method currently re-throws raw
gateway errors (e.g., from result.Err() and fileResult.Err()) which prevents
callers from detecting missing-key cases via instanceof NotFoundError; change
the error handling after each this._gateway.get call (the result and fileResult
branches inside get) to inspect the gateway error and, if it represents a
404/missing-key (e.g., a status/code of 404 or a gateway-specific NotFound
indicator), throw a new NotFoundError(`not found: ${id}`) or NotFoundError(`not
a file: ${cid}`) respectively, otherwise re-throw the original error; do this
for both the initial result.isErr() and the fileResult.isErr() checks so missing
keys are converted to NotFoundError while other gateway errors remain unchanged.
---
Nitpick comments:
In `@cloud/quick-silver/api/cli.ts`:
- Around line 88-93: The JSON.parse call inside the mapping that builds ops from
args.pkg can throw a raw SyntaxError for malformed JSON; wrap the
JSON.parse(pair.slice(comma + 1)) in a try-catch inside the map (the block
creating const ops: QSPut[]), and on error throw a new Error that includes the
offending pair or key and a user-friendly message (e.g., `Invalid JSON for --pkg
"<pair>"`), preserving the original error message for debugging; ensure you
still pass the parsed object to sthis.ende.cbor.encodeToUint8 when successful.
- Around line 128-147: The cleanup function currently closes both handle and api
and calls process.exit(), causing api.close() to be invoked twice; change
cleanup (used by process.on handlers) so it only closes the subscription handle
(call handle.close() only) and does not call api.close() or process.exit(); keep
the existing for await (const r of handle.events) loop and the post-loop await
api.close() + process.exit handling so normal shutdown after the stream ends
performs the API close and exit once.
In `@cloud/quick-silver/api/package.json`:
- Around line 1-12: Add a minimal build script and an exports entry to
package.json so workspace tooling and consumers can find the compiled output:
update the package.json to include a "scripts" object with at least a "build"
command (e.g., invoking your TypeScript or bundler step) and add an "exports"
(or "bin" for a CLI) field that points to the compiled entry (e.g., main/dist
file) so tools that iterate workspaces and consumers can locate the package
entrypoint; target the package.json keys in this file (the "scripts" and
"exports"/"bin" fields) when making the change.
In `@cloud/quick-silver/api/qs-api.ts`:
- Around line 71-78: The CBOR decode failure in the ws.onmessage handler
currently drops messages when decoded.isErr() is true; update the handler to log
the decode error and some context before returning. Specifically, after calling
sthis.ende.cbor.decodeUint8<QSMsg>(...) and when decoded.isErr() is true,
capture the error (decoded.Err() or equivalent), and emit a diagnostic via your
logging facility (e.g., this.logger.error or console.error) including the error,
the raw evt.data size/preview, and a short note that the message was not
processed; keep the existing early return and leave the successful path
(decoded.Ok() → isQSMsg → this.onMessage.invoke(msg)) unchanged.
- Around line 84-91: The withTimeout function can race with the connect()
rejection and call writer.abort twice; fix by adding a shared abort guard (e.g.,
an aborted boolean or an AbortController) that's closed when any path aborts so
further abort calls are no-ops: update the connect() catch path that calls
writer.abort(e) and done.reject(e) to set aborted = true (or call
controller.abort()) before aborting, and modify withTimeout (inside
timeouted.then) to check aborted before calling writer.abort(...) and to avoid
calling unreg() twice; reference symbols: withTimeout, connect, done, writer,
unreg, timeouted, DEFAULT_TIMEOUT_MS.
- Line 107: The calls to writer.write(msg) in the onMessage callbacks are not
awaited, so backpressure/errors produce unhandled rejections; update each
onMessage handler (where writer.write(msg) is invoked) to await the promise and
wrap it in try/catch (or chain .catch()) to handle write failures and
log/cleanup appropriately — e.g., replace bare writer.write(msg) with await
writer.write(msg) inside an async handler or await writer.ready then try { await
writer.write(msg) } catch (err) { /* log/close writer */ } and apply the same
change to all occurrences of writer.write(msg).
In `@cloud/quick-silver/cf/qs-db-evento.ts`:
- Around line 79-81: The query handler currently does a full table scan via
sql.exec(...) selecting all rows from docs (SELECT id, cid, data, synced FROM
docs) which can OOM; modify the handler around the cursor creation to use a safe
paginated/batched query (add a LIMIT and OFFSET or a WHERE filter param) and
iterate in chunks so large binary data is streamed in manageable batches; ensure
sql.exec is called with parameterized LIMIT/OFFSET or a cursor-based predicate
(e.g., last_id) and update the loop that processes the returned cursor to fetch
successive pages rather than loading all rows at once.
In `@cloud/quick-silver/cf/qs-encoder.ts`:
- Around line 19-21: The decode method currently calls JSON.stringify(data)
directly which can throw on BigInt or circular refs; update qs-encoder.ts's
decode(data: unknown): Promise<Result<string>> to wrap JSON.stringify in
try/catch, returning Result.Ok(string) on success and Result.Err with a
descriptive error (or error.message) on failure; ensure you catch thrown values
and convert them to a string for the Result.Err so callers of decode can handle
serialization failures safely.
In `@cloud/quick-silver/cf/qs-room-evento.ts`:
- Around line 75-79: The try/catch around storeWs.send currently only logs the
error and leaves the requester hanging; update the catch to send a QSResErr back
to the original requester so they receive immediate failure. In the catch block
for storeWs.send in qs-room-evento.ts, build and send a QSResErr response (using
the same request id/context) to the client-facing socket or response helper used
elsewhere (e.g., the same path that normally sends successful responses),
include the error message and an appropriate error code, and then rethrow or
return to stop further processing; reference storeWs.send and the QSResErr
response type when locating where to add the send.
In `@cloud/quick-silver/cf/qs-send-provider.ts`:
- Around line 16-19: The send method currently calls
this.ws.send(this.sthis.ende.cbor.encodeToUint8(res as QSOpRes)) without
guarding for a closed socket; wrap that call (including the encode step) in a
try/catch inside async send<T>(_ctx: HandleTriggerCtx<ArrayBuffer, unknown,
unknown>, res: unknown): Promise<Result<T>> so synchronous throws from ws.send
are caught and you return Result.Err with the error, otherwise return
Result.Ok(res as T); reference the send method, this.ws.send,
this.sthis.ende.cbor.encodeToUint8, QSOpRes and the Result.Ok/Result.Err paths
when making the change.
In `@cloud/quick-silver/types/index.ts`:
- Around line 35-37: QSQuery is defined as an empty schema (type({})), so the
generated type guard isQSQuery will match any non-null object and provide no
validation; update QSQuery (and its declaration) to include the intended
properties/structure for queries (e.g., optional filter/predicate fields or a
discriminant like { kind: 'query', ... } ) or replace it with a more specific
union/record type so isQSQuery can meaningfully discriminate inputs; locate the
QSQuery declaration and any uses of isQSQuery to add the proper fields/optional
flags or a discriminant property and adjust dependent validation/routing
accordingly.
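The discriminant idea can be illustrated without the arktype dependency; this hand-rolled guard mirrors what a non-empty schema would generate. The field names (`kind`, `filter`, `limit`) are assumptions, not the actual QSQuery schema:

```typescript
// A discriminant field makes the guard meaningful: an empty schema matches
// any non-null object, while this one rejects unrelated inputs.
interface QSQuery {
  readonly kind: "query";
  readonly filter?: string;
  readonly limit?: number;
}

function isQSQuery(v: unknown): v is QSQuery {
  if (typeof v !== "object" || v === null) return false;
  const o = v as Record<string, unknown>;
  return (
    o.kind === "query" &&
    (o.filter === undefined || typeof o.filter === "string") &&
    (o.limit === undefined || typeof o.limit === "number")
  );
}

const hit = isQSQuery({ kind: "query", limit: 10 });
const miss = isQSQuery({ anything: true }); // would pass an empty type({}) guard
```

In arktype itself this corresponds to giving `type({...})` actual keys; which keys belong in the schema is for the PR author to decide.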
In `@core/quick-silver/envelope.ts`:
- Around line 38-42: The QCFile schema uses type.instanceOf(Uint8Array) for
payload which can fail when CBOR/transport yields an ArrayBuffer or other
ArrayBufferView; update the QCFile payload validator to accept ArrayBuffer and
ArrayBufferView (or a union of ArrayBuffer and Uint8Array) and ensure the
decoded value is normalized into a Uint8Array (coerce via new Uint8Array(value)
or similar) before further processing; target the QCFile declaration and its
payload validator (and any deserialization/normalization path that consumes
QCFile) so consumers always get a Uint8Array even if the input was an
ArrayBuffer or other typed-array subclass.
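A normalization helper along the lines the prompt suggests; the function name and exact policy are assumptions, not part of the envelope module:

```typescript
// Accept ArrayBuffer or any ArrayBufferView and always hand consumers a
// Uint8Array, respecting a view's window into its underlying buffer.
function toUint8(value: ArrayBuffer | ArrayBufferView): Uint8Array {
  if (value instanceof Uint8Array) return value;
  if (ArrayBuffer.isView(value)) {
    return new Uint8Array(value.buffer, value.byteOffset, value.byteLength);
  }
  return new Uint8Array(value);
}

const fromBuffer = toUint8(new ArrayBuffer(4));

const buf = new ArrayBuffer(4);
new Uint8Array(buf).set([1, 2, 3, 4]);
const fromView = toUint8(new DataView(buf, 1, 2)); // bytes [2, 3]
```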
In `@core/quick-silver/fireproof.test.ts`:
- Around line 142-144: The test currently only asserts that something is thrown
when calling db.get("missing"); tighten this to assert the specific
NotFoundError type to prevent regressions: update the assertion on
db.get("missing") to expect a NotFoundError (for example using Jest's
rejects.toThrowError(NotFoundError) or rejects.toBeInstanceOf(NotFoundError)) so
the test verifies db.get(...) throws the correct NotFoundError class rather than
any error.
In `@core/quick-silver/quick-silver.ts`:
- Around line 189-199: Replace the ad-hoc throws in methods changes, allDocs,
allDocuments (and any other stubs like query) with a shared sentinel so runtime
errors include context: add a helper function notImplemented(methodName: string)
or a NotImplementedError class and call it from each stub (e.g., throw
notImplemented("changes")) so stack traces and logs indicate which method is
unimplemented; update all occurrences of new Error("not implemented") in
QuickSilver class to use this helper and include the method name and optional
guidance text.
- Line 109: In get(), the object spread of envelope.data currently follows the
explicit _id which allows a _id inside data to overwrite envelope._.id; change
the return construction so the spread of (envelope.data as T) comes first and
the explicit _id: envelope._.id is placed after it (keeping _: { ...envelope._,
fileRefs: docFile } as-is) so the explicit envelope id is authoritative; also
ensure this aligns with existing bulk()/stripper behavior but protects against
future external data sources providing a conflicting _id.
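The spread-order hazard is easy to reproduce in isolation; the values below are hypothetical, not taken from the codebase:

```typescript
// If stored data smuggles in its own _id, the explicit field must win.
const envelopeId = "doc-1";
const data: Record<string, unknown> = { title: "hello", _id: "stale-id" };

// Current shape — the spread comes last, so data._id overwrites the envelope id:
const wrong = { _id: envelopeId, ...data };

// Suggested shape — spread first, explicit _id last, so the envelope id is authoritative:
const right = { ...data, _id: envelopeId };
```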
const connections = new KeyedResolvOnce<QSApiImpl>();

export function QSApi(opts: QSApiOpts): Promise<QSApiImpl> {
  return connections.get(opts.url).once(() => new QSApiImpl(opts));
}
Connection cache keyed only by URL ignores db and auth — second caller with a different db gets the wrong instance.
KeyedResolvOnce is keyed solely on opts.url. If two callers use the same WebSocket endpoint but different db or auth values, the second caller silently receives the first caller's QSApiImpl (with the wrong db/auth). This is a correctness bug for any multi-database scenario.
Consider keying on url + db (or the full opts fingerprint), or restructuring so that db/auth are per-request rather than per-instance.
Quick fix — composite key
export function QSApi(opts: QSApiOpts): Promise<QSApiImpl> {
- return connections.get(opts.url).once(() => new QSApiImpl(opts));
+ const cacheKey = `${opts.url}::${opts.db}`;
+ return connections.get(cacheKey).once(() => new QSApiImpl(opts));
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@cloud/quick-silver/api/qs-api.ts` around lines 50 - 54, The connection cache
uses KeyedResolvOnce keyed only by opts.url so QSApi(QSApiOpts) can return an
instance with the wrong db/auth; change the key used in connections.get(...) to
a composite fingerprint of the full options (e.g., include opts.url, opts.db and
a stable representation of opts.auth) so each distinct (url, db, auth) tuple
maps to its own QSApiImpl instance; update the call site in QSApi (the
connections.get(...) invocation) to compute and use that composite key (or
serialize the relevant fields of QSApiOpts) rather than only opts.url.
close(): Promise<void> {
  return this.connect()
    .then((ws) => {
      ws.close();
    })
    .catch(() => {
      /* ignore */
    });
}
close() doesn't evict from the connections cache — subsequent QSApi() calls return a dead instance.
After close(), the KeyedResolvOnce still holds the resolved (now closed) QSApiImpl. Any future QSApi(opts) call with the same key will return this stale instance whose WebSocket is already closed.
Either clear the cache entry in close(), or add reconnect logic in connect.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@cloud/quick-silver/api/qs-api.ts` around lines 290 - 298, The close() method
currently only closes the WebSocket but leaves the resolved QSApiImpl cached in
KeyedResolvOnce, causing future QSApi() calls to return a dead instance; update
close() to also evict the corresponding entry from the connections cache (the
shared Map/KeyedResolvOnce that stores resolved QSApiImpls) so subsequent
QSApi(opts) will recreate a fresh QSApiImpl, or alternatively modify connect()
to detect a closed ws (check ws.readyState) and recreate/replace the QSApiImpl
in the connections cache; target the close() implementation in QSApiImpl.close()
and the connections/KeyedResolvOnce cache handling to remove or replace the
cached value when closing/reconnecting.
sql.exec(
  `INSERT INTO docs (id, cid, data) VALUES (?, ?, ?)
   ON CONFLICT(id) DO UPDATE SET cid = excluded.cid, data = excluded.data`,
  req.key,
  req.key,
  req.data,
);
PUT handler sets cid to req.key — both id and cid get the same value.
The SQL insert uses req.key for both the id and cid columns (arguments: req.key, req.key, req.data). The cid column is presumably meant to hold a content hash (content identifier), not a copy of the document key. The query handler (line 89) surfaces both id and cid as distinct fields in QSQueryRowMeta, so downstream consumers will see id === cid for every row.
Either the QSReqPut type should carry a separate cid field, or the handler should compute the CID from req.data.
Illustrative fix — if CID should be computed
handle: async (ctx) => {
const req = ctx.validated as QSReqPut;
const sql = handlerSql(ctx);
+ // Compute CID from data (requires importing hashBlobAsync or equivalent)
+ const cid = await hashBlobAsync(req.data);
sql.exec(
`INSERT INTO docs (id, cid, data) VALUES (?, ?, ?)
ON CONFLICT(id) DO UPDATE SET cid = excluded.cid, data = excluded.data`,
req.key,
- req.key,
+ cid,
req.data,
);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@cloud/quick-silver/cf/qs-db-evento.ts` around lines 32 - 38, The PUT handler
currently calls sql.exec with arguments (req.key, req.key, req.data) which
writes the same value into both id and cid; change the handler so cid is correct
by either (A) adding a cid property to the request type (QSReqPut) and using
req.cid as the second argument to sql.exec, or (B) computing the content
identifier from req.data (e.g., hash the serialized req.data into a CID) and
passing that computed cid as the second argument; update the insert call in the
PUT handler (the sql.exec invocation) and any related types so QSQueryRowMeta
continues to surface distinct id and cid values.
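As a self-contained illustration of a content-derived identifier: real code would mint a multiformats CID (as CIDStorageService does); a sha-256 hex digest here is just a stand-in for "hash of the payload, not a copy of the key":

```typescript
import { createHash } from "node:crypto";

// Derive an identifier from the content itself, so identical payloads collide
// and distinct payloads do not — unlike echoing req.key into the cid column.
function contentId(data: Uint8Array): string {
  return createHash("sha256").update(data).digest("hex");
}

const a = contentId(new TextEncoder().encode("hello"));
const b = contentId(new TextEncoder().encode("hello"));
const c = contentId(new TextEncoder().encode("world"));
```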
{
  type: EventoType.WildCard,
  hash: "qs-unknown",
  handle: async (ctx) => {
    await ctx.send.send(ctx, {
      type: "QSResErr",
      tid: "unknown",
      arg: 0,
      error: `unknown request: ${JSON.stringify(ctx.enRequest)}`,
    } satisfies QSResErr);
    return Result.Ok(EventoResult.Continue);
  },
},
{
  type: EventoType.Error,
  hash: "qs-error",
  handle: async (ctx) => {
    await ctx.send.send(ctx, {
      type: "QSResErr",
      tid: "unknown",
      arg: 0,
      error: ctx.error?.message ?? "internal error",
    } satisfies QSResErr);
    return Result.Ok(EventoResult.Continue);
  },
},
Wildcard and error handlers lose the original request's tid and arg.
Both the unknown-request handler (line 102) and the error handler (line 115) hardcode tid: "unknown" and arg: 0. If the original request contained a valid tid/arg, the client won't be able to correlate the error response back to its pending request, potentially causing the client to hang waiting for a response that will never arrive with the expected tid.
Consider extracting tid/arg from ctx.enRequest when available:
Proposed improvement
hash: "qs-unknown",
handle: async (ctx) => {
+ const req = ctx.enRequest as { tid?: string; arg?: number };
await ctx.send.send(ctx, {
type: "QSResErr",
- tid: "unknown",
- arg: 0,
+ tid: req?.tid ?? "unknown",
+ arg: req?.arg ?? 0,
error: `unknown request: ${JSON.stringify(ctx.enRequest)}`,
} satisfies QSResErr);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@cloud/quick-silver/cf/qs-db-evento.ts` around lines 96 - 121, The wildcard
(EventoType.WildCard) and error (EventoType.Error) handlers currently hardcode
tid:"unknown" and arg:0; update both handlers to extract tid and arg from
ctx.enRequest (e.g., const { tid, arg } = ctx.enRequest ?? {}) and use those
values when building the QSResErr sent via ctx.send.send so the client can
correlate responses; keep the fallback values (e.g., "unknown" and 0) only if
ctx.enRequest.tid/arg are missing, and ensure the object still satisfies
QSResErr.
webSocketClose(ws: WebSocket, code: number, reason: string): void {
  ws.close(code, reason);
}
webSocketClose doesn't normalize code 1006 — will throw on abnormal drops.
Code 1006 is a reserved WebSocket close code that must not be sent in a close() frame. The CF runtime synthesizes it for abnormal disconnects. QSRoom already handles this by mapping 1006 → 1000 (line 99 in qs-room.ts), but QSDBStore passes the raw code through, which will throw.
Align with QSRoom's handling
webSocketClose(ws: WebSocket, code: number, reason: string): void {
- ws.close(code, reason);
+ const safeCode = code === 1006 ? 1000 : code;
+ ws.close(safeCode, reason);
}
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
webSocketClose(ws: WebSocket, code: number, reason: string): void {
  const safeCode = code === 1006 ? 1000 : code;
  ws.close(safeCode, reason);
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@cloud/quick-silver/cf/qs-db-store.ts` around lines 47 - 49, The
webSocketClose method passes the raw close code to ws.close which will throw if
code is 1006 (reserved); normalize 1006 to 1000 before calling ws.close. Update
webSocketClose(ws: WebSocket, code: number, reason: string) to map if (code ===
1006) code = 1000 (same behavior as QSRoom's mapping) and then call
ws.close(code, reason) so abnormal disconnects don't cause an invalid close
frame to be sent.
export class QSRoom extends DurableObject<Env> implements QSRoomDO {
  private readonly sthis;
  private readonly stores = new LRUMap<string, WebSocket>({ maxEntries: 64 });
🧩 Analysis chain
🌐 Web query:
@adviser/cement LRUMap eviction callback API documentation
💡 Result:
In @adviser/cement, the “eviction callback” for LRUMap is exposed as a delete event listener:
- Register eviction/deletion callback: `LRUMap.prototype.onDelete(fn)`, where `fn` is an `LRUMapFn` and the method returns an unregister function (`UnregFn`). The docs show it as the way to "listen for evictions". [1]
- Callback signature: `type LRUMapFn<K, T> = (value: T, key: K, meta: LRUCtx<K, T>) => void`. [2]
- Eviction policy hook (optional): pass `evict` in the constructor params (or via `setParam`) to control when eviction happens: `evict: (param: LRUParam<T, K>, newItem: T, map: LRUMap<K, T>) => boolean`. [2]
- Refresh/eviction execution (optional): pass `refresh` to control how entries are removed when params change / capacity is exceeded; the default refresh deletes least-recently-accessed keys until `maxEntries` is satisfied. [2]
Example (from the docs):
const cache = new LRUMap<string, number>({ maxEntries: 100 });
const unregister = cache.onDelete((key, value) => {
  console.log(`Evicted: ${key} = ${value}`);
});
[1]
Sources: JSR API docs for LRUMap / onDelete [1] and the published module source (src/lru-map-set.ts) defining MutableLRUParam, LRUMapFn, and eviction hooks [2].
References:
[1] (jsr.io)
[2] (jsr.io)
Citations:
- 1: https://jsr.io/%40adviser/cement/doc/~/LRUMap
- 2: https://jsr.io/%40adviser/cement/0.5.23/src/lru-map-set.ts
🏁 Script executed:
# Find the file and examine the code context
find . -name "qs-room.ts" -type f
Repository: fireproof-storage/fireproof
Length of output: 105
🏁 Script executed:
# Examine the qs-room.ts file to see the full context around line 20
head -50 cloud/quick-silver/cf/qs-room.ts
Repository: fireproof-storage/fireproof
Length of output: 2138
🏁 Script executed:
# Look for any cleanup or onDelete usage related to stores
rg -A 5 -B 5 "stores" cloud/quick-silver/cf/qs-room.ts
Repository: fireproof-storage/fireproof
Length of output: 1391
🏁 Script executed:
# Check if onDelete or any eviction cleanup is used anywhere in the codebase for WebSocket stores
rg "onDelete|\.close\(\)" cloud/quick-silver/cf/qs-room.ts
Repository: fireproof-storage/fireproof
Length of output: 53
Register an onDelete callback to close evicted WebSocket connections.
The LRUMap evicts entries without calling close() on the corresponding WebSocket, leaking connections as the cache fills. The @adviser/cement LRUMap supports the onDelete() method to handle evictions:
this.stores.onDelete((ws) => {
try { ws.close(); } catch { /* already closed */ }
});
Register this callback during construction or initialization to ensure evicted WebSockets are properly closed.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@cloud/quick-silver/cf/qs-room.ts` at line 20, The LRUMap 'stores' currently
evicts entries without closing their WebSocket objects, leaking connections;
register an eviction handler on the LRUMap instance by calling its onDelete
callback (on the stores field used in qs-room) and close the evicted WebSocket
inside that callback (wrap close in a try/catch to ignore already-closed errors)
so evicted connections are properly cleaned up.
async getStoreWs(db: string, clientWs: WebSocket): Promise<WebSocket> {
  return (await this.stores.getSet(db, async () => {
    const id = this.env.QS_DB_STORE.idFromName(db);
    const stub = this.env.QS_DB_STORE.get(id);
    const res = await stub.fetch("https://internal/ws", {
      headers: { Upgrade: "websocket" },
    });
    const storeWs = res.webSocket!;
    storeWs.accept();
    storeWs.addEventListener("message", (evt) => {
      const decoded = this.sthis.ende.cbor.decodeUint8<unknown>(new Uint8Array(evt.data as ArrayBuffer));
      if (decoded.isErr()) return;
      const msg = decoded.Ok();
      if (isQSEvtSubscribe(msg)) {
        this.dispatchSubscribeEvent(db, msg);
      } else {
        clientWs.send(evt.data);
      }
    });
    return storeWs;
  }))!;
}
Cached store WebSocket captures the first caller's clientWs — responses for other clients are mis-routed.
getStoreWs caches the store WebSocket per db, but the message event handler at line 90 always sends non-subscribe responses to the clientWs that was passed during the first call. When a second client issues a get/put/query for the same db, the response from the store is routed to the first client, not the requesting one.
This is a correctness bug in any multi-client room scenario. The store connection's message handler needs to route responses back to the correct client, e.g., by maintaining a tid → clientWs map.
Additionally, the ! non-null assertions flagged by ESLint (line 81 and the outer expression at line 94) could be replaced with runtime guards.
Sketch — tid-based routing
+ // Maps pending request tids to the client that issued them
+ private readonly pendingClients = new Map<string, WebSocket>();
async getStoreWs(db: string, clientWs: WebSocket): Promise<WebSocket> {
- return (await this.stores.getSet(db, async () => {
+ const storeWs = await this.stores.getSet(db, async () => {
const id = this.env.QS_DB_STORE.idFromName(db);
const stub = this.env.QS_DB_STORE.get(id);
const res = await stub.fetch("https://internal/ws", {
headers: { Upgrade: "websocket" },
});
- const storeWs = res.webSocket!;
+ const storeWs = res.webSocket;
+ if (!storeWs) throw new Error("store did not return a WebSocket");
storeWs.accept();
storeWs.addEventListener("message", (evt) => {
const decoded = this.sthis.ende.cbor.decodeUint8<unknown>(new Uint8Array(evt.data as ArrayBuffer));
if (decoded.isErr()) return;
const msg = decoded.Ok();
if (isQSEvtSubscribe(msg)) {
this.dispatchSubscribeEvent(db, msg);
} else {
- clientWs.send(evt.data);
+ const tid = (msg as { tid?: string }).tid;
+ const target = tid ? this.pendingClients.get(tid) : undefined;
+ if (target) {
+ try { target.send(evt.data); } catch { /* client gone */ }
+ this.pendingClients.delete(tid!);
+ }
}
});
return storeWs;
- }))!;
+ });
+ return storeWs!;
}
The forward handler in qs-room-evento.ts would also need to register the tid → clientWs mapping before sending.
🧰 Tools
🪛 ESLint
[error] 75-94: Forbidden non-null assertion.
(@typescript-eslint/no-non-null-assertion)
[error] 81-81: Forbidden non-null assertion.
(@typescript-eslint/no-non-null-assertion)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@cloud/quick-silver/cf/qs-room.ts` around lines 74 - 95, getStoreWs currently
installs a single message handler that always uses the first call's clientWs,
causing responses to be routed to the wrong client; change the design to
maintain a per-db store WebSocket plus a tid→clientWs Map (or WeakMap) so the
store message handler (installed in getStoreWs) looks up the transaction id
(tid) from each incoming message and forwards non-subscribe responses to the
correct client WebSocket instead of the captured clientWs; ensure the forward
path in qs-room-evento.ts registers tid→clientWs before sending requests so
replies can be routed, and remove the unsafe non-null assertions in getStoreWs
(e.g., the uses of this.env.QS_DB_STORE.get(...) result and the final `!`
return) by adding runtime guards that log and throw/return on failure.
webSocketClose(ws: WebSocket, code: number, reason: string): void {
  // 1006 is synthesized by the runtime for abnormal drops — not a valid close() code
  const safeCode = code === 1006 ? 1000 : code;
  console.log("[QSRoom] websocket closed:", code, reason);
  this.stores.forEach((storeWs) => {
    try { storeWs.close(safeCode, reason); } catch { /* already closed */ }
  });
  this.stores.clear();
  try { ws.close(safeCode, reason); } catch { /* already closed */ }
}
webSocketClose destroys all store connections when any single client disconnects.
Lines 101-104 iterate and close every cached store WebSocket, then clear the entire store map. In a multi-client room, one client disconnecting kills the store connections for all other clients.
Consider only cleaning up resources associated with the disconnecting client (e.g., their subscriptions and pending tids), rather than tearing down shared store connections.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@cloud/quick-silver/cf/qs-room.ts` around lines 97 - 106, The webSocketClose
function currently closes and clears this.stores for all clients; change it to
only tear down resources tied to the disconnecting client (webSocketClose).
Instead of iterating this.stores and calling storeWs.close for every entry then
this.stores.clear(), remove only the client's subscriptions and pending tids
(e.g., pendingTids, subscriptions or client-specific store refs) and decrement
reference counts for shared store WebSockets; only call storeWs.close when its
reference count reaches zero. Update or add a mapping such as
client→Set(storeId) or a refCount map for this.stores and modify webSocketClose
to iterate that client-specific set to decrement/close accordingly, then remove
the client from the client→stores map and finally close the given ws with
safeCode.
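The per-client refcount bookkeeping described in the prompt above can be sketched independently of the Durable Object runtime. `Closeable` stands in for `WebSocket`; all names here are illustrative:

```typescript
interface Closeable {
  close(): void;
}

// Shared store connections with per-client reference counting: a connection
// is closed only when the last client referencing it disconnects.
class StoreRefs<T extends Closeable> {
  private readonly refs = new Map<string, { conn: T; count: number }>();
  private readonly byClient = new Map<string, Set<string>>();

  acquire(client: string, storeId: string, make: () => T): T {
    let entry = this.refs.get(storeId);
    if (!entry) {
      entry = { conn: make(), count: 0 };
      this.refs.set(storeId, entry);
    }
    entry.count++;
    let owned = this.byClient.get(client);
    if (!owned) this.byClient.set(client, (owned = new Set()));
    owned.add(storeId);
    return entry.conn;
  }

  // Called from webSocketClose: release only this client's stores.
  releaseClient(client: string): void {
    for (const storeId of this.byClient.get(client) ?? []) {
      const entry = this.refs.get(storeId);
      if (!entry) continue;
      if (--entry.count === 0) {
        try { entry.conn.close(); } catch { /* already closed */ }
        this.refs.delete(storeId);
      }
    }
    this.byClient.delete(client);
  }

  openCount(): number {
    return this.refs.size;
  }
}

const closed: string[] = [];
const pool = new StoreRefs<Closeable>();
pool.acquire("clientA", "db1", () => ({ close: () => closed.push("db1") }));
pool.acquire("clientB", "db1", () => ({ close: () => closed.push("db1-again") }));
pool.releaseClient("clientA"); // db1 still referenced by clientB
const stillOpen = pool.openCount();
pool.releaseClient("clientB"); // last ref gone — connection closed
```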
Actionable comments posted: 14
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
cloud/3rd-party/src/App.tsx (1)
40-44:⚠️ Potential issue | 🟡 Minor
`attach` state/token UI references are now inert.
With `databaseFactory` replacing the `attach: toCloud(…)` config, `attach.state` and `attach.ctx.tokenAndClaims` will likely return a default/noop state. The rendered token string and error message on lines 42–43 will always be empty. If this is intentional scaffolding for later, a brief comment would help.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cloud/3rd-party/src/App.tsx` around lines 40 - 44, The UI is rendering inert attach state/token fields because attach (previously from toCloud) is now replaced by databaseFactory and will return a noop/default; update the UI to either remove these stale references or wire them to the new API: replace uses of attach.state and attach.ctx.tokenAndClaims with the new databaseFactory-derived status/token accessor (or a proper hook that exposes connection state), or if you intend to keep placeholders, add a clear comment above the JSX noting this is deliberate scaffolding; locate the JSX that references attach (attach.state, attach.ctx.tokenAndClaims.tokenAndClaims.token) and change it to use the new connection/status functions or remove/comment the block accordingly.
🧹 Nitpick comments (14)
core/quick-silver/cid-storage/cid-storage.test.ts (3)
65-65: Unnecessary backend instantiation just for `name`
`make().name` creates a new backend instance (potentially opening a new Dexie/OPFS handle) solely to read the `name` property for constructing a fake URL. Extract the name as a constant or use a static property instead.
Suggested fix
  it("get returns found=false for unknown url", async () => {
-   const result = await svc.get(`${make().name}://?cid=badfakecid123`);
+   const result = await svc.get("dexie://?cid=badfakecid123");
Or better, store the backend name alongside `make` in the `backends` array and reference it directly.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@core/quick-silver/cid-storage/cid-storage.test.ts` at line 65, Avoid instantiating a backend just to read its name: stop calling make().name inside the test when constructing the fake URL; instead get the backend name from a constant or from the existing backends array (the same entry that provides make) and use that value when building the URL passed to svc.get. Update the test so it references the backend name (from the stored constant or backends[index].name) rather than creating a new instance via make().
70-82: Tests 4 and 5 are functionally identical
Both "storing same bytes twice yields the same cid" and "cid is deterministic — same content always yields same cid" store the same content twice and assert CID equality. They test the exact same property. Consider removing one or differentiating them — e.g., test determinism across separate service instances to add meaningful coverage.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@core/quick-silver/cid-storage/cid-storage.test.ts` around lines 70 - 82, These two tests in cid-storage.test.ts are duplicates; update the second test (currently titled "cid is deterministic — same content always yields same cid") to instead verify determinism across separate service instances by creating a new svc (or a fresh instance of the storage service), storing the same bytes via svc.store and newSvc.store (use streamFrom(bytes) for each), and assert r1.Ok().cid equals r2.Ok().cid; alternatively remove one of the two tests if you prefer to keep only one idempotency check. Ensure you reference svc.store and streamFrom(bytes) when locating the code to change.
33-34: Service created at module/describe scope — shared mutable state across tests
`CIDStorageService` is instantiated once per `describe` block (line 34) and shared across all tests. Since backends persist data (Dexie to IndexedDB, OPFS to disk), earlier tests' stored data is visible to later tests. This currently works because each test uses unique content strings, but it's fragile — if a future test expects an empty store, it will break. Consider adding cleanup in `afterEach` or `afterAll`, or using unique DB names per test.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@core/quick-silver/cid-storage/cid-storage.test.ts` around lines 33 - 34, CIDStorageService is being created once at the describe scope (describe.each/backends) as svc = CIDStorageService({ backends: [make()] }) causing shared mutable state across tests; change to create a fresh service per test (move instantiation into a beforeEach or inside each test using make()) or ensure storage is cleared between tests by adding an afterEach/afterAll cleanup that clears the backend (call the backend teardown/clear methods on svc or the object returned by make()); reference CIDStorageService, backends, make(), and svc to locate where to adjust instantiation or add cleanup.core/quick-silver/package.json (2)
1-52: Missing `"types"` and `"exports"` fields for TypeScript consumers
The package sets `"main": "./index.js"` but lacks a `"types"` entry (e.g., `"types": "./index.d.ts"`) and a modern `"exports"` map. Downstream TypeScript consumers in this monorepo (and external consumers if ever published) may not resolve type declarations correctly without these.
Suggested additions
"main": "./index.js"but lacks a"types"entry (e.g.,"types": "./index.d.ts") and a modern"exports"map. Downstream TypeScript consumers in this monorepo (and external consumers if ever published) may not resolve type declarations correctly without these.Suggested additions
"type": "module", "main": "./index.js", + "types": "./index.d.ts", "scripts": {🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@core/quick-silver/package.json` around lines 1 - 52, Add a "types" entry pointing to the package's type declarations (e.g., "types": "./index.d.ts") and add a modern "exports" map alongside the existing "main" to explicitly expose entrypoints for consumers (e.g., export the default entry for "." with both import and require or node/deno subpaths as needed). Update package.json keys "types", "exports" while keeping "main": "./index.js" to ensure TypeScript and modern ESM consumers resolve the package and its declarations correctly; ensure the exports map references the same "./index.js" and the types file path used in "types".
39-51: Inconsistent workspace version specifiersSome workspace dependencies use
workspace:*(e.g.,core-gateways-indexeddb,core-types-base) while others useworkspace:0.0.0(e.g.,core-runtime,core-cli). If this is intentional (e.g.,workspace:0.0.0pins to a specific published version whileworkspace:*always resolves locally), a brief comment in the repo or contributing guide would help future contributors understand the convention. Otherwise, consider standardizing on one form.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@core/quick-silver/package.json` around lines 39 - 51, The package.json has inconsistent workspace specifiers: some deps in "dependencies" use "workspace:*" (e.g., "@fireproof/core-gateways-indexeddb", "@fireproof/core-types-base") while others use "workspace:0.0.0" (e.g., "@fireproof/core-runtime" and in "devDependencies" "@fireproof/core-cli"); decide and standardize them (either change all workspace pins to "workspace:*" for local resolution or to a specific "workspace:VERSION" for pinned behavior) and update the entries in the dependencies and devDependencies sections of package.json accordingly, and if keeping mixed behavior add a short comment in the repo contributing docs explaining the convention.core/quick-silver/cid-storage/test-noble.html (1)
20-24: Unpinned CDN imports
The `esm.sh` imports have no version pins. If `@noble/hashes` or `multiformats` release a breaking change, this test page could silently break. Consider pinning versions (e.g., `https://esm.sh/@noble/hashes@1.8.0/sha2`) to match the versions declared in `package.json`.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@core/quick-silver/cid-storage/test-noble.html` around lines 20 - 24, The imports using esm.sh for sha256, CID, raw, and createDigest are unpinned and may break on upstream changes; update the module URLs to include explicit version pins matching package.json (for example change "https://esm.sh/@noble/hashes/sha2" to a versioned URL like "https://esm.sh/@noble/hashes@<version>/sha2" and likewise pin "multiformats@<version>/cid", "multiformats@<version>/codecs/raw", and "multiformats@<version>/hashes/digest") so the sha256, CID, raw, and create (createDigest) imports resolve to fixed versions.core/types/base/types.ts (1)
219-219: `databaseFactory` returns `Database` synchronously — verify this is intentional
The factory signature `(name: string, opts: ConfigOpts) => Database` is synchronous. If the factory might need async initialization (e.g., opening IndexedDB), this works only if `Database` implementations defer initialization to `ready()`. This appears consistent with the QuickSilver design (PR notes `ready = Lazy(...)` memoized start), but worth confirming the contract is clear for future implementors.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@core/types/base/types.ts` at line 219, The current factory type readonly databaseFactory?: (name: string, opts: ConfigOpts) => Database is synchronous but may force implementations to hide async init inside Database.ready; decide and make the contract explicit: either change the signature to readonly databaseFactory?: (name: string, opts: ConfigOpts) => Promise<Database> to support async construction, or keep the sync signature but add documentation/comments on the Database interface (and QuickSilver/ready memoization) stating implementations must defer async initialization to Database.ready; update types and comments accordingly and adjust any call sites that construct Databases to match the chosen contract.core/quick-silver/cid-storage/opfs.ts (1)
42-42: Suspicious double-cast `as unknown as ArrayBufferView<ArrayBuffer>`
`Uint8Array` is a valid argument to `FileSystemWritableFileStream.write()` per the spec (accepts `BufferSource`). If this cast is needed only to satisfy TypeScript, a narrower assertion like `as BufferSource` would be safer and communicate intent more clearly.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@core/quick-silver/cid-storage/opfs.ts` at line 42, The double-cast on the write call (await writable.write(value as unknown as ArrayBufferView<ArrayBuffer>);) is overly broad; replace it with a single, accurate type or remove it if value is already a Uint8Array. Specifically, in the code that calls writable.write(...) (references: writable, write, value) change the assertion to use BufferSource (or no cast if value is already a BufferSource/Uint8Array) so the TypeScript type matches the FileSystemWritableFileStream.write() specification instead of using as unknown as ArrayBufferView<ArrayBuffer>.core/quick-silver/cid-storage/dexie.ts (1)
12-14:Lazysingleton ignores arguments after the first call.
DexieStorageBackend({ name: "other" })after an initialDexieStorageBackend()would silently return the first instance (named"cid-storage"). This is fine for the current single-call usage inCIDStorageService, but could surprise future callers. A brief JSDoc note on this factory's singleton semantics would help.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@core/quick-silver/cid-storage/dexie.ts` around lines 12 - 14, Add a JSDoc comment above the DexieStorageBackend export explaining that this factory wraps Lazy and therefore returns a singleton: the first invocation’s arguments (e.g., name) are the ones used and subsequent calls ignore their arguments and return the same instance; reference the Lazy wrapper, DexieStorageBackend export, and DexieStorageBackendImpl so readers know where this behavior originates and what to expect when calling DexieStorageBackend({ name: ... }).core/quick-silver/cid-storage/service.ts (1)
28-34: Singleton keyed on sorted backend names — note that name collisions across different backend types are possible.If two different backend implementations share the same
name(e.g., both named"cid-storage"), they'd produce the same cache key and the second would silently get the first'sCIDStorageServiceImpl. This is unlikely with the current Dexie/OPFS split but worth a brief comment or using a more unique key (e.g., class name + backend name).🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@core/quick-silver/cid-storage/service.ts` around lines 28 - 34, The cache key for cidStorageServicePerBackend is built only from backend.name which can collide across different backend implementations; update CIDStorageService to derive a more unique key (e.g., include the backend implementation identity) by mapping backends to a stable identifier such as `${backend.constructor.name}:${backend.name}` or another unique property on StorageBackend, then sort and join those identifiers before calling cidStorageServicePerBackend.get; reference CIDStorageService, cidStorageServicePerBackend, StorageBackend and CIDStorageServiceImpl when making the change and/or add a short comment explaining why the combined key is necessary.cloud/3rd-party/src/App.tsx (1)
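A minimal sketch of the combined-key idea, assuming only that each backend exposes a `name`; the `StorageBackendLike` shape and the class names in the usage below are illustrative stand-ins, not the actual Fireproof API:

```typescript
// Illustrative shape — the real StorageBackend interface may differ.
interface StorageBackendLike {
  readonly name: string;
}

// Combine implementation identity with the configured name so two backend
// types that both happen to be named "cid-storage" no longer collide on
// the same cache key.
function backendCacheKey(backends: StorageBackendLike[]): string {
  return backends
    .map((b) => `${b.constructor.name}:${b.name}`)
    .sort()
    .join("|");
}
```

With this key, `CIDStorageService({ backends: [dexie] })` and `CIDStorageService({ backends: [opfs] })` would resolve to distinct cache entries even when both backends share a name.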
4-4: Relative path traversal intocore/is fragile.
"../../../core/quick-silver/quick-silver.js"couples this example app to the monorepo layout. If the app or the module moves, this breaks silently. Consider re-exportingQuickSilverfrom theuse-fireproofor a dedicated package entry-point once it stabilises.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cloud/3rd-party/src/App.tsx` at line 4, The import in App.tsx directly references "../../../core/quick-silver/quick-silver.js" which is brittle; instead update the code to import QuickSilver from a stable package entry (e.g., from the library package like "use-fireproof" or a new dedicated entry such as "use-fireproof/quick-silver") and add a re-export for QuickSilver in that package's public index (export { QuickSilver } from "./core/quick-silver/quick-silver";) so consumers use the stable symbol QuickSilver rather than a relative path; update App.tsx to import { QuickSilver } from "use-fireproof" (or the chosen entry) and ensure package exports are updated accordingly.core/quick-silver/idx-service/service.ts (1)
5-7:serializeKeyround-trip asymmetry for multi-key entries.Single keys serialize as the raw string (
"apple"), but multi-key entries become JSON ('["apple","banana"]'). Deserialization (readingkeysback from the DB) relies on the storedkeysarray, so this doesn't cause correctness issues today. However, ifserializedKeyis ever used to reconstruct the keys array, the inconsistent format would require branching logic. A brief comment noting the intentional optimisation would help future maintainers.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@core/quick-silver/idx-service/service.ts` around lines 5 - 7, serializeKey currently returns a raw string for single-key entries and JSON for multi-key arrays, creating an asymmetric serialized form; add a brief comment above the serializeKey function explaining this is an intentional optimization (raw string for single keys, JSON for multiple) and warning maintainers that any future deserialization or reconstruction of the original keys must handle both formats (single-string and JSON-array) or change the serialization to a uniform format; reference serializeKey so reviewers can locate the function easily.use-fireproof/base/react/use-fireproof.ts (1)
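For reference, the asymmetry described above amounts to something like the following sketch (signature inferred from the review, not copied from the source):

```typescript
// Intentional optimization: a single key is stored as the raw string,
// multiple keys as a JSON array. NOTE: anything reconstructing the
// original keys from this serialized form must branch on both formats.
function serializeKey(keys: string[]): string {
  return keys.length === 1 ? keys[0] : JSON.stringify(keys);
}
```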
31-31: Updating JSDoc to reflect the new parameter shape would help consumers.The doc comment on lines 19–27 still shows only
useFireproof("dbname")signatures. Consider adding an example for theDatabase-object overload.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@use-fireproof/base/react/use-fireproof.ts` at line 31, Update the JSDoc for the useFireproof function to document the new parameter overload that accepts a Database object as well as the existing string name overload: modify the comment above export function useFireproof to include the Database type in the `@param` description and add a short example showing useFireproof({ name: "...", adapter: ... }) (or the actual Database object shape used) alongside the existing useFireproof("dbname") example; reference the useFireproof function signature and the UseFPConfig/Database types so consumers can see both invocation forms.core/quick-silver/quick-silver.ts (1)
99-101:destroy()leaves orphaned CID blobs inCIDStorageService.
IdxService().destroyDb(this.name)drops the index but does not touch the corresponding CID storage. All encoded document and file blobs stored viaCIDStorageServicebecome unreachable orphans.Consider calling the equivalent teardown on
CIDStorageServicehere, or document that CID storage cleanup is out of scope for this implementation.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@core/quick-silver/quick-silver.ts` around lines 99 - 101, The destroy() implementation currently only calls IdxService().destroyDb(this.name) which drops the index but leaves encoded blobs in CIDStorageService as orphans; update destroy() to also invoke the corresponding teardown on CIDStorageService for this.name (e.g., call the CID storage cleanup method on CIDStorageService with this.name) so CID blobs are removed, or if cleanup is intentionally out of scope add a clear comment and public documentation on quick-silver.ts::destroy() stating that CIDStorageService cleanup must be performed separately and how to do it.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@core/quick-silver/cid-storage/cid-storage.test.ts`:
- Around line 1-83: Prettier flagged formatting issues in this test file; run
the formatter to fix them by running prettier --write on the file or applying
your editor's Prettier action, then re-run CI. Ensure the file containing
functions/variables streamFrom, drainStream, backends, and the describe.each
block (using CIDStorageService, DexieStorageBackendImpl, OPFSStorageBackendImpl)
is saved after formatting so the tests pass style checks.
In `@core/quick-silver/cid-storage/opfs.ts`:
- Around line 84-88: The get() implementation currently swallows all errors from
dir.getFileHandle(cid) and treats them as "not found"; change the error handling
so only genuine "not found" errors are converted to Result.Ok(undefined) and all
other exceptions are propagated (or returned as Result.Error). Specifically, in
the get() function replace the broad catch around dir.getFileHandle(cid) with
logic that inspects the thrown error (e.g., check error.name === 'NotFoundError'
or appropriate DOMException/Quota/TypeError distinctions) and only return
Result.Ok(undefined) for that case, while rethrowing or returning Result.Error
for permission/quota/other failures; mirror the same pattern used to fix
commit().
- Around line 50-63: The commit function currently treats any error from
dir.getFileHandle(cid) as "not found" and proceeds to move the temp file; change
this by inspecting the thrown error from dir.getFileHandle(cid) (the inner
try/catch) and only perform (tempHandle as FileSystemFileHandle & { move(name:
string): Promise<void> }).move(cid) when the error clearly indicates a missing
entry (e.g., DOMException.name === "NotFoundError" or other platform-specific
not-found indicator); for any other error (permission, quota, etc.) rethrow it
so the outer catch can handle/report the correct failure. Ensure you reference
the existing symbols dir.getFileHandle, tempHandle.move, tempName, and commit
when applying the change.
- Line 57: Replace the direct call to await (tempHandle as FileSystemFileHandle
& { move(name: string): Promise<void> }).move(cid) with a feature-detection
branch: check if 'move' exists on FileSystemHandle.prototype (or use typeof
(tempHandle as any).move === 'function') and call move when available; otherwise
implement a fallback that opens the tempHandle file, reads its contents (via
getFile() or a readable stream), writes the data to a new file with the target
name (using getFileHandle/createWritable on the parent directory), and then
remove the temp handle (removeEntry/unlink) to emulate move. Update the code
around tempHandle and any uses in opfs.ts to use this conditional so unsupported
runtimes (Node/Deno/Chromium stable) use the copy-then-delete fallback.
In `@core/quick-silver/cid-storage/test-noble.html`:
- Around line 1-131: Prettier reports a formatting failure for
core/quick-silver/cid-storage/test-noble.html; fix it by running the project
formatter (e.g., run prettier --write
core/quick-silver/cid-storage/test-noble.html) and commit the rewritten file so
functions like streamFrom, hashingTap, drain and runTest reflect the formatted
HTML/JS; ensure no other unrelated changes are staged before committing.
- Line 118: The call log("%c✓ CIDs match", "color:green") writes to
out.textContent and misuses the console %c directive; remove that log() call and
either use console.log("%c✓ CIDs match", "color:green") for console output or
append the styled message to the page (e.g. update out.innerHTML or add a <span
class="ok">✓ CIDs match</span>) so the page output is not garbled; update
references around log(...) and the existing span.ok insertion to keep a single,
correctly styled message.
In `@core/quick-silver/idx-service/service.ts`:
- Around line 126-131: The ReadableStream start callback uses
collection.each(...) without handling rejections, so if collection.each throws
the stream never closes; update the ReadableStream<IdxEntry> start
implementation to wrap the await collection.each(...) in try/catch/finally,
calling ctrl.enqueue for entries as before, and on error call ctrl.error(err)
(or at minimum ctrl.close() if you prefer to surface a closed stream) so
consumers won't hang; reference the ReadableStream start, collection.each,
ctrl.enqueue and ctrl.close symbols when applying the change.
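The shape of the suggested fix, sketched with an assumed Dexie-style `each` callback (the `entriesToStream` name is illustrative):

```typescript
// Wrap the producer so the stream always terminates: close on success,
// error on failure — consumers awaiting reads will never hang.
function entriesToStream<T>(collection: { each(cb: (e: T) => void): Promise<void> }): ReadableStream<T> {
  return new ReadableStream<T>({
    async start(ctrl) {
      try {
        await collection.each((e) => ctrl.enqueue(e));
        ctrl.close();
      } catch (err) {
        ctrl.error(err);
      }
    },
  });
}
```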
In `@core/quick-silver/quick-silver.ts`:
- Around line 288-343: Secondary indexes are only built when empty (hasEntries),
so subsequent writes/deletes never update them; make query rebuild when an index
is stale by adding per-index staleness tracking. Add a dirty flag map keyed by
idxName (e.g., QuickSilver._idxDirty or similar), set that flag to true inside
bulk/put/del implementations whenever they write or remove docs, then in the
query logic shown (where hasEntries is computed from IdxService().query and
idxName is used) check the dirty flag and force the rebuild when dirty; after
finishing the rebuild (the loop using primaryReader and IdxService().addToIdx)
clear the dirty flag. Ensure the flag is checked/cleared atomically or under the
same lock used for building indexes to avoid races.
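One possible shape for the per-index staleness tracking — all names here are hypothetical, and `QuickSilver` may organise this differently:

```typescript
// Writers call markDirty(idxName) on every put/del; the query path asks
// needsRebuild(idxName, hasEntries) and calls clear(idxName) once rebuilt.
class IdxDirtyTracker {
  private readonly dirty = new Map<string, boolean>();

  markDirty(idxName: string): void {
    this.dirty.set(idxName, true);
  }

  // Rebuild when the index has never been built OR a write invalidated it.
  needsRebuild(idxName: string, hasEntries: boolean): boolean {
    return !hasEntries || this.dirty.get(idxName) === true;
  }

  clear(idxName: string): void {
    this.dirty.set(idxName, false);
  }
}
```

As the review notes, check-and-clear should happen under the same lock as the rebuild itself to avoid races with concurrent writers.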
- Line 1: The file fails Prettier formatting; run Prettier on
core/quick-silver/quick-silver.ts (e.g., prettier --write
core/quick-silver/quick-silver.ts), stage and commit the reformatted changes so
CI passes, and ensure your editor/IDE respects project Prettier settings (check
.prettierrc) so imports like the top-level "import type {" and the rest of
quick-silver.ts are consistently formatted.
- Around line 196-199: The synchronous call
this.sthis.ende.cbor.encodeToUint8(data) can throw and currently escapes the
bulk() loop; wrap the encode step in a try/catch inside the loop (before calling
CIDStorageService().store) and on catch handle the error the same way as other
per-document failures (e.g., log or record the failure and continue), then pass
the safely-obtained uint8array to uint8array2stream and call
CIDStorageService().store(uint8array2stream(encoded)). Ensure you reference the
same symbols (this.sthis.ende.cbor.encodeToUint8, uint8array2stream,
CIDStorageService(), and the enclosing bulk loop) so the encode exception
doesn't reject the whole bulk() call.
- Around line 290-297: The partial stream read uses checkReader.read() to detect
one item but only calls checkReader.releaseLock(), which can leave an underlying
cursor/transaction open; update the logic around rCheck.Ok().getReader(),
checkReader.read(), and hasEntries so that if a read returns an item (hasEntries
true) you call await checkReader.cancel() before releasing the lock (keeping
releaseLock() in the finally block), and ensure any errors from cancel() are
handled (e.g., try/catch around cancel()) so the reader is properly closed even
when a single item was consumed.
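A sketch of the probe-then-cancel pattern (the function name is illustrative):

```typescript
// Read at most one item to answer "any entries?", then cancel the stream
// so the underlying cursor/transaction is closed — releaseLock() alone
// only detaches the reader and can leave the source open.
async function streamHasEntries<T>(stream: ReadableStream<T>): Promise<boolean> {
  const reader = stream.getReader();
  try {
    const { done } = await reader.read();
    if (!done) {
      try {
        await reader.cancel();
      } catch {
        // cancellation failures are non-fatal for this probe
      }
    }
    return !done;
  } finally {
    reader.releaseLock();
  }
}
```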
In `@core/quick-silver/SessionLog.md`:
- Line 14: The README entry is outdated: replace the stale type names and guards
referenced in SessionLog.md with the current exports from envelope.ts — change
`QCDoc`, `QCFile`, `QCEnvelope` and `isQCDoc/isQCFile/isQCEnvelope` to
`QSDocMeta`, `QSFileMeta` (and remove or clarify `QCEnvelope` if not exported)
and update guards to `isQSDocMeta` and `isQSFileMeta`; alternatively add a short
note that the log was written before the rename and list both old and new names
for clarity so references align with the actual exports (`QSDocMeta`,
`QSFileMeta`, `isQSDocMeta`, `isQSFileMeta`).
In `@use-fireproof/base/react/use-fireproof.ts`:
- Around line 34-39: The downstream hooks (useDocument, useLiveQuery, etc.)
currently depend on [database.name, strConfig] and can miss changes when the
database reference changes but the name is identical; update their dependency
arrays to include the memoized database reference (the local variable database
produced by useMemo) or an identity token derived from it so closures see the
new instance; locate the useMemo that defines database and then add database (or
databaseRef/id) into the dependency lists of the downstream hooks to ensure they
recompute when the Database object identity changes.
---
Outside diff comments:
In `@cloud/3rd-party/src/App.tsx`:
- Around line 40-44: The UI is rendering inert attach state/token fields because
attach (previously from toCloud) is now replaced by databaseFactory and will
return a noop/default; update the UI to either remove these stale references or
wire them to the new API: replace uses of attach.state and
attach.ctx.tokenAndClaims with the new databaseFactory-derived status/token
accessor (or a proper hook that exposes connection state), or if you intend to
keep placeholders, add a clear comment above the JSX noting this is deliberate
scaffolding; locate the JSX that references attach (attach.state,
attach.ctx.tokenAndClaims.tokenAndClaims.token) and change it to use the new
connection/status functions or remove/comment the block accordingly.
---
Duplicate comments:
In `@core/quick-silver/SessionLog.md`:
- Around line 1-42: The file core/quick-silver/SessionLog.md is failing Prettier
formatting in CI; run the project's Prettier config (e.g., npx prettier --write)
or your editor's format command on SessionLog.md to apply the configured rules
(line wrapping, trailing whitespace, markdown rules), then commit the updated
file so CI passes; ensure you don't change content semantics (headings, bullet
lists) while fixing only formatting.
---
Nitpick comments:
In `@cloud/3rd-party/src/App.tsx`:
- Line 4: The import in App.tsx directly references
"../../../core/quick-silver/quick-silver.js" which is brittle; instead update
the code to import QuickSilver from a stable package entry (e.g., from the
library package like "use-fireproof" or a new dedicated entry such as
"use-fireproof/quick-silver") and add a re-export for QuickSilver in that
package's public index (export { QuickSilver } from
"./core/quick-silver/quick-silver";) so consumers use the stable symbol
QuickSilver rather than a relative path; update App.tsx to import { QuickSilver
} from "use-fireproof" (or the chosen entry) and ensure package exports are
updated accordingly.
In `@core/quick-silver/cid-storage/cid-storage.test.ts`:
- Line 65: Avoid instantiating a backend just to read its name: stop calling
make().name inside the test when constructing the fake URL; instead get the
backend name from a constant or from the existing backends array (the same entry
that provides make) and use that value when building the URL passed to svc.get.
Update the test so it references the backend name (from the stored constant or
backends[index].name) rather than creating a new instance via make().
- Around line 70-82: These two tests in cid-storage.test.ts are duplicates;
update the second test (currently titled "cid is deterministic — same content
always yields same cid") to instead verify determinism across separate service
instances by creating a new svc (or a fresh instance of the storage service),
storing the same bytes via svc.store and newSvc.store (use streamFrom(bytes) for
each), and assert r1.Ok().cid equals r2.Ok().cid; alternatively remove one of
the two tests if you prefer to keep only one idempotency check. Ensure you
reference svc.store and streamFrom(bytes) when locating the code to change.
- Around line 33-34: CIDStorageService is being created once at the describe
scope (describe.each/backends) as svc = CIDStorageService({ backends: [make()]
}) causing shared mutable state across tests; change to create a fresh service
per test (move instantiation into a beforeEach or inside each test using make())
or ensure storage is cleared between tests by adding an afterEach/afterAll
cleanup that clears the backend (call the backend teardown/clear methods on svc
or the object returned by make()); reference CIDStorageService, backends,
make(), and svc to locate where to adjust instantiation or add cleanup.
In `@core/quick-silver/cid-storage/dexie.ts`:
- Around line 12-14: Add a JSDoc comment above the DexieStorageBackend export
explaining that this factory wraps Lazy and therefore returns a singleton: the
first invocation’s arguments (e.g., name) are the ones used and subsequent calls
ignore their arguments and return the same instance; reference the Lazy wrapper,
DexieStorageBackend export, and DexieStorageBackendImpl so readers know where
this behavior originates and what to expect when calling DexieStorageBackend({
name: ... }).
In `@core/quick-silver/cid-storage/opfs.ts`:
- Line 42: The double-cast on the write call (await writable.write(value as
unknown as ArrayBufferView<ArrayBuffer>);) is overly broad; replace it with a
single, accurate type or remove it if value is already a Uint8Array.
Specifically, in the code that calls writable.write(...) (references: writable,
write, value) change the assertion to use BufferSource (or no cast if value is
already a BufferSource/Uint8Array) so the TypeScript type matches the
FileSystemWritableFileStream.write() specification instead of using as unknown
as ArrayBufferView<ArrayBuffer>.
In `@core/quick-silver/cid-storage/service.ts`:
- Around line 28-34: The cache key for cidStorageServicePerBackend is built only
from backend.name which can collide across different backend implementations;
update CIDStorageService to derive a more unique key (e.g., include the backend
implementation identity) by mapping backends to a stable identifier such as
`${backend.constructor.name}:${backend.name}` or another unique property on
StorageBackend, then sort and join those identifiers before calling
cidStorageServicePerBackend.get; reference CIDStorageService,
cidStorageServicePerBackend, StorageBackend and CIDStorageServiceImpl when
making the change and/or add a short comment explaining why the combined key is
necessary.
In `@core/quick-silver/cid-storage/test-noble.html`:
- Around line 20-24: The imports using esm.sh for sha256, CID, raw, and
createDigest are unpinned and may break on upstream changes; update the module
URLs to include explicit version pins matching package.json (for example change
"https://esm.sh/@noble/hashes/sha2" to a versioned URL like
"https://esm.sh/@noble/hashes@<version>/sha2" and likewise pin
"multiformats@<version>/cid", "multiformats@<version>/codecs/raw", and
"multiformats@<version>/hashes/digest") so the sha256, CID, raw, and create
(createDigest) imports resolve to fixed versions.
In `@core/quick-silver/idx-service/service.ts`:
- Around line 5-7: serializeKey currently returns a raw string for single-key
entries and JSON for multi-key arrays, creating an asymmetric serialized form;
add a brief comment above the serializeKey function explaining this is an
intentional optimization (raw string for single keys, JSON for multiple) and
warning maintainers that any future deserialization or reconstruction of the
original keys must handle both formats (single-string and JSON-array) or change
the serialization to a uniform format; reference serializeKey so reviewers can
locate the function easily.
In `@core/quick-silver/package.json`:
- Around line 1-52: Add a "types" entry pointing to the package's type
declarations (e.g., "types": "./index.d.ts") and add a modern "exports" map
alongside the existing "main" to explicitly expose entrypoints for consumers
(e.g., export the default entry for "." with both import and require or
node/deno subpaths as needed). Update package.json keys "types", "exports" while
keeping "main": "./index.js" to ensure TypeScript and modern ESM consumers
resolve the package and its declarations correctly; ensure the exports map
references the same "./index.js" and the types file path used in "types".
- Around line 39-51: The package.json has inconsistent workspace specifiers:
some deps in "dependencies" use "workspace:*" (e.g.,
"@fireproof/core-gateways-indexeddb", "@fireproof/core-types-base") while others
use "workspace:0.0.0" (e.g., "@fireproof/core-runtime" and in "devDependencies"
"@fireproof/core-cli"); decide and standardize them (either change all workspace
pins to "workspace:*" for local resolution or to a specific "workspace:VERSION"
for pinned behavior) and update the entries in the dependencies and
devDependencies sections of package.json accordingly, and if keeping mixed
behavior add a short comment in the repo contributing docs explaining the
convention.
In `@core/quick-silver/quick-silver.ts`:
- Around line 99-101: The destroy() implementation currently only calls
IdxService().destroyDb(this.name) which drops the index but leaves encoded blobs
in CIDStorageService as orphans; update destroy() to also invoke the
corresponding teardown on CIDStorageService for this.name (e.g., call the CID
storage cleanup method on CIDStorageService with this.name) so CID blobs are
removed, or if cleanup is intentionally out of scope add a clear comment and
public documentation on quick-silver.ts::destroy() stating that
CIDStorageService cleanup must be performed separately and how to do it.
In `@core/types/base/types.ts`:
- Line 219: The current factory type readonly databaseFactory?: (name: string,
opts: ConfigOpts) => Database is synchronous but may force implementations to
hide async init inside Database.ready; decide and make the contract explicit:
either change the signature to readonly databaseFactory?: (name: string, opts:
ConfigOpts) => Promise<Database> to support async construction, or keep the sync
signature but add documentation/comments on the Database interface (and
QuickSilver/ready memoization) stating implementations must defer async
initialization to Database.ready; update types and comments accordingly and
adjust any call sites that construct Databases to match the chosen contract.
In `@use-fireproof/base/react/use-fireproof.ts`:
- Line 31: Update the JSDoc for the useFireproof function to document the new
parameter overload that accepts a Database object as well as the existing string
name overload: modify the comment above export function useFireproof to include
the Database type in the `@param` description and add a short example showing
useFireproof({ name: "...", adapter: ... }) (or the actual Database object shape
used) alongside the existing useFireproof("dbname") example; reference the
useFireproof function signature and the UseFPConfig/Database types so consumers
can see both invocation forms.
```ts
const commit = async (cid: string): Promise<Result<void>> => {
  try {
    try {
      await dir.getFileHandle(cid); // throws if not found
      // target already exists — discard temp instead of overwriting
      await dir.removeEntry(tempName);
    } catch {
      await (tempHandle as FileSystemFileHandle & { move(name: string): Promise<void> }).move(cid);
    }
    return Result.Ok(undefined);
  } catch (e) {
    return Result.Err(e instanceof Error ? e : new Error(String(e)));
  }
};
```
Commit conflates "not found" with other getFileHandle errors.
The inner catch on line 56 assumes that any error from getFileHandle(cid) means "file doesn't exist," but it could also be a permission or quota error. In that case, the code would attempt move() which could also fail, and the outer catch would surface a misleading error.
Consider checking the error type:
Proposed fix

```diff
  try {
    await dir.getFileHandle(cid); // throws if not found
    // target already exists — discard temp instead of overwriting
    await dir.removeEntry(tempName);
- } catch {
+ } catch (err) {
+   if (!(err instanceof DOMException && err.name === "NotFoundError")) {
+     return Result.Err(err instanceof Error ? err : new Error(String(err)));
+   }
    await (tempHandle as FileSystemFileHandle & { move(name: string): Promise<void> }).move(cid);
  }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@core/quick-silver/cid-storage/opfs.ts` around lines 50 - 63, The commit
function currently treats any error from dir.getFileHandle(cid) as "not found"
and proceeds to move the temp file; change this by inspecting the thrown error
from dir.getFileHandle(cid) (the inner try/catch) and only perform (tempHandle
as FileSystemFileHandle & { move(name: string): Promise<void> }).move(cid) when
the error clearly indicates a missing entry (e.g., DOMException.name ===
"NotFoundError" or other platform-specific not-found indicator); for any other
error (permission, quota, etc.) rethrow it so the outer catch can handle/report
the correct failure. Ensure you reference the existing symbols
dir.getFileHandle, tempHandle.move, tempName, and commit when applying the
change.
```ts
      // target already exists — discard temp instead of overwriting
      await dir.removeEntry(tempName);
    } catch {
      await (tempHandle as FileSystemFileHandle & { move(name: string): Promise<void> }).move(cid);
```
🧩 Analysis chain
🌐 Web query:
FileSystemFileHandle move method browser support 2025
💡 Result:
As of 2025, the FileSystemHandle.move() method (usable from FileSystemFileHandle/FileSystemDirectoryHandle) has this browser support:
- Safari / iOS Safari: Supported since 15.2. [1][2]
- Firefox (desktop & Android): Supported since 111. [1][2]
- Chromium browsers (Chrome / Edge / Opera, desktop & Android): Not supported in stable (no unflagged support). [1][2]
Notes for Chromium:
- Chrome’s own docs indicate `move()` is shipped for OPFS (Origin Private File System), but moving to/from outside OPFS is behind a flag, and directory moves aren’t supported. [3]

Detect it with:

```js
if ('move' in FileSystemHandle.prototype) {
  // move() available
}
```

Sources: Can I use table for FileSystemHandle.move [1], BCD snapshot [2], Chrome Developers article (OPFS/flag notes) [3].
🏁 Script executed:
```shell
# Find and examine the opfs.ts file
find . -name "opfs.ts" -type f
```

Repository: fireproof-storage/fireproof
Length of output: 110
🏁 Script executed:
```shell
# Read the opfs.ts file to see the context around line 57
cat -n ./core/quick-silver/cid-storage/opfs.ts | head -80
```

Repository: fireproof-storage/fireproof
Length of output: 3412
Add fallback for FileSystemFileHandle.move() in unsupported runtimes.
`move()` support is inconsistent: Safari 15.2+ and Firefox 111+ support it, Chromium stable has no unflagged support (though Chrome's docs say it ships for OPFS), and Node.js/Deno lack it. This code has no fallback for runtimes without it — the commit will simply fail. Implement a feature detection check (`if ('move' in FileSystemHandle.prototype)`) with a fallback that copies the file to the new name and deletes the temp, or document that this backend requires `move()` support.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@core/quick-silver/cid-storage/opfs.ts` at line 57, Replace the direct call to
await (tempHandle as FileSystemFileHandle & { move(name: string): Promise<void>
}).move(cid) with a feature-detection branch: check if 'move' exists on
FileSystemHandle.prototype (or use typeof (tempHandle as any).move ===
'function') and call move when available; otherwise implement a fallback that
opens the tempHandle file, reads its contents (via getFile() or a readable
stream), writes the data to a new file with the target name (using
getFileHandle/createWritable on the parent directory), and then remove the temp
handle (removeEntry/unlink) to emulate move. Update the code around tempHandle
and any uses in opfs.ts to use this conditional so unsupported runtimes
(Node/Deno/Chromium stable) use the copy-then-delete fallback.
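A copy-then-delete fallback could look roughly like this; the handle interfaces below are deliberately minimal stand-ins for the OPFS types, and the native `move()` signature is the one the existing code already casts to:

```typescript
// Minimal stand-ins for the OPFS handle surface this sketch touches.
interface WritableLike {
  write(data: ArrayBuffer): Promise<void>;
  close(): Promise<void>;
}
interface TempHandleLike {
  move?(name: string): Promise<void>; // Safari 15.2+, Firefox 111+, OPFS in Chromium
  getFile(): Promise<{ arrayBuffer(): Promise<ArrayBuffer> }>;
}
interface DirLike {
  getFileHandle(name: string, opts?: { create: boolean }): Promise<{ createWritable(): Promise<WritableLike> }>;
  removeEntry(name: string): Promise<void>;
}

async function moveOrCopy(dir: DirLike, temp: TempHandleLike, tempName: string, target: string): Promise<void> {
  if (typeof temp.move === "function") {
    await temp.move(target); // fast path where the runtime supports move()
    return;
  }
  // Fallback: copy the bytes to the target name, then drop the temp entry.
  const data = await (await temp.getFile()).arrayBuffer();
  const dest = await dir.getFileHandle(target, { create: true });
  const w = await dest.createWritable();
  await w.write(data);
  await w.close();
  await dir.removeEntry(tempName);
}
```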
```ts
    try {
      fileHandle = await dir.getFileHandle(cid);
    } catch {
      return Result.Ok(undefined);
    }
```
get() also conflates "not found" with other errors.
Same concern as in commit() — catching all exceptions from getFileHandle(cid) and treating them as "not found" can mask real failures (quota, permission, etc.).
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@core/quick-silver/cid-storage/opfs.ts` around lines 84 - 88, The get()
implementation currently swallows all errors from dir.getFileHandle(cid) and
treats them as "not found"; change the error handling so only genuine "not
found" errors are converted to Result.Ok(undefined) and all other exceptions are
propagated (or returned as Result.Error). Specifically, in the get() function
replace the broad catch around dir.getFileHandle(cid) with logic that inspects
the thrown error (e.g., check error.name === 'NotFoundError' or appropriate
DOMException/Quota/TypeError distinctions) and only return Result.Ok(undefined)
for that case, while rethrowing or returning Result.Error for
permission/quota/other failures; mirror the same pattern used to fix commit().
core/quick-silver/quick-silver.ts
Outdated
```ts
const rData = await CIDStorageService().store(uint8array2stream(this.sthis.ende.cbor.encodeToUint8(data)));
if (rData.isErr()) {
  continue;
}
```
Synchronous encodeToUint8 throw bypasses per-doc continue error handling.
this.sthis.ende.cbor.encodeToUint8(data) is called synchronously. If it throws (e.g., non-serializable value like a Symbol, circular reference), the exception unwinds the entire bulk() for-loop and the function rejects entirely, unlike all other failure paths in this loop which use continue. This is inconsistent and unexpected for callers.
🛠️ Proposed fix

```diff
-    const rData = await CIDStorageService().store(uint8array2stream(this.sthis.ende.cbor.encodeToUint8(data)));
-    if (rData.isErr()) {
-      continue;
-    }
+    let encodedData: Uint8Array;
+    try {
+      encodedData = this.sthis.ende.cbor.encodeToUint8(data);
+    } catch {
+      continue;
+    }
+    const rData = await CIDStorageService().store(uint8array2stream(encodedData));
+    if (rData.isErr()) {
+      continue;
+    }
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
let encodedData: Uint8Array;
try {
  encodedData = this.sthis.ende.cbor.encodeToUint8(data);
} catch {
  continue;
}
const rData = await CIDStorageService().store(uint8array2stream(encodedData));
if (rData.isErr()) {
  continue;
}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@core/quick-silver/quick-silver.ts` around lines 196 - 199, The synchronous
call this.sthis.ende.cbor.encodeToUint8(data) can throw and currently escapes
the bulk() loop; wrap the encode step in a try/catch inside the loop (before
calling CIDStorageService().store) and on catch handle the error the same way as
other per-document failures (e.g., log or record the failure and continue), then
pass the safely-obtained uint8array to uint8array2stream and call
CIDStorageService().store(uint8array2stream(encoded)). Ensure you reference the
same symbols (this.sthis.ende.cbor.encodeToUint8, uint8array2stream,
CIDStorageService(), and the enclosing bulk loop) so the encode exception
doesn't reject the whole bulk() call.
```ts
const rCheck = await IdxService().query({ dbname: this.name, idxName });
if (rCheck.isErr()) throw rCheck.Err();
const checkReader = rCheck.Ok().getReader();
let hasEntries = false;
try {
  const { done } = await checkReader.read();
  hasEntries = !done;
} finally {
  checkReader.releaseLock();
}

if (!hasEntries) {
  const rPrimary = await IdxService().query({ dbname: this.name, idxName: "_id" });
  if (rPrimary.isErr()) throw rPrimary.Err();
  const primaryReader = rPrimary.Ok().getReader();
  try {
    interface PendingEmit { k: IndexKeyType; v?: DocFragment }
    while (true) {
      const { done, value: entry } = await primaryReader.read();
      if (done) break;

      let doc: DocWithId<T>;
      try {
        doc = await this.get<T>(entry.keys[0]);
      } catch {
        continue;
      }
      const docId = entry.keys[0];

      const pending: PendingEmit[] = [];
      if (typeof field === "string") {
        const key = (doc as unknown as Record<string, unknown>)[field];
        if (key !== undefined) pending.push({ k: key as IndexKeyType });
      } else {
        const emit = (k: IndexKeyType, v?: DocFragment) => pending.push({ k, v });
        const ret = field(doc, emit);
        if (pending.length === 0 && ret !== undefined && ret !== null) {
          pending.push({ k: ret as IndexKeyType });
        }
      }

      for (const { k, v } of pending) {
        await IdxService().addToIdx({
          dbname: this.name,
          idxName,
          keys: [JSON.stringify(k), docId],
          cidUrl: entry.cidUrl,
          primaryKey: docId,
          meta: v !== undefined ? [{ type: "qs.emit.value", key: docId, payload: v }] : undefined,
        });
      }
    }
  } finally {
    primaryReader.releaseLock();
  }
}
```
Secondary indexes are frozen after the first query() call — new writes/deletes are invisible.
The !hasEntries guard builds a secondary index only when it is completely empty. After the first query() call populates the index, hasEntries will always be true, so the rebuild is permanently skipped. Documents written (via bulk/put) or deleted (via del) after the initial build are never reflected in subsequent query() results.
Concrete failure scenario:

- Write docs A, B, C.
- Call `query("name")` → secondary index built from A, B, C.
- Write doc D.
- Call `query("name")` again → `hasEntries = true`, rebuild skipped → D is absent from results.

The fix requires tracking secondary index staleness. Options:

- Invalidation flag: on every `bulk`/`del`, set a dirty flag per secondary index name; `query` rebuilds when dirty.
- Incremental update: `bulk` updates all known secondary indexes when writing; `del` removes entries from them.
- Always rebuild: drop the `hasEntries` guard (simplest correctness; expensive for large datasets).
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@core/quick-silver/quick-silver.ts` around lines 288 - 343, Secondary indexes
are only built when empty (hasEntries), so subsequent writes/deletes never
update them; make query rebuild when an index is stale by adding per-index
staleness tracking. Add a dirty flag map keyed by idxName (e.g.,
QuickSilver._idxDirty or similar), set that flag to true inside bulk/put/del
implementations whenever they write or remove docs, then in the query logic
shown (where hasEntries is computed from IdxService().query and idxName is used)
check the dirty flag and force the rebuild when dirty; after finishing the
rebuild (the loop using primaryReader and IdxService().addToIdx) clear the dirty
flag. Ensure the flag is checked/cleared atomically or under the same lock used
for building indexes to avoid races.
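The invalidation-flag option can be sketched independently of QuickSilver; the class and method names here (`IdxStaleness`, `markDirty`, `needsRebuild`, `markBuilt`) are illustrative, not the actual API:

```typescript
// Tracks which secondary indexes need a rebuild. Every write/delete marks all
// built indexes dirty; query() rebuilds when needsRebuild() is true and then
// clears the flag via markBuilt().
export class IdxStaleness {
  private readonly dirty = new Set<string>();
  private readonly built = new Set<string>();

  markDirty(): void {
    // a write or delete invalidates every index built so far
    for (const name of this.built) this.dirty.add(name);
  }

  needsRebuild(idxName: string): boolean {
    return !this.built.has(idxName) || this.dirty.has(idxName);
  }

  markBuilt(idxName: string): void {
    this.built.add(idxName);
    this.dirty.delete(idxName);
  }
}
```

The check-and-clear pair would run under the same lock as the index build to avoid the race the prompt mentions.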
```ts
const checkReader = rCheck.Ok().getReader();
let hasEntries = false;
try {
  const { done } = await checkReader.read();
  hasEntries = !done;
} finally {
  checkReader.releaseLock();
}
```
Partial stream read without cancel() may leave the underlying cursor open.
When hasEntries is true (one item was read successfully), releaseLock() is called but the stream is never cancel()led. If the IdxService backend uses an IndexedDB cursor internally, releasing the lock without cancelling does not necessarily close the cursor/transaction.
🛠️ Proposed fix

```diff
   const checkReader = rCheck.Ok().getReader();
   let hasEntries = false;
   try {
     const { done } = await checkReader.read();
     hasEntries = !done;
   } finally {
     checkReader.releaseLock();
+    if (hasEntries) rCheck.Ok().cancel();
   }
```

📝 Committable suggestion
```ts
const checkReader = rCheck.Ok().getReader();
let hasEntries = false;
try {
  const { done } = await checkReader.read();
  hasEntries = !done;
} finally {
  checkReader.releaseLock();
  if (hasEntries) rCheck.Ok().cancel();
}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@core/quick-silver/quick-silver.ts` around lines 290 - 297, The partial stream
read uses checkReader.read() to detect one item but only calls
checkReader.releaseLock(), which can leave an underlying cursor/transaction
open; update the logic around rCheck.Ok().getReader(), checkReader.read(), and
hasEntries so that if a read returns an item (hasEntries true) you call await
checkReader.cancel() before releasing the lock (keeping releaseLock() in the
finally block), and ensure any errors from cancel() are handled (e.g., try/catch
around cancel()) so the reader is properly closed even when a single item was
consumed.
- `fireproof.ts` — `fireproof(name, opts?)` factory, singleton per name via `KeyedResolvOnce<Database>`
- `quick-silver.ts` — `QuickSilver` class implementing `Database`
- `types.ts` — `QSConfigOpts { sthis?: SuperThis }`
- `envelope.ts` — arktype schemas `QCDoc`, `QCFile`, `QCEnvelope` + type guards `isQCDoc/isQCFile/isQCEnvelope`
Stale references to renamed types
Line 14 references QCDoc, QCFile, QCEnvelope and guards isQCDoc/isQCFile/isQCEnvelope, but envelope.ts actually exports QSFileMeta, QSDocMeta, isQSFileMeta, and isQSDocMeta. Consider updating this log to match, or note it was written before the rename.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@core/quick-silver/SessionLog.md` at line 14, The README entry is outdated:
replace the stale type names and guards referenced in SessionLog.md with the
current exports from envelope.ts — change `QCDoc`, `QCFile`, `QCEnvelope` and
`isQCDoc/isQCFile/isQCEnvelope` to `QSDocMeta`, `QSFileMeta` (and remove or
clarify `QCEnvelope` if not exported) and update guards to `isQSDocMeta` and
`isQSFileMeta`; alternatively add a short note that the log was written before
the rename and list both old and new names for clarity so references align with
the actual exports (`QSDocMeta`, `QSFileMeta`, `isQSDocMeta`, `isQSFileMeta`).
```ts
const database = useMemo(() => {
  if (isDatabase(nameOrDatabase)) {
    return nameOrDatabase;
  }
  return (config.databaseFactory ?? fireproof)(nameOrDatabase, config);
}, [nameOrDatabase, strConfig]);
```
useMemo dependency arrays for downstream hooks may miss Database-reference changes.
When nameOrDatabase is a Database object, the database memo (line 34) correctly recomputes when the reference changes. However, the downstream hooks on lines 42–45 depend on [database.name, strConfig], not on the database reference itself. If a caller passes a different Database instance that happens to share the same .name, the downstream hooks (useDocument, useLiveQuery, etc.) will keep using closures over the stale database object.
Consider adding database (or an identity-based key like a ref-id) to the downstream dependency arrays:
Proposed fix

```diff
-  const useDocument = useMemo(() => createUseDocument(database), [database.name, strConfig]);
-  const useLiveQuery = useMemo(() => createUseLiveQuery(database), [database.name, strConfig]);
-  const useAllDocs = useMemo(() => createUseAllDocs(database), [database.name, strConfig]);
-  const useChanges = useMemo(() => createUseChanges(database), [database.name, strConfig]);
+  const useDocument = useMemo(() => createUseDocument(database), [database, strConfig]);
+  const useLiveQuery = useMemo(() => createUseLiveQuery(database), [database, strConfig]);
+  const useAllDocs = useMemo(() => createUseAllDocs(database), [database, strConfig]);
+  const useChanges = useMemo(() => createUseChanges(database), [database, strConfig]);
```

📝 Committable suggestion
```ts
const database = useMemo(() => {
  if (isDatabase(nameOrDatabase)) {
    return nameOrDatabase;
  }
  return (config.databaseFactory ?? fireproof)(nameOrDatabase, config);
}, [nameOrDatabase, strConfig]);

const useDocument = useMemo(() => createUseDocument(database), [database, strConfig]);
const useLiveQuery = useMemo(() => createUseLiveQuery(database), [database, strConfig]);
const useAllDocs = useMemo(() => createUseAllDocs(database), [database, strConfig]);
const useChanges = useMemo(() => createUseChanges(database), [database, strConfig]);
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@use-fireproof/base/react/use-fireproof.ts` around lines 34 - 39, The
downstream hooks (useDocument, useLiveQuery, etc.) currently depend on
[database.name, strConfig] and can miss changes when the database reference
changes but the name is identical; update their dependency arrays to include the
memoized database reference (the local variable database produced by useMemo) or
an identity token derived from it so closures see the new instance; locate the
useMemo that defines database and then add database (or databaseRef/id) into the
dependency lists of the downstream hooks to ensure they recompute when the
Database object identity changes.
…scribe, and streaming query
- Types package: QSReqGet/Put/Query, QSReqRegisterSubscribe/Unregister, QSResGet/Put/Err/NotFound,
QSResRegisterSubscribe (carries db), QSEvtSubscribe (msg: {key, data}), QSOpRes union, isXXX guards
- API client (qs-api.ts): singleton per URL via KeyedResolvOnce, per-op tid for get/put correlation,
ReadableStream returns for get/put/query, QSSubscribeHandle {events, close} for subscribe,
timeouted wrapper for request timeout, Lazy WS connect
- CLI (cli.ts): get (--keys repeatable), put (--pkg key,{json} repeatable), query, subscribe commands
- CF QSRoom DO: routes get/put/query to QSDBStore via LRU-cached WS per db; handles
register/unregister subscribe inline; uses ws.serializeAttachment/deserializeAttachment +
ctx.getWebSockets() for hibernation-safe subscription state; fans out QSEvtSubscribe to subscribers
rewriting tid per subscriber
- CF QSDBStore DO: SQLite-backed store via Evento pattern; emits QSEvtSubscribe after every put
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
… file CID hashing
- Add subscribe protocol types: QSReqRegisterSubscribe/Unregister,
QSResRegisterSubscribe (carries db), QSEvtSubscribe (msg:{key,data}, no arg)
- Add qs-room-evento.ts: Evento-based dispatch for QSRoom with ctx:this pattern;
room(ctx)/clientWs(ctx) helpers mirror handlerSql(ctx) from db-evento
- Fix ctx.send.provider access: Evento wraps the send provider in EventoSend;
must use (ctx.send as {provider:QSSendProvider}).provider.ws to reach ws
- QSRoom uses ws.serializeAttachment/deserializeAttachment for hibernation-safe
subscription registry; dispatch via ctx.getWebSockets()
- Emit QSEvtSubscribe from db-store after every successful put
- API: QSSubscribeHandle {events, close()} instead of bare ReadableStream;
per-op tid with Set<pending> for multi-op correlation
- CLI: subscribe command with SIGINT/SIGTERM cleanup; --pkg and --keys as
multioption; process.exit(0) after run completes
- core/envelope: refactor to QS_Base/Doc/File/DocFile nested structure
- core/quick-silver: real file CID hashing via hashBlobAsync; inline file
bytes in QCFile; get() resolves fileRefs into QS_DocFile
- core/runtime: add hashBlobAsync(uint8) for File/Blob/Uint8Array hashing
- pnpm: register cloud/quick-silver/* workspaces
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
… with LRU doc cache

- Add CIDStorageService (Dexie/OPFS backends): content-addressed storage with SHA-256 CID hashing
- Add IdxService (Dexie): keyed index with meta-merging, soft-delete, asc/desc order, select filter
- Rename key→keys:string[] on IdxEntry/AddToIdxOpts for correctness; add primaryKey for cache lookup
- Implement QuickSilver.get/put/bulk/del/allDocs/allDocuments/query/destroy/subscribe on Database API
- query: lazy persistent secondary indexes (field string → _field_<name>, MapFn → _map_<hash>)
- query: key/keys/range/prefix/descending/limit/includeDocs opts; streams directly from IdxService
- Doc cache: KeyedResolvOnce<DocWithId> with LRU eviction (setParam lru.maxEntries), invalidated on write/delete
- Add _meta?: unknown[] to DocBase and databaseFactory to ConfigOpts in core-types-base
- useFireproof: support databaseFactory in config for injecting alternative Database implementations
- 52 tests passing (10 cid-storage, 11 idx-service, 31 fireproof)

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
118d876 to 22df480
Actionable comments posted: 18
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
cloud/3rd-party/src/App.tsx (1)
44-55: ⚠️ Potential issue | 🟠 Major

Stale `attach` UI references are unsafe with a non-cloud `databaseFactory`.

`useFireproof` is called with `databaseFactory` only (no `attach` config), so `createAttach` returns a default/detached state. Lines 44–55 access `attach.ctx.tokenAndClaims.state`, `attach.ctx.tokenAndClaims.tokenAndClaims.token`, and `attach.error.message`. If the default attach state doesn't carry `ctx.tokenAndClaims` or `error`, these accesses will throw at runtime.

Either guard the accesses or remove the cloud-attach UI block entirely, since this example is now demonstrating the local `QuickSilver` path.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cloud/3rd-party/src/App.tsx` around lines 44 - 55, The UI currently dereferences properties on a possibly-detached attach (created via useFireproof/databaseFactory) causing runtime errors; update the render and click handlers to only access attach.ctx.tokenAndClaims and attach.error when attach.state indicates those exist (e.g., guard with attach.state === "ready" before reading attach.ctx.tokenAndClaims.tokenAndClaims.token and before calling attach.ctx.tokenAndClaims.reset(), and check attach.state === "error" and attach.error != null before reading attach.error.message), or remove the entire cloud-attach card if you intend to show only the local QuickSilver path (references: attach, useFireproof, createAttach, attach.ctx.tokenAndClaims, attach.error.message).
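Guarded access can be sketched against a hypothetical minimal shape of the attach state; the field names below only mirror the accesses quoted above, they are not the real use-fireproof types:

```typescript
// Illustrative shape only — optional fields model the default/detached state.
interface AttachLike {
  state: string;
  ctx?: { tokenAndClaims?: { state?: string; tokenAndClaims?: { token?: string } } };
  error?: { message: string };
}

// Optional chaining turns the throwing accesses into undefined for detached states.
export function attachToken(attach: AttachLike): string | undefined {
  return attach.ctx?.tokenAndClaims?.tokenAndClaims?.token;
}

export function attachErrorMessage(attach: AttachLike): string | undefined {
  return attach.state === "error" ? attach.error?.message : undefined;
}
```

The render path would show the token/error cards only when these helpers return a value.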
♻️ Duplicate comments (18)
core/gateways/indexeddb/gateway-impl.ts (1)
170-170: ⚠️ Potential issue | 🟡 Minor

Remove debug `console.log` before merging.

The `console.log(">>>>>>", url.toString())` on line 170 is a leftover debug artifact. The structured logger call immediately below it already captures the same URL information through the proper logging channel.

🧹 Proposed fix

```diff
-      console.log(">>>>>>", url.toString());
       sthis.logger.Debug().Url(url).Str("key", key).Str("store", store).Msg("putting");
```

Based on learnings, `console.log` statements should be avoided in production code; they are only acceptable in example/demo code.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@core/gateways/indexeddb/gateway-impl.ts` at line 170, Remove the leftover debug console.log call by deleting the line console.log(">>>>>>", url.toString()); in the gateway implementation so only the structured logger remains; locate the print in the same block where the structured log that references url.toString() is invoked (gateway-impl.ts, near the code handling URL construction/usage) and ensure no other console.log debug prints remain in that function or surrounding methods.core/quick-silver/cid-storage/test-noble.html (1)
141-143: `log("%c✓ CIDs match", "color:green")` garbles the page output

`log()` appends to `textContent`; the `%c` directive is only interpreted by the browser console. The page will display the literal text `%c✓ CIDs match color:green` followed by the `<span class="ok">` line, producing duplicated and garbled output.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@core/quick-silver/cid-storage/test-noble.html` around lines 141 - 143, The page output is garbled because log("%c✓ CIDs match", "color:green") uses a console-only "%c" formatting token but log() appends to textContent; replace that call with a console logging call or remove the "%c" styling so the on-page function isn't given console-format tokens—e.g., change the log(...) invocation to console.log("%c✓ CIDs match", "color:green") or to log("✓ CIDs match") to avoid the duplicated/garbled output while keeping the existing DOM update that appends the <span class="ok"> line.core/runtime/utils.ts (1)
618-618: ⚠️ Potential issue | 🔴 Critical

`ToUint8` (lowercase i) is still unresolved — `ToUInt8` (uppercase I) is the correct imported type.

`ToUInt8` is imported from `@fireproof/core-types-base` at line 31, but the function signature on line 618 uses `ToUint8` (lowercase i), which is not defined or imported anywhere. This was flagged in a prior review as "✅ Addressed" but remains in the current code and will cause a TypeScript compile error.

Proposed fix

```diff
-export async function hashBlobAsync(uint8: ToUint8): Promise<string> {
+export async function hashBlobAsync(uint8: ToUInt8): Promise<string> {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@core/runtime/utils.ts` at line 618, The function signature for hashBlobAsync uses the misspelled/undefined type ToUint8; replace that with the correctly imported ToUInt8 from `@fireproof/core-types-base` so TypeScript compiles. Update the hashBlobAsync declaration and any occurrences/overloads that reference ToUint8 to use ToUInt8 (identify the function by name hashBlobAsync) and run a type-check to ensure there are no other references to the incorrect symbol.use-fireproof/base/react/use-fireproof.ts (1)
42-45: ⚠️ Potential issue | 🟠 Major

Downstream hook dependency arrays still use `database.name` instead of the `database` reference.

This was flagged in the previous review and remains unaddressed: when a caller passes a new `Database` instance that happens to share the same `.name` as the previous one, the memos for `useDocument`, `useLiveQuery`, `useAllDocs`, and `useChanges` won't recompute, leaving closures over the stale instance.

Proposed fix
Databaseinstance that happens to share the same.nameas the previous one, the memos foruseDocument,useLiveQuery,useAllDocs, anduseChangeswon't recompute, leaving closures over the stale instance.Proposed fix
- const useDocument = useMemo(() => createUseDocument(database), [database.name, strConfig]); - const useLiveQuery = useMemo(() => createUseLiveQuery(database), [database.name, strConfig]); - const useAllDocs = useMemo(() => createUseAllDocs(database), [database.name, strConfig]); - const useChanges = useMemo(() => createUseChanges(database), [database.name, strConfig]); + const useDocument = useMemo(() => createUseDocument(database), [database, strConfig]); + const useLiveQuery = useMemo(() => createUseLiveQuery(database), [database, strConfig]); + const useAllDocs = useMemo(() => createUseAllDocs(database), [database, strConfig]); + const useChanges = useMemo(() => createUseChanges(database), [database, strConfig]);🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@use-fireproof/base/react/use-fireproof.ts` around lines 42 - 45, The memoizations for useDocument, useLiveQuery, useAllDocs, and useChanges are keyed on database.name so they won't update if a new Database instance with the same name is passed; update the dependency arrays to use the database reference itself (i.e., change the dependencies for createUseDocument, createUseLiveQuery, createUseAllDocs, createUseChanges from [database.name, strConfig] to [database, strConfig]) so the useMemo recomputes when the Database object changes.cloud/quick-silver/api/qs-api.ts (2)
290-298: [Duplicate] `close()` does not evict from the `connections` cache.

Already flagged in a previous review: after `close()`, the resolved (now-dead) `QSApiImpl` remains cached, causing future `QSApi(opts)` calls to return a closed instance.
Verify each finding against the current code and only fix it if needed. In `@cloud/quick-silver/api/qs-api.ts` around lines 290 - 298, The close() method currently closes the websocket but does not remove the corresponding QSApiImpl from the connections cache, so subsequent QSApi(opts) calls can return a closed instance; update close() (on QSApiImpl) to also evict the instance from the shared connections cache (the same map used by QSApi(opts)) after/while closing the ws, ensuring eviction happens even if close fails (i.e., in both then and catch/finally paths) so the cache no longer holds the dead QSApiImpl.
50-54: [Duplicate] Connection cache keyed only by `opts.url`.

Already flagged in a previous review: when two callers share the same URL but have different `db` or `auth`, the second gets the wrong cached instance.
Verify each finding against the current code and only fix it if needed. In `@cloud/quick-silver/api/qs-api.ts` around lines 50 - 54, The cache key currently uses only opts.url in QSApi (connections.get(opts.url).once(...)) which causes incorrect reuse when callers differ by opts.db or opts.auth; change the key to incorporate the distinguishing fields (e.g., a stable stringified key built from opts.url, opts.db and opts.auth or a small helper like makeQsApiKey(opts)) and use that key with KeyedResolvOnce in QSApi, ensuring the serialization handles undefined values consistently so different option combinations produce distinct cache entries for QSApiImpl.cloud/quick-silver/cf/qs-db-store.ts (1)
47-49: `webSocketClose` doesn't normalize code 1006 — will throw on abnormal drops.

Code 1006 is a reserved WebSocket close code that must not be sent in a `close()` frame. The CF runtime synthesizes it for abnormal disconnects. The sibling `QSRoom` already handles this (mapping 1006 → 1000), but `QSDBStore` passes the raw code through.

Proposed fix

```diff
 webSocketClose(ws: WebSocket, code: number, reason: string): void {
-  ws.close(code, reason);
+  const safeCode = code === 1006 ? 1000 : code;
+  ws.close(safeCode, reason);
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cloud/quick-silver/cf/qs-db-store.ts` around lines 47 - 49, The webSocketClose method in QSDBStore should normalize the reserved close code 1006 to a safe sendable code (e.g., 1000) before calling ws.close to avoid throwing on abnormal drops; update webSocketClose(ws: WebSocket, code: number, reason: string) to check if code === 1006 and replace it with 1000 (and ensure reason is a string/default if needed) mirroring the QSRoom mapping behavior so ws.close is never called with 1006.cloud/quick-silver/cf/qs-db-evento.ts (2)
96-121: Wildcard and error handlers lose the original request's `tid` and `arg`.

Both handlers hardcode `tid: "unknown"` and `arg: 0`, preventing clients from correlating error responses to their pending requests.
Verify each finding against the current code and only fix it if needed. In `@cloud/quick-silver/cf/qs-db-evento.ts` around lines 96 - 121, The wildcard and error handlers (EventoType.WildCard handler with hash "qs-unknown" and EventoType.Error handler with hash "qs-error") are hardcoding tid and arg so clients can't correlate responses; update both handlers to populate tid and arg from the original request (use ctx.enRequest?.tid ?? "unknown" and ctx.enRequest?.arg ?? 0 or equivalent) when building the QSResErr payload, preserving the existing error text and satisfying the QSResErr type.
32-38: PUT handler sets `cid` to `req.key` — both `id` and `cid` get the same value.

The `cid` column is meant to hold a content identifier, but `req.key` is used for both `id` and `cid` parameters. Downstream, `QSResQueryRow` surfaces both fields as distinct, so consumers will see `id === cid` for every row.
Verify each finding against the current code and only fix it if needed. In `@cloud/quick-silver/cf/qs-db-evento.ts` around lines 32 - 38, The PUT handler is inserting the wrong value for the cid column (using req.key for both id and cid); update the sql.exec call so the second parameter passed for the cid placeholder uses the content identifier (req.cid or whichever field holds the CID) instead of req.key, keeping id as req.key and data as req.data so QSResQueryRow will reflect distinct id and cid values.cloud/quick-silver/cf/qs-room.ts (3)
20-20: Register an `onDelete` callback on the `LRUMap` to close evicted WebSocket connections.

When the LRU cache evicts a store WebSocket entry, the connection is leaked. `@adviser/cement`'s `LRUMap` supports `onDelete()` for this purpose.
Verify each finding against the current code and only fix it if needed. In `@cloud/quick-silver/cf/qs-room.ts` at line 20, The LRUMap instantiation for stores currently leaks WebSocket connections on eviction; register an onDelete callback on the stores LRUMap (right after the new LRUMap<string, WebSocket>({ maxEntries: 64 }) assignment) that receives the evicted WebSocket and closes it (e.g., check the socket is not already closed and call ws.close() or ws.terminate()), and also remove any attached listeners if applicable; implement this in the qs-room class where the private readonly stores field is defined so evicted entries are properly cleaned up.
79-101: Cached store WebSocket captures the first caller's `clientWs` — responses for other clients are mis-routed.

`getStoreWs` caches the store WebSocket per `db`, but the `message` event handler (Line 97) always sends non-subscribe responses to the `clientWs` from the first call. A second client hitting the same `db` will have its responses routed to the wrong WebSocket. This needs a `tid → clientWs` routing map.
Verify each finding against the current code and only fix it if needed. In `@cloud/quick-silver/cf/qs-room.ts` around lines 79 - 101, getStoreWs currently caches a single store WebSocket but its message handler closes over the first call's clientWs, causing responses for subsequent clients to be mis-routed; update getStoreWs (and the handler created inside the stores.getSet callback) to maintain a per-db routing map from tid→clientWs (e.g., a Map<string,WebSocket> stored alongside the cached storeWs), have the handler decode incoming messages, extract the tid (for non-subscribe messages) and forward the payload to the mapped clientWs instead of the closed-over clientWs, and ensure you update/cleanup the map when clients connect/disconnect (and keep existing subscribe handling via dispatchSubscribeEvent and isQSEvtSubscribe).
104-121: `webSocketClose` destroys all store connections when any single client disconnects.

Lines 108-115 iterate and close every cached store WebSocket, then clear the entire store map. In a multi-client room, one client disconnecting kills the store connections for all other active clients.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cloud/quick-silver/cf/qs-room.ts` around lines 104 - 121, The webSocketClose function is closing every cached store WebSocket and clearing this.stores when any client disconnects; change it to only close store connections owned by the disconnecting client. Update the data model so you track store sockets per client (e.g., add a client->Set<WebSocket> map or convert this.stores to map storeId->Set<clientWs>), then in webSocketClose(ws, ...) iterate only the set for that ws (closing each storeWs and removing it from shared sets/counts) and delete that client entry instead of this.stores.clear(); ensure you do not close a storeWs still referenced by other clients by using per-client sets or reference counts before calling storeWs.close.core/quick-silver/idx-service/service.ts (1)
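The `tid → clientWs` routing the comments above call for can be sketched independently of the Durable Object runtime; `TidRouter` is a hypothetical name, and `deliver` stands in for `clientWs.send`:

```typescript
// One pending entry per in-flight request tid; delivery is one-shot, so the
// map cannot grow unboundedly for request/response pairs.
export class TidRouter {
  private readonly pending = new Map<string, (msg: unknown) => void>();

  register(tid: string, deliver: (msg: unknown) => void): void {
    this.pending.set(tid, deliver);
  }

  // Returns false when no client is waiting for this tid.
  route(msg: { tid: string }): boolean {
    const deliver = this.pending.get(msg.tid);
    if (!deliver) return false;
    this.pending.delete(msg.tid);
    deliver(msg);
    return true;
  }
}
```

The store's `message` handler would call `route()` instead of sending to the captured `clientWs`; subscribe events, which are not per-request, would keep their separate fan-out path.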
189-194: `ReadableStream` will hang if `collection.each()` rejects.
If the Dexie iteration throws, the stream controller is never closed, leaving consumers waiting indefinitely.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@core/quick-silver/idx-service/service.ts` around lines 189 - 194, The ReadableStream start(ctrl) assumes await collection.each(...) never rejects, so if Dexie iteration throws the stream is left open; wrap the await collection.each(...) call in a try/catch inside the start(ctrl) function and on error call ctrl.error(err) (or at minimum ctrl.close() after signaling) so the stream consumer is not left hanging; update the ReadableStream creation around the start(ctrl) block that calls collection.each to ensure any thrown errors are caught and passed to ctrl.error.core/quick-silver/fireproof.test.ts (2)
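The suggested fix can be sketched in isolation (a hedged illustration, not the project's code; `toStream` and `iterate` are hypothetical names, with `iterate` standing in for Dexie's `collection.each`):

```typescript
// Sketch: wrap the async iteration so a rejection reaches the consumer
// via ctrl.error() instead of leaving the stream open forever.
// `iterate` is a stand-in for Dexie's `collection.each`.
function toStream<T>(iterate: (emit: (v: T) => void) => Promise<void>): ReadableStream<T> {
  return new ReadableStream<T>({
    async start(ctrl) {
      try {
        await iterate((v) => ctrl.enqueue(v));
        ctrl.close();
      } catch (err) {
        ctrl.error(err); // pending read() calls now reject instead of hanging
      }
    },
  });
}
```

With this shape, a consumer's `read()` rejects promptly when iteration fails, instead of awaiting forever.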
9-13: No cleanup for the singleton test — `"db"` entry persists in the singleton map.
The test creates a singleton via `fireproof("db")` but never calls `destroy()`, so the `"db"` entry remains in `databasesByName` for the lifetime of the test process. Add `afterEach` to clean up.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@core/quick-silver/fireproof.test.ts` around lines 9 - 13, The test creates a singleton via fireproof("db") but doesn't remove it from the internal singleton map, leaving the "db" entry in databasesByName across tests; add an afterEach cleanup that calls destroy() (or the exported teardown method used to remove entries) for the created instance or explicitly remove the "db" entry from databasesByName after the test. Locate the test using the fireproof(...) call and ensure afterEach calls the instance.destroy() (or appropriate cleanup function) to clear databasesByName so the singleton won't persist between tests.
331-341: `close` test has no cleanup — risks listener accumulation on re-runs.
The `"close-test"` singleton is never destroyed, so in watch mode `fireproof("close-test")` returns the already-closed instance with the prior `onClosed` listener still attached.
Verify each finding against the current code and only fix it if needed. In `@core/quick-silver/fireproof.test.ts` around lines 331 - 341, The test creates a singleton via fireproof("close-test") and registers an onClosed listener but never ensures the singleton is unique or cleaned up, causing listener accumulation on re-runs; update the test to instantiate a fresh DB per run (e.g., give fireproof a unique name/nonce) or explicitly remove the listener from the instance after the assertion, making changes around the fireproof("close-test") call and the onClosed registration/cleanup so the listener does not persist across test runs (refer to the fireproof(...) call, db.onClosed(...), and db.close()).core/quick-silver/quick-silver.ts (3)
179-182: ⚠️ Potential issue | 🟠 Major
`put()` silently returns `{ id: undefined }` when the write fails.
If `bulk()` skips all docs (e.g., every `IdxService().addToIdx` call fails), `writtenDocs` is empty and `ids[0]` is `undefined`, violating the `DocResponse.id: string` contract.
Verify each finding against the current code and only fix it if needed. In `@core/quick-silver/quick-silver.ts` around lines 179 - 182, The put() method can return { id: undefined } when bulk() writes no documents (writtenDocs empty), violating DocResponse.id: string; update put() to detect when bulk() returns no ids (e.g., ids.length === 0 or !ids[0]) and throw or reject with a descriptive error instead of returning undefined. Locate put(), bulk(), and the path where IdxService().addToIdx may skip docs and ensure either bulk() surfaces a clear error when no docs are written or put() validates the returned ids and throws a meaningful Error (including context like doc.name or operation) so consumers never receive an undefined id.
341-350: ⚠️ Potential issue | 🟡 Minor
Partial stream read without `cancel()` may leave the underlying cursor open.
After consuming one item to check `hasEntries`, the reader lock is released but the stream is never cancelled, which can leave an IndexedDB cursor/transaction open.
Verify each finding against the current code and only fix it if needed. In `@core/quick-silver/quick-silver.ts` around lines 341 - 350, The one-item probe uses getReader() and read() but never cancels the stream, leaving the underlying cursor/transaction open; modify the block around IdxService().query(...).getReader() so that after reading you call and await checkReader.cancel() (always or when !done) to close the stream, and still release the lock in the finally block (i.e., await checkReader.cancel() before checkReader.releaseLock()); keep existing error handling around rCheck and ensure cancel() is awaited to close the underlying cursor.
341-438: ⚠️ Potential issue | 🟠 Major
Secondary indexes are never rebuilt after the initial `query()` — new writes/deletes are invisible.
The `!hasEntries` guard builds the index only once. After the first `query()` populates it, every subsequent call skips the rebuild entirely, so documents written or deleted after that point are absent from query results.
Verify each finding against the current code and only fix it if needed. In `@core/quick-silver/quick-silver.ts` around lines 341 - 438, The rebuild logic incorrectly runs only when hasEntries is false, so secondary indexes never reflect later writes/deletes; change the flow in QuickSilver (around IdxService().query / rCheck, hasEntries and the if (!hasEntries) block) to always (re)compute or update the index before returning query results—either remove the hasEntries guard so the primaryReader loop that calls this.get, findQSDocMeta and IdxService().addToIdx always executes, or replace it with a staleness check that forces rebuilding; ensure primaryReader.releaseLock() and checkReader.releaseLock() still run in finally blocks and keep the existing addToIdx call to write MetaEntry entries.
🧹 Nitpick comments (11)
core/quick-silver/cid-storage/dexie.ts (2)
32-39: `size` is a redundant alias for `total`
`total` is already `data.byteLength` immediately after `data` is allocated and filled; `size` adds no new information.
♻️ Proposed cleanup

```diff
 const data = new Uint8Array(total);
 let offset = 0;
 for (const chunk of chunks) {
   data.set(chunk, offset);
   offset += chunk.byteLength;
 }
-const size = data.byteLength;
 const commit = async (cid: string): Promise<Result<void>> => {
   return this.commitSeq.add(async () =>
     exception2Result(async (): Promise<Result<void>> => {
       const existing = await this.blobs.get(cid);
       if (!existing) {
-        await this.blobs.add({ cid, data, size, created: new Date() }, cid);
+        await this.blobs.add({ cid, data, size: total, created: new Date() }, cid);
       }
       return Result.Ok(undefined);
     }),
   );
 };
 // ...
-return Result.Ok({ commit, rollback, size });
+return Result.Ok({ commit, rollback, size: total });
```
Verify each finding against the current code and only fix it if needed. In `@core/quick-silver/cid-storage/dexie.ts` around lines 32 - 39, The variable `size` is redundant because `total` already equals `data.byteLength`; remove the `size` alias and replace any subsequent uses of `size` with `total` (or directly use `data.byteLength`) in the functions handling chunk assembly (search for `total`, `data`, `size`, and `chunks` in this module), ensuring no references to `size` remain and tests/typing still pass.
14-78: Consider exposing a `close()` method to release the Dexie connection
`DexieStorageBackendImpl` holds an open Dexie database but provides no way to close it. Callers (especially tests or short-lived contexts) cannot cleanly release the IndexedDB connection, which can block re-opening a same-named database and cause flaky test behaviour.
♻️ Suggested addition

```diff
+  async close(): Promise<void> {
+    this.dexie.close();
+  }
```
Verify each finding against the current code and only fix it if needed. In `@core/quick-silver/cid-storage/dexie.ts` around lines 14 - 78, DexieStorageBackendImpl keeps the Dexie connection open; add a public close() method that calls this.dexie.close() to release the IndexedDB connection, update the StorageBackend interface (and any callers/tests) to include/expect close(): void (or Promise<void> if you prefer async), and ensure any pending resources (e.g., commitSeq) are resolved/cleaned before/after closing; implement close inside DexieStorageBackendImpl to call this.dexie.close() and perform any necessary cleanup so tests can reliably reopen a same-named DB.core/quick-silver/cid-storage/test-noble.html (1)
40-43: CDN imports have no version pins
All four esm.sh URLs omit version specifiers (e.g., `@noble/hashes/sha2` instead of `@noble/hashes@1.x/sha2`). Unpinned CDN imports mean any upstream breaking release silently changes the behaviour of this test page.
Verify each finding against the current code and only fix it if needed. In `@core/quick-silver/cid-storage/test-noble.html` around lines 40 - 43, The CDN imports for sha256, CID, raw, and createDigest are unpinned and risk breakage; update each esm.sh URL to include explicit package versions (e.g., `@noble/hashes`@<version>/sha2, multiformats@<version>/cid, multiformats@<version>/codecs/raw, multiformats@<version>/hashes/digest) so imports for symbols sha256, CID, raw, and create (createDigest) reference fixed releases; pick the current tested exact versions (not floating ranges) and replace the four import URLs accordingly.core/quick-silver/package.json (1)
40-44: Inconsistent workspace version specifiers.
`core-gateways-indexeddb` and `core-types-base` use `workspace:*` while `core-runtime` and `core-device-id` use `workspace:0.0.0`. Pick one convention for all workspace references.
Verify each finding against the current code and only fix it if needed. In `@core/quick-silver/package.json` around lines 40 - 44, The package.json has inconsistent workspace specifiers for dependencies: change "@fireproof/core-gateways-indexeddb", "@fireproof/core-types-base", "@fireproof/core-runtime", and "@fireproof/core-device-id" to use a single convention (either all "workspace:*" or all "workspace:0.0.0") so they are consistent; update the four dependency entries in core/quick-silver/package.json (the listed package keys) to the chosen specifier and commit that change.core/quick-silver/envelope.ts (1)
66-68: `isQSEmitIdxValueMeta` naming is inconsistent with its type (`QSIdxValueMeta`) and the naming pattern of the sibling guards.
The three sibling guards follow a clear pattern where the guard name matches the type: `isQSFileMeta` → `QSFileMeta`, `isQSDeviceMeta` → `QSDeviceMeta`, `isQSDocMeta` → `QSDocMeta`. The "Emit" prefix in `isQSEmitIdxValueMeta` does not appear in the type alias `QSIdxValueMeta`, breaking this convention. This mismatch is already acknowledged in the codebase — the function is imported with an alias in `core/quick-silver/quick-silver.ts:40` as `isQSEmitIdxValueMeta as isQSIdxValueMeta,` to work around the naming discrepancy.
Rename to `isQSIdxValueMeta` to align with the established pattern and eliminate the aliased import.
Proposed fix

```diff
-export function isQSEmitIdxValueMeta(x: unknown): x is QSIdxValueMeta {
+export function isQSIdxValueMeta(x: unknown): x is QSIdxValueMeta {
```
Verify each finding against the current code and only fix it if needed. In `@core/quick-silver/envelope.ts` around lines 66 - 68, The guard function is misnamed: change the exported function name isQSEmitIdxValueMeta to isQSIdxValueMeta so it matches the QSIdxValueMeta type and the naming pattern of sibling guards (isQSFileMeta, isQSDeviceMeta, isQSDocMeta); update the function declaration and any exports, and remove the aliasing import that currently maps isQSEmitIdxValueMeta as isQSIdxValueMeta (e.g. in core/quick-silver/quick-silver.ts) so all usages/imports refer directly to isQSIdxValueMeta.core/quick-silver/fireproof.test.ts (1)
323-328: Unnecessary `db.del("no-score")` — `afterEach` already calls `destroy()`.
The manual deletion at Line 327 is redundant since the `afterEach` hook destroys the entire database. Not harmful, but adds noise.
Verify each finding against the current code and only fix it if needed. In `@core/quick-silver/fireproof.test.ts` around lines 323 - 328, The test contains an unnecessary explicit deletion call db.del("no-score") at the end of the "skips docs without the queried field" it-block; remove that line so the test relies on the existing afterEach hook (which calls destroy()) to clean up the DB, keeping the it test focused and noise-free.core/quick-silver/idx-service/service.ts (1)
112-117: Redundant `.then(() => undefined)` in `put` transaction helper.
The intermediate `.then(() => undefined)` is immediately followed by another `.then(() => Result.Ok(undefined))`, making the first one a no-op.
Proposed fix

```diff
       put: (entry: IdxEntry) =>
         dtx.idx
           .put(entry)
-          .then(() => undefined)
           .then(() => Result.Ok(undefined))
           .catch((e) => Result.Err(e as Error)),
```
Verify each finding against the current code and only fix it if needed. In `@core/quick-silver/idx-service/service.ts` around lines 112 - 117, The put helper in the transaction (the put: (entry: IdxEntry) => dtx.idx.put(entry) ...) contains a redundant .then(() => undefined) before .then(() => Result.Ok(undefined)); remove the intermediate .then(() => undefined) so the promise chain becomes dtx.idx.put(entry).then(() => Result.Ok(undefined)).catch((e) => Result.Err(e as Error)) to eliminate the no-op and keep behavior intact.cloud/quick-silver/cf/qs-db-store.ts (1)
39-43: Replace `console.log` with structured logging via `this.sthis.logger`.
Static analysis flags these as unexpected console statements. The `sthis` instance already provides a logger — use it for consistency with the rest of the codebase and to get proper log levels.
Proposed fix

```diff
-      console.log("[QSDBStore] rejected string message");
+      this.sthis.logger.Warn().Msg("[QSDBStore] rejected string message");
 ...
-      console.log("[QSDBStore] dispatching binary message, bytes:", (msg as ArrayBuffer).byteLength);
+      this.sthis.logger.Debug().Int("bytes", (msg as ArrayBuffer).byteLength).Msg("[QSDBStore] dispatching binary message");
```
Verify each finding against the current code and only fix it if needed. In `@cloud/quick-silver/cf/qs-db-store.ts` around lines 39 - 43, Replace the two console.log calls in QSDBStore with the structured logger on the instance: use this.sthis.logger.warn (or error) to log the rejected string message and include contextual fields (tid/arg) when calling sendProvider.send, and use this.sthis.logger.debug or info to log the "dispatching binary message" line including the byte length from (msg as ArrayBuffer).byteLength; remove the console.log lines and ensure messages follow the existing logger usage pattern on sthis.logger for consistency.core/quick-silver/idx-service/primary-key-strategy.ts (1)
50-60: `getDocFileMeta` can be simplified with `Array.filter`.
The `reduce` is functionally equivalent to a simple filter.
Proposed simplification

```diff
   getDocFileMeta(meta: MetaEntry[] = []): (QSDocMeta | QSFileMeta)[] {
-    return meta.reduce(
-      (cids, m) => {
-        if (isQSDocMeta(m) || isQSFileMeta(m)) {
-          cids.push(m);
-        }
-        return cids;
-      },
-      [] as (QSDocMeta | QSFileMeta)[],
-    );
+    return meta.filter((m): m is QSDocMeta | QSFileMeta => isQSDocMeta(m) || isQSFileMeta(m));
   }
```
Verify each finding against the current code and only fix it if needed. In `@core/quick-silver/idx-service/primary-key-strategy.ts` around lines 50 - 60, The getDocFileMeta function currently uses Array.reduce to collect entries that satisfy isQSDocMeta or isQSFileMeta; replace that reduce with a direct Array.filter on the meta parameter (keeping the default param MetaEntry[] = []) and return type (QSDocMeta | QSFileMeta)[] so the function simply returns meta.filter(m => isQSDocMeta(m) || isQSFileMeta(m)); this preserves behavior but simplifies the implementation and improves readability.cloud/quick-silver/cf/qs-room-evento.ts (1)
52-52: Replace `console.log` with structured logger.
Static analysis flags `console.log` at Lines 52, 71, and 85. Use the `sthis.logger` from the enclosing `Lazy` scope for consistent log levels and structured output.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cloud/quick-silver/cf/qs-room-evento.ts` at line 52, Replace the three plain console.log calls with the structured logger available on the Lazy scope (use this.logger) so logs are emitted with consistent levels and structured output; locate the console.log occurrences in qs-room-evento.ts (the register/subscribe handlers within the Lazy closure) and change them to use this.logger (e.g., this.logger.info or this.logger.debug) and include the same context fields (tid and db) as structured metadata instead of concatenated strings.core/quick-silver/quick-silver.ts (1)
267-270: Prefer `this.logger` over `console.log` for indexing failures
🛠️ Proposed fix

```diff
-      console.log("Failed to index doc with id:", id, "error:", rIdx.Err());
+      this.logger.Error().Str("id", id).Err(rIdx.Err()).Msg("Failed to index doc");
```
Verify each finding against the current code and only fix it if needed. In `@core/quick-silver/quick-silver.ts` around lines 267 - 270, The code is using console.log for indexing failures (rIdx.isErr()) which bypasses the class logger; replace the console.log call with this.logger.error and include the id and rIdx.Err() (or rIdx.Err().message if needed) so failures are logged consistently; update the block around rIdx.isErr() in quick-silver.ts (the place referencing rIdx, id and Err()) to call this.logger.error with a clear message and the error details instead of console.log.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@cloud/3rd-party/src/App.tsx`:
- Around line 9-11: The QuickSilver constructor requires a QuickSilverOpts with
a required sthis: SuperThis, but databaseFactory currently constructs new
QuickSilver({ name }) causing TS2741; update the databaseFactory implementation
to provide a valid sthis when constructing QuickSilver (e.g., obtain or create
the SuperThis instance from the surrounding context or dependency injection and
pass new QuickSilver({ name, sthis })) so QuickSilverOpts is satisfied; ensure
the symbol names QuickSilver, QuickSilverOpts, sthis and databaseFactory are
used to locate and update the code.
In `@cloud/quick-silver/api/cli.ts`:
- Line 65: The map callbacks are implicitly typed causing TS errors; explicitly
annotate the parameters: for the line creating ops use args.keys.map((key:
string) => ({ key })) so key is string and the result matches QSGet[], and for
the other map use (pair: [string, unknown]) or destructure ( [k, v]: [string,
unknown] ) in that callback so the pair parameter is explicitly typed (replace
unknown with a more specific type if available in scope).
- Around line 56-61: The multioption declarations use the wrong element type:
change the type argument for multioption in the keys and pkg options so they
produce string[] by importing array from cmd-ts and using array(string) instead
of string; locate the multioption calls for keys and pkg and replace their type
parameter with array(string) (ensure you add the import for array from
"cmd-ts").
In `@cloud/quick-silver/api/qs-api.ts`:
- Around line 84-91: The withTimeout function currently calls
timeouted(...).then(...) without a .catch() and may pass undefined into new
Error via result.error?.message; update withTimeout to append a .catch() to
handle rejections from timeouted (ensure you call unreg() and writer.abort with
an Error created from the caught error) and in the .then branch guard the error
message by using a fallback string (e.g. result.error?.message ||
String(result.error) || "unknown error") when calling writer.abort; reference
the withTimeout method, timeouted call, isTimeout/isError checks, unreg(), and
writer.abort(...) when making these changes.
- Around line 52-53: The declared return type Promise<QSApiImpl> is incorrect
because connections.get(opts.url).once(...) returns QSApiImpl synchronously;
change QSApi to be async and use an async factory so the cache stores a settled
Promise: update the function signature for QSApi to async and call
connections.get(opts.url).once(async () => new QSApiImpl(opts)), then await and
return the result from once (i.e., return await
connections.get(opts.url).once(...)) so the implementation, QSApi, and its use
of QSApiImpl align with Promise<QSApiImpl>.
In `@cloud/quick-silver/cf/qs-db-evento.ts`:
- Around line 79-81: The unbounded SQL query using sql.exec that creates the
cursor (SELECT id, cid, data, synced FROM docs) can exhaust Durable Object
memory; update the query in the code that declares cursor to include a LIMIT
(and optional OFFSET) so results are bounded (e.g., use LIMIT ? with a parameter
or a constant page size) and wire the limit parameter into the sql.exec call;
also ensure any callers (related to QSQuery handling) can pass/receive
pagination parameters to iterate pages later.
In `@cloud/quick-silver/cf/qs-room-evento.ts`:
- Around line 69-74: The unregister handler currently performs
room(ctx).unregisterSubscription(clientWs(ctx), req.tid) and returns
Result.Ok(EventoResult.Continue) without sending any acknowledgement to the
client; update the handler for QSReqUnregisterSubscribe to send a
QSResUnregisterSubscribe acknowledgement back to the client (using the same
clientWs(ctx) send mechanism used by the register handler) that includes tid and
a success/failure status, or if silence is intentional add a clarifying comment
above the handler stating it is fire-and-forget.
- Around line 95-120: The wildcard handler (Evento with hash "qs-room-unknown")
and the error handler (hash "qs-room-error") currently hardcode tid: "unknown"
and arg: 0; change both to extract tid and arg from ctx.enRequest when present
(e.g. use ctx.enRequest?.tid ?? "unknown" and ctx.enRequest?.arg ?? 0) and
include those values in the QSResErr sent via ctx.send.send so clients can
correlate responses to pending requests while still falling back to the original
defaults.
In `@cloud/quick-silver/cf/qs-send-provider.ts`:
- Around line 16-19: The send method (async send<T>) directly calls
this.ws.send(...) which can throw (e.g., DOMException when socket is
CLOSED/CLOSING), breaking the Promise<Result<T>> contract; wrap the
this.ws.send(...) and encoding call in a try/catch inside send, catch any error
and return a Result.Err containing the error (or a mapped error object/message)
instead of letting the exception propagate, and on success return Result.Ok(res
as T) as before; reference the send method, this.ws.send, Result.Ok, QSOpRes and
this.sthis.ende.cbor.encodeToUint8 to locate and update the code.
In `@cloud/quick-silver/types/index.ts`:
- Around line 35-37: QSQuery is currently defined as an empty type so the
generated type guard (isQSQuery) will match any plain object; update the QSQuery
definition to include a discriminant literal field (e.g., add type:
literal('"QSReqQuery"') or type: '"QSReqQuery"') so isQSQuery narrows correctly
and can be used reliably in unions—modify the QSQuery declaration (and any
related constructors/parsers) to include that literal `type` field and adjust
any callers that construct QSQuery values to set type: "QSReqQuery".
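A sketch of why the discriminant matters (type and guard names here are illustrative, not the project's generated ones):

```typescript
// Sketch: an empty object type makes a generated guard match any plain
// object; a literal discriminant field lets the guard actually narrow.
interface QSQueryLike {
  readonly type: "QSReqQuery";
}

function isQSQueryLike(x: unknown): x is QSQueryLike {
  return typeof x === "object" && x !== null && (x as { type?: unknown }).type === "QSReqQuery";
}
```

With the literal field in place, the guard distinguishes `QSQueryLike` from sibling request shapes in a union instead of accepting every object.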
In `@core/quick-silver/cid-storage/test-noble.html`:
- Line 36: The button ID run500 is misleading because the click handler invokes
runTest(50) (the 50 MB test); either rename the button ID to run50 and update
any selectors that reference run500, or change the handler/label to runTest(500)
and update the visible text so the ID, label and invoked runTest size all match;
search for the HTML button id "run500" and the runTest(...) call to make the
change consistently.
In `@core/quick-silver/envelope.ts`:
- Around line 55-62: QSIdxValueMeta.payload currently omits the shared storage
metadata added by PayloadBase (unlike QSFileMeta, QSDeviceMeta, QSDocMeta), so
update QSIdxValueMeta to either include .and(PayloadBase) on its payload type to
add idxName/cid/url/created for consistency, or if omission is intentional, add
a clear inline comment above QSIdxValueMeta explaining why emit values
intentionally do not carry PayloadBase metadata; reference QSIdxValueMeta and
PayloadBase (and the other meta types QSFileMeta, QSDeviceMeta, QSDocMeta) when
making the change.
In `@core/quick-silver/idx-service/service.ts`:
- Around line 144-158: In destroyDb, the Result from this.query is logged on
error but execution continues to call r.Ok() which will throw; change the
control flow in destroyDb so you bail out immediately when r.isErr() (either
throw a descriptive Error including r.Err() or return a rejected Promise) and
only call consumeStream(r.Ok(), ...) when r.isOk(); update the block around r,
consumeStream, and the subsequent deleteFromIdx loop to be guarded by r.isOk()
to avoid calling r.Ok() on an error.
In `@core/quick-silver/package.json`:
- Line 31: The gptdoc string uses the wrong package and API: change the import
to come from '@fireproof/core-quick-silver' instead of 'use-fireproof' and
update the referenced symbol "fireproof" and subsequent usage (db creation,
db.put/get/del/subscribe/query) to match this package's actual exported API
(replace the symbol "fireproof" with the real export name from
`@fireproof/core-quick-silver` and adjust method calls to the package's
client/factory accordingly, keeping the rest of the example semantics).
In `@core/quick-silver/quick-silver.ts`:
- Around line 493-510: The emitted row currently always uses value: {} as R
which discards any emit() payload; change the emitted value to use the stored
emit payload from the index (idxValue.payload.emitValue) instead, e.g. set
value: idxValue.payload.emitValue as R (with a fallback to {} as R if emitValue
is undefined) inside the loop that builds emitted rows; keep the existing guards
(the isQSIdxValueMeta check and the type.errors skip) and only replace the value
assignment so emitted receives the actual emitted data.
- Around line 234-236: The error construction iterates over all entries and
calls r.Err().message even for Ok results causing a TypeError; update the throw
to only extract messages from error results by filtering docAndFiles for
r.isErr() and then mapping to the error message (e.g., use r.Err() or
unwrapErr() only on the filtered items), so the thrown Error uses only actual
error messages and avoids calling .Err() on Ok values.
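The filter-before-unwrap pattern can be sketched against a minimal Result-like interface (`Res` is a stand-in for the project's type, not its real shape):

```typescript
// Sketch: only call .Err() on results that actually failed when building
// an aggregate error message.
interface Res<T> {
  isErr(): boolean;
  Err(): Error;
  Ok(): T;
}

function errMessages<T>(results: Res<T>[]): string[] {
  return results.filter((r) => r.isErr()).map((r) => r.Err().message);
}
```

Mapping `.Err().message` over the unfiltered array is what triggers the TypeError, since `.Err()` on an Ok value has no error to return.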
- Around line 128-146: The returned placeholder for a missing file is a plain
object ({ found: false }) which breaks Result-based checks later; in the
Promise.map that calls CIDStorageService().get(m.payload.url) (inside the array
assigned to cids) replace the plain object return with a Result.Ok wrapper
(i.e., return Result.Ok({ found: false })) so callers like the loop that calls
cids.shift() and then cidItem.isErr() / r.found work without TypeError; update
the branch that handles !getFile.found in that anonymous async map to return
Result.Ok({ meta: m, found: false }) or Result.Ok({ found: false }) consistent
with other successful Result.Ok shapes.
- Around line 441-462: The select filter functions (select, IdxEntry handling,
compareKeys usage) expect keys to be JSON-encoded (they use JSON.stringify and
JSON.parse) but the index-build path stores keys with String(key), causing
mismatches and parse errors; fix by changing the index-write logic in both the
field-based index builder and the shared key storage used by map-function path
to store keys as JSON.stringify(key) instead of String(key) so that stored keys
are valid JSON tokens and the select functions (opts.key, opts.keys, opts.range,
opts.prefix) can reliably JSON.parse/compare them.
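The encoding mismatch is easy to demonstrate in isolation (helper names here are hypothetical): `String("alice")` yields `alice`, which is not a valid JSON token, while `JSON.stringify("alice")` yields `"alice"`, which round-trips through `JSON.parse`.

```typescript
// Sketch: store index keys as JSON so the read path can JSON.parse them back.
function encodeIdxKey(key: unknown): string {
  return JSON.stringify(key);
}

function decodeIdxKey(stored: string): unknown {
  return JSON.parse(stored);
}
```

Numbers happen to survive `String()` (e.g. `"42"` parses fine), which is why the bug only bites once string keys appear.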
---
Outside diff comments:
In `@cloud/3rd-party/src/App.tsx`:
- Around line 44-55: The UI currently dereferences properties on a
possibly-detached attach (created via useFireproof/databaseFactory) causing
runtime errors; update the render and click handlers to only access
attach.ctx.tokenAndClaims and attach.error when attach.state indicates those
exist (e.g., guard with attach.state === "ready" before reading
attach.ctx.tokenAndClaims.tokenAndClaims.token and before calling
attach.ctx.tokenAndClaims.reset(), and check attach.state === "error" and
attach.error != null before reading attach.error.message), or remove the entire
cloud-attach card if you intend to show only the local QuickSilver path
(references: attach, useFireproof, createAttach, attach.ctx.tokenAndClaims,
attach.error.message).
---
Duplicate comments:
In `@cloud/quick-silver/api/qs-api.ts`:
- Around line 290-298: The close() method currently closes the websocket but
does not remove the corresponding QSApiImpl from the connections cache, so
subsequent QSApi(opts) calls can return a closed instance; update close() (on
QSApiImpl) to also evict the instance from the shared connections cache (the
same map used by QSApi(opts)) after/while closing the ws, ensuring eviction
happens even if close fails (i.e., in both then and catch/finally paths) so the
cache no longer holds the dead QSApiImpl.
- Around line 50-54: The cache key currently uses only opts.url in QSApi
(connections.get(opts.url).once(...)) which causes incorrect reuse when callers
differ by opts.db or opts.auth; change the key to incorporate the distinguishing
fields (e.g., a stable stringified key built from opts.url, opts.db and
opts.auth or a small helper like makeQsApiKey(opts)) and use that key with
KeyedResolvOnce in QSApi, ensuring the serialization handles undefined values
consistently so different option combinations produce distinct cache entries for
QSApiImpl.
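A sketch of the suggested key builder (interface and field names are illustrative, not the project's exact option shape):

```typescript
// Sketch: serialize every distinguishing option so differently-configured
// connections get distinct cache entries under KeyedResolvOnce-style caching.
interface ApiOpts {
  url: string;
  db?: string;
  auth?: string;
}

function makeApiKey(o: ApiOpts): string {
  // a fixed-order array with explicit defaults keeps undefined handling consistent
  return JSON.stringify([o.url, o.db ?? "", o.auth ?? ""]);
}
```

Keying on `url` alone collapses all of these onto one cached instance; including `db` and `auth` keeps them separate.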
In `@cloud/quick-silver/cf/qs-db-evento.ts`:
- Around line 96-121: The wildcard and error handlers (EventoType.WildCard
handler with hash "qs-unknown" and EventoType.Error handler with hash
"qs-error") are hardcoding tid and arg so clients can't correlate responses;
update both handlers to populate tid and arg from the original request (use
ctx.enRequest?.tid ?? "unknown" and ctx.enRequest?.arg ?? 0 or equivalent) when
building the QSResErr payload, preserving the existing error text and satisfying
the QSResErr type.
- Around line 32-38: The PUT handler is inserting the wrong value for the cid
column (using req.key for both id and cid); update the sql.exec call so the
second parameter passed for the cid placeholder uses the content identifier
(req.cid or whichever field holds the CID) instead of req.key, keeping id as
req.key and data as req.data so QSResQueryRow will reflect distinct id and cid
values.
In `@cloud/quick-silver/cf/qs-db-store.ts`:
- Around line 47-49: The webSocketClose method in QSDBStore should normalize the
reserved close code 1006 to a safe sendable code (e.g., 1000) before calling
ws.close to avoid throwing on abnormal drops; update webSocketClose(ws:
WebSocket, code: number, reason: string) to check if code === 1006 and replace
it with 1000 (and ensure reason is a string/default if needed) mirroring the
QSRoom mapping behavior so ws.close is never called with 1006.
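The mapping can be sketched as follows (per RFC 6455, 1006 signals abnormal closure and is reserved — it must never be passed to `close()`; the helper name is illustrative):

```typescript
// Sketch: normalize the reserved abnormal-closure code before forwarding
// a close to another socket, so ws.close() never throws on it.
function safeCloseCode(code: number): number {
  return code === 1006 ? 1000 : code; // 1000 = normal closure
}
```

Application-defined codes (4000-4999) and other standard codes pass through unchanged.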
In `@cloud/quick-silver/cf/qs-room.ts`:
- Line 20: The LRUMap instantiation for stores currently leaks WebSocket
connections on eviction; register an onDelete callback on the stores LRUMap
(right after the new LRUMap<string, WebSocket>({ maxEntries: 64 }) assignment)
that receives the evicted WebSocket and closes it (e.g., check the socket is not
already closed and call ws.close() or ws.terminate()), and also remove any
attached listeners if applicable; implement this in the qs-room class where the
private readonly stores field is defined so evicted entries are properly cleaned
up.
- Around line 79-101: getStoreWs currently caches a single store WebSocket but
its message handler closes over the first call's clientWs, causing responses for
subsequent clients to be mis-routed; update getStoreWs (and the handler created
inside the stores.getSet callback) to maintain a per-db routing map from
tid→clientWs (e.g., a Map<string,WebSocket> stored alongside the cached
storeWs), have the handler decode incoming messages, extract the tid (for
non-subscribe messages) and forward the payload to the mapped clientWs instead
of the closed-over clientWs, and ensure you update/cleanup the map when clients
connect/disconnect (and keep existing subscribe handling via
dispatchSubscribeEvent and isQSEvtSubscribe).
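The per-tid routing map described above can be sketched independently of WebSockets (the `TidRouter` class and `Routed` shape are illustrative, not the project's types): one shared upstream connection serves many clients, and each response is delivered to whichever client issued the matching `tid`.

```typescript
// Route responses from a single shared upstream back to the requesting client.
interface Routed {
  tid: string;
  payload: unknown;
}

class TidRouter {
  private readonly byTid = new Map<string, (msg: Routed) => void>();

  register(tid: string, deliver: (msg: Routed) => void): void {
    this.byTid.set(tid, deliver);
  }

  // Called from the single upstream message handler.
  dispatch(msg: Routed): boolean {
    const deliver = this.byTid.get(msg.tid);
    if (!deliver) return false; // unknown tid: drop or log
    this.byTid.delete(msg.tid); // one response per request
    deliver(msg);
    return true;
  }

  // Cleanup when a client disconnects.
  dropClient(tids: Iterable<string>): void {
    for (const tid of tids) this.byTid.delete(tid);
  }
}
```

In the real handler, `deliver` would forward the decoded payload to the mapped client socket rather than the socket captured by the first closure.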
- Around line 104-121: The webSocketClose function is closing every cached store
WebSocket and clearing this.stores when any client disconnects; change it to
only close store connections owned by the disconnecting client. Update the data
model so you track store sockets per client (e.g., add a client->Set<WebSocket>
map or convert this.stores to map storeId->Set<clientWs>), then in
webSocketClose(ws, ...) iterate only the set for that ws (closing each storeWs
and removing it from shared sets/counts) and delete that client entry instead of
this.stores.clear(); ensure you do not close a storeWs still referenced by other
clients by using per-client sets or reference counts before calling
storeWs.close.
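The reference-counting alternative mentioned at the end can be sketched as a generic pool (the `RefCounted` class is a hypothetical helper, not existing code): a shared store socket is only disposed once the last client using it releases it.

```typescript
// Reference-counted shared resources: dispose() runs only when the last
// holder releases its reference, so a socket shared by several clients is
// not torn down when just one of them disconnects.
class RefCounted<T> {
  private readonly counts = new Map<string, { value: T; refs: number }>();
  constructor(private readonly dispose: (value: T) => void) {}

  acquire(key: string, create: () => T): T {
    let e = this.counts.get(key);
    if (!e) {
      e = { value: create(), refs: 0 };
      this.counts.set(key, e);
    }
    e.refs++;
    return e.value;
  }

  release(key: string): void {
    const e = this.counts.get(key);
    if (!e) return;
    if (--e.refs === 0) {
      this.counts.delete(key);
      this.dispose(e.value); // e.g. storeWs.close()
    }
  }
}
```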
In `@core/gateways/indexeddb/gateway-impl.ts`:
- Line 170: Remove the leftover debug console.log call by deleting the line
console.log(">>>>>>", url.toString()); in the gateway implementation so only the
structured logger remains; locate the print in the same block where the
structured log that references url.toString() is invoked (gateway-impl.ts, near
the code handling URL construction/usage) and ensure no other console.log debug
prints remain in that function or surrounding methods.
In `@core/quick-silver/cid-storage/test-noble.html`:
- Around line 141-143: The page output is garbled because log("%c✓ CIDs match",
"color:green") uses a console-only "%c" formatting token but log() appends to
textContent; replace that call with a console logging call or remove the "%c"
styling so the on-page function isn't given console-format tokens—e.g., change
the log(...) invocation to console.log("%c✓ CIDs match", "color:green") or to
log("✓ CIDs match") to avoid the duplicated/garbled output while keeping the
existing DOM update that appends the <span class="ok"> line.
In `@core/quick-silver/fireproof.test.ts`:
- Around line 9-13: The test creates a singleton via fireproof("db") but doesn't
remove it from the internal singleton map, leaving the "db" entry in
databasesByName across tests; add an afterEach cleanup that calls destroy() (or
the exported teardown method used to remove entries) for the created instance or
explicitly remove the "db" entry from databasesByName after the test. Locate the
test using the fireproof(...) call and ensure afterEach calls the
instance.destroy() (or appropriate cleanup function) to clear databasesByName so
the singleton won't persist between tests.
- Around line 331-341: The test creates a singleton via fireproof("close-test")
and registers an onClosed listener but never ensures the singleton is unique or
cleaned up, causing listener accumulation on re-runs; update the test to
instantiate a fresh DB per run (e.g., give fireproof a unique name/nonce) or
explicitly remove the listener from the instance after the assertion, making
changes around the fireproof("close-test") call and the onClosed
registration/cleanup so the listener does not persist across test runs (refer to
the fireproof(...) call, db.onClosed(...), and db.close()).
In `@core/quick-silver/idx-service/service.ts`:
- Around line 189-194: The ReadableStream start(ctrl) assumes await
collection.each(...) never rejects, so if Dexie iteration throws the stream is
left open; wrap the await collection.each(...) call in a try/catch inside the
start(ctrl) function and on error call ctrl.error(err) (or at minimum
ctrl.close() after signaling) so the stream consumer is not left hanging; update
the ReadableStream creation around the start(ctrl) block that calls
collection.each to ensure any thrown errors are caught and passed to ctrl.error.
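The try/catch-into-`ctrl.error` pattern can be shown in isolation. Here `iterate` is a stand-in for Dexie's `collection.each` (the function name `rowsToStream` is illustrative):

```typescript
// Propagate iteration errors into the stream instead of leaving the
// consumer's read() hanging forever on an open stream.
function rowsToStream<T>(iterate: (emit: (row: T) => void) => Promise<void>): ReadableStream<T> {
  return new ReadableStream<T>({
    async start(ctrl) {
      try {
        await iterate((row) => ctrl.enqueue(row));
        ctrl.close();
      } catch (err) {
        ctrl.error(err); // consumer's read() now rejects instead of hanging
      }
    },
  });
}
```

Without the catch, a rejection from `iterate` would leave the stream neither closed nor errored, and any pending `read()` would never settle.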
In `@core/quick-silver/quick-silver.ts`:
- Around line 179-182: The put() method can return { id: undefined } when bulk()
writes no documents (writtenDocs empty), violating DocResponse.id: string;
update put() to detect when bulk() returns no ids (e.g., ids.length === 0 or
!ids[0]) and throw or reject with a descriptive error instead of returning
undefined. Locate put(), bulk(), and the path where IdxService().addToIdx may
skip docs and ensure either bulk() surfaces a clear error when no docs are
written or put() validates the returned ids and throws a meaningful Error
(including context like doc.name or operation) so consumers never receive an
undefined id.
- Around line 341-350: The one-item probe uses getReader() and read() but never
cancels the stream, leaving the underlying cursor/transaction open; modify the
block around IdxService().query(...).getReader() so that after reading you call
and await checkReader.cancel() (always or when !done) to close the stream, and
still release the lock in the finally block (i.e., await checkReader.cancel()
before checkReader.releaseLock()); keep existing error handling around rCheck
and ensure cancel() is awaited to close the underlying cursor.
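The probe-then-cancel pattern can be sketched as a small helper (`hasAtLeastOne` is a hypothetical name): read one item, then `cancel()` so the underlying source, such as an IndexedDB cursor, is released even though more items remain.

```typescript
// Probe a stream for at least one item, then cancel so the underlying
// source (e.g. a database cursor) is closed promptly instead of leaking.
async function hasAtLeastOne<T>(stream: ReadableStream<T>): Promise<boolean> {
  const reader = stream.getReader();
  try {
    const { done } = await reader.read();
    await reader.cancel(); // closes the source even if more items remain
    return !done;
  } finally {
    reader.releaseLock();
  }
}
```

Awaiting `cancel()` before `releaseLock()` mirrors the ordering the review asks for.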
- Around line 341-438: The rebuild logic incorrectly runs only when hasEntries
is false, so secondary indexes never reflect later writes/deletes; change the
flow in QuickSilver (around IdxService().query / rCheck, hasEntries and the if
(!hasEntries) block) to always (re)compute or update the index before returning
query results—either remove the hasEntries guard so the primaryReader loop that
calls this.get, findQSDocMeta and IdxService().addToIdx always executes, or
replace it with a staleness check that forces rebuilding; ensure
primaryReader.releaseLock() and checkReader.releaseLock() still run in finally
blocks and keep the existing addToIdx call to write MetaEntry entries.
In `@core/runtime/utils.ts`:
- Line 618: The function signature for hashBlobAsync uses the
misspelled/undefined type ToUint8; replace that with the correctly imported
ToUInt8 from `@fireproof/core-types-base` so TypeScript compiles. Update the
hashBlobAsync declaration and any occurrences/overloads that reference ToUint8
to use ToUInt8 (identify the function by name hashBlobAsync) and run a
type-check to ensure there are no other references to the incorrect symbol.
In `@use-fireproof/base/react/use-fireproof.ts`:
- Around line 42-45: The memoizations for useDocument, useLiveQuery, useAllDocs,
and useChanges are keyed on database.name so they won't update if a new Database
instance with the same name is passed; update the dependency arrays to use the
database reference itself (i.e., change the dependencies for createUseDocument,
createUseLiveQuery, createUseAllDocs, createUseChanges from [database.name,
strConfig] to [database, strConfig]) so the useMemo recomputes when the Database
object changes.
---
Nitpick comments:
In `@cloud/quick-silver/cf/qs-db-store.ts`:
- Around line 39-43: Replace the two console.log calls in QSDBStore with the
structured logger on the instance: use this.sthis.logger.warn (or error) to log
the rejected string message and include contextual fields (tid/arg) when calling
sendProvider.send, and use this.sthis.logger.debug or info to log the
"dispatching binary message" line including the byte length from (msg as
ArrayBuffer).byteLength; remove the console.log lines and ensure messages follow
the existing logger usage pattern on sthis.logger for consistency.
In `@cloud/quick-silver/cf/qs-room-evento.ts`:
- Line 52: Replace the three plain console.log calls with the structured logger
available on the Lazy scope (use this.logger) so logs are emitted with
consistent levels and structured output; locate the console.log occurrences in
qs-room-evento.ts (the register/subscribe handlers within the Lazy closure) and
change them to use this.logger (e.g., this.logger.info or this.logger.debug) and
include the same context fields (tid and db) as structured metadata instead of
concatenated strings.
In `@core/quick-silver/cid-storage/dexie.ts`:
- Around line 32-39: The variable `size` is redundant because `total` already
equals `data.byteLength`; remove the `size` alias and replace any subsequent
uses of `size` with `total` (or directly use `data.byteLength`) in the functions
handling chunk assembly (search for `total`, `data`, `size`, and `chunks` in
this module), ensuring no references to `size` remain and tests/typing still
pass.
- Around line 14-78: DexieStorageBackendImpl keeps the Dexie connection open;
add a public close() method that calls this.dexie.close() to release the
IndexedDB connection, update the StorageBackend interface (and any
callers/tests) to include/expect close(): void (or Promise<void> if you prefer
async), and ensure any pending resources (e.g., commitSeq) are resolved/cleaned
before/after closing; implement close inside DexieStorageBackendImpl to call
this.dexie.close() and perform any necessary cleanup so tests can reliably
reopen a same-named DB.
In `@core/quick-silver/cid-storage/test-noble.html`:
- Around line 40-43: The CDN imports for sha256, CID, raw, and createDigest are
unpinned and risk breakage; update each esm.sh URL to include explicit package
versions (e.g., `@noble/hashes`@<version>/sha2, multiformats@<version>/cid,
multiformats@<version>/codecs/raw, multiformats@<version>/hashes/digest) so
imports for symbols sha256, CID, raw, and create (createDigest) reference fixed
releases; pick the current tested exact versions (not floating ranges) and
replace the four import URLs accordingly.
In `@core/quick-silver/envelope.ts`:
- Around line 66-68: The guard function is misnamed: change the exported
function name isQSEmitIdxValueMeta to isQSIdxValueMeta so it matches the
QSIdxValueMeta type and the naming pattern of sibling guards (isQSFileMeta,
isQSDeviceMeta, isQSDocMeta); update the function declaration and any exports,
and remove the aliasing import that currently maps isQSEmitIdxValueMeta as
isQSIdxValueMeta (e.g. in core/quick-silver/quick-silver.ts) so all
usages/imports refer directly to isQSIdxValueMeta.
In `@core/quick-silver/fireproof.test.ts`:
- Around line 323-328: The test contains an unnecessary explicit deletion call
db.del("no-score") at the end of the "skips docs without the queried field"
it-block; remove that line so the test relies on the existing afterEach hook
(which calls destroy()) to clean up the DB, keeping the it test focused and
noise-free.
In `@core/quick-silver/idx-service/primary-key-strategy.ts`:
- Around line 50-60: The getDocFileMeta function currently uses Array.reduce to
collect entries that satisfy isQSDocMeta or isQSFileMeta; replace that reduce
with a direct Array.filter on the meta parameter (keeping the default param
MetaEntry[] = []) and return type (QSDocMeta | QSFileMeta)[] so the function
simply returns meta.filter(m => isQSDocMeta(m) || isQSFileMeta(m)); this
preserves behavior but simplifies the implementation and improves readability.
In `@core/quick-silver/idx-service/service.ts`:
- Around line 112-117: The put helper in the transaction (the put: (entry:
IdxEntry) => dtx.idx.put(entry) ...) contains a redundant .then(() => undefined)
before .then(() => Result.Ok(undefined)); remove the intermediate .then(() =>
undefined) so the promise chain becomes dtx.idx.put(entry).then(() =>
Result.Ok(undefined)).catch((e) => Result.Err(e as Error)) to eliminate the
no-op and keep behavior intact.
In `@core/quick-silver/package.json`:
- Around line 40-44: The package.json has inconsistent workspace specifiers for
dependencies: change "@fireproof/core-gateways-indexeddb",
"@fireproof/core-types-base", "@fireproof/core-runtime", and
"@fireproof/core-device-id" to use a single convention (either all "workspace:*"
or all "workspace:0.0.0") so they are consistent; update the four dependency
entries in core/quick-silver/package.json (the listed package keys) to the
chosen specifier and commit that change.
In `@core/quick-silver/quick-silver.ts`:
- Around line 267-270: The code is using console.log for indexing failures
(rIdx.isErr()) which bypasses the class logger; replace the console.log call
with this.logger.error and include the id and rIdx.Err() (or rIdx.Err().message
if needed) so failures are logged consistently; update the block around
rIdx.isErr() in quick-silver.ts (the place referencing rIdx, id and Err()) to
call this.logger.error with a clear message and the error details instead of
console.log.
```tsx
      databaseFactory: (name) => {
        return new QuickSilver({ name });
      },
```
QuickSilver constructor requires sthis — build-breaking pipeline failure.
QuickSilverOpts has sthis: SuperThis as a required field, but the factory only provides { name }. The build fails with TS2741.
🐛 Proposed fix

```diff
+import { ensureSuperThis } from "@fireproof/core-runtime";
 import { QuickSilver } from "../../../core/quick-silver/quick-silver.js";

+const sthis = ensureSuperThis();
+
 function App() {
   const { database, attach } = useFireproof("fireproof-5-party", {
     databaseFactory: (name) => {
-      return new QuickSilver({ name });
+      return new QuickSilver({ name, sthis });
     },
```
🧰 Tools
🪛 GitHub Actions: `@fireproof/core`
[error] 10-10: TypeScript error TS2741: Property 'sthis' is missing in type '{ name: string; }' but required in type 'QuickSilverOpts'.
🪛 GitHub Check: CI Core
[failure] 10-10:
Property 'sthis' is missing in type '{ name: string; }' but required in type 'QuickSilverOpts'.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@cloud/3rd-party/src/App.tsx` around lines 9 - 11, The QuickSilver constructor
requires a QuickSilverOpts with a required sthis: SuperThis, but databaseFactory
currently constructs new QuickSilver({ name }) causing TS2741; update the
databaseFactory implementation to provide a valid sthis when constructing
QuickSilver (e.g., obtain or create the SuperThis instance from the surrounding
context or dependency injection and pass new QuickSilver({ name, sthis })) so
QuickSilverOpts is satisfied; ensure the symbol names QuickSilver,
QuickSilverOpts, sthis and databaseFactory are used to locate and update the
code.
```ts
  },
  handler: async (args) => {
    const api = await makeApi(args);
    const ops: QSGet[] = args.keys.map((key) => ({ key }));
```
Implicit any in .map() callbacks — two static analysis failures.
key (line 65) and pair (line 88) are implicitly typed as any because TypeScript cannot infer them in strict mode. Add explicit annotations.
🐛 Proposed fix

```diff
-      const ops: QSGet[] = args.keys.map((key) => ({ key }));
+      const ops: QSGet[] = args.keys.map((key: string) => ({ key }));
```

```diff
-      const ops: QSPut[] = args.pkg.map((pair) => {
+      const ops: QSPut[] = args.pkg.map((pair: string) => {
```
🧰 Tools
🪛 GitHub Check: CI Core
[failure] 65-65:
Parameter 'key' implicitly has an 'any' type.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@cloud/quick-silver/api/cli.ts` at line 65, The map callbacks are implicitly
typed causing TS errors; explicitly annotate the parameters: for the line
creating ops use args.keys.map((key: string) => ({ key })) so key is string and
the result matches QSGet[], and for the other map use (pair: [string, unknown])
or destructure ( [k, v]: [string, unknown] ) in that callback so the pair
parameter is explicitly typed (replace unknown with a more specific type if
available in scope).
cloud/quick-silver/api/qs-api.ts
Outdated
```ts
export function QSApi(opts: QSApiOpts): Promise<QSApiImpl> {
  return connections.get(opts.url).once(() => new QSApiImpl(opts));
```
QSApi return type is Promise<QSApiImpl> but the factory is synchronous — compilation failure.
KeyedResolvOnce.once(() => new QSApiImpl(opts)) returns QSApiImpl directly (not a Promise<QSApiImpl>), so the declared return type is wrong. The static analysis failure confirms this. Making the factory async aligns the types and ensures the cache operates over a settled Promise.
🐛 Proposed fix

```diff
 export function QSApi(opts: QSApiOpts): Promise<QSApiImpl> {
-  return connections.get(opts.url).once(() => new QSApiImpl(opts));
+  return connections.get(opts.url).once(async () => new QSApiImpl(opts));
 }
```
🧰 Tools
🪛 GitHub Check: CI Core
[failure] 53-53:
Type 'QSApiImpl' is missing the following properties from type 'Promise': then, catch, [Symbol.toStringTag], finally
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@cloud/quick-silver/api/qs-api.ts` around lines 52 - 53, The declared return
type Promise<QSApiImpl> is incorrect because connections.get(opts.url).once(...)
returns QSApiImpl synchronously; change QSApi to be async and use an async
factory so the cache stores a settled Promise: update the function signature for
QSApi to async and call connections.get(opts.url).once(async () => new
QSApiImpl(opts)), then await and return the result from once (i.e., return await
connections.get(opts.url).once(...)) so the implementation, QSApi, and its use
of QSApiImpl align with Promise<QSApiImpl>.
```ts
  private withTimeout(done: Future<void>, unreg: () => void, writer: WritableStreamDefaultWriter<unknown>): void {
    timeouted(done.asPromise(), { timeout: this.opts.timeoutMs ?? DEFAULT_TIMEOUT_MS }).then((result) => {
      if (isTimeout(result) || isError(result)) {
        unreg();
        writer.abort(new Error(isTimeout(result) ? "request timeout" : result.error?.message));
      }
    });
  }
```
`withTimeout` is missing `.catch()`, and `result.error?.message` may be undefined.
Two issues:

1. `timeouted(...).then(...)` has no `.catch()`. If `timeouted` itself rejects, it becomes an unhandled rejection.
2. `result.error?.message` can evaluate to `undefined`, passing `undefined` as the `Error` message string.

🛡️ Proposed fix

```diff
   private withTimeout(done: Future<void>, unreg: () => void, writer: WritableStreamDefaultWriter<unknown>): void {
     timeouted(done.asPromise(), { timeout: this.opts.timeoutMs ?? DEFAULT_TIMEOUT_MS }).then((result) => {
       if (isTimeout(result) || isError(result)) {
         unreg();
-        writer.abort(new Error(isTimeout(result) ? "request timeout" : result.error?.message));
+        writer.abort(new Error(isTimeout(result) ? "request timeout" : (result.error?.message ?? "unknown error")));
       }
-    });
+    }).catch(() => { /* ignore — done already settled */ });
   }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@cloud/quick-silver/api/qs-api.ts` around lines 84 - 91, The withTimeout
function currently calls timeouted(...).then(...) without a .catch() and may
pass undefined into new Error via result.error?.message; update withTimeout to
append a .catch() to handle rejections from timeouted (ensure you call unreg()
and writer.abort with an Error created from the caught error) and in the .then
branch guard the error message by using a fallback string (e.g.
result.error?.message || String(result.error) || "unknown error") when calling
writer.abort; reference the withTimeout method, timeouted call,
isTimeout/isError checks, unreg(), and writer.abort(...) when making these
changes.
| "author": "J Chris Anderson", | ||
| "license": "AFL-2.0", | ||
| "homepage": "https://use-fireproof.com", | ||
| "gptdoc": "import { fireproof } from 'use-fireproof'; const db = fireproof('app-db-name'); const ok = await db.put({ anyField: ['any','json'] }); const doc = await db.get(ok.id); await db.del(doc._id); db.subscribe(myRedrawFn); const result = await db.query('anyField', {range : ['a', 'z']}); result.rows.map(({ key }) => key);", |
gptdoc references the wrong package API.
The gptdoc string shows import { fireproof } from 'use-fireproof' and the use-fireproof hook API, but this package is @fireproof/core-quick-silver. Update it to reflect the actual exported API of this package.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@core/quick-silver/package.json` at line 31, The gptdoc string uses the wrong
package and API: change the import to come from '@fireproof/core-quick-silver'
instead of 'use-fireproof' and update the referenced symbol "fireproof" and
subsequent usage (db creation, db.put/get/del/subscribe/query) to match this
package's actual exported API (replace the symbol "fireproof" with the real
export name from `@fireproof/core-quick-silver` and adjust method calls to the
package's client/factory accordingly, keeping the rest of the example
semantics).
```ts
    const cids = await Promise.all([
      (async () => {
        const docMeta = this.findQSDocMeta(item.meta);
        if (!docMeta) return Result.Err(new Error(`no doc meta found for id: ${id}`));
        return CIDStorageService().get(docMeta.payload.url);
      })(),
      ...(item.meta ?? [])
        .filter((m) => isQSFileMeta(m))
        .map((m) =>
          CIDStorageService()
            .get(m.payload.url)
            .then((r) => {
              if (r.isErr()) return r;
              const getFile = r.Ok();
              if (!getFile.found) return { found: false };
              return Result.Ok({ meta: m, found: getFile.found, stream: getFile.stream });
            }),
        ),
    ]);
```
{ found: false } plain object lacks .isErr() — runtime TypeError when a file CID is missing from storage
When !getFile.found, line 142 returns { found: false } — a plain object with no Result methods. After cids.shift() strips the document result, this object ends up in the for loop at line 154, where cidItem.isErr() immediately throws TypeError: cidItem.isErr is not a function. The downstream if (r.found === false) continue guard at line 157 is unreachable.
🛠️ Proposed fix

```diff
-              if (!getFile.found) return { found: false };
+              if (!getFile.found) return Result.Ok({ found: false });
```

`Result.Ok({ found: false })` passes `cidItem.isErr()` correctly (returns `false`), and the already-present check on line 157 (`if (r.found === false) continue`) handles it cleanly.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@core/quick-silver/quick-silver.ts` around lines 128 - 146, The returned
placeholder for a missing file is a plain object ({ found: false }) which breaks
Result-based checks later; in the Promise.map that calls
CIDStorageService().get(m.payload.url) (inside the array assigned to cids)
replace the plain object return with a Result.Ok wrapper (i.e., return
Result.Ok({ found: false })) so callers like the loop that calls cids.shift()
and then cidItem.isErr() / r.found work without TypeError; update the branch
that handles !getFile.found in that anonymous async map to return Result.Ok({
meta: m, found: false }) or Result.Ok({ found: false }) consistent with other
successful Result.Ok shapes.
```ts
    if (docAndFiles.some((r) => r.isErr())) {
      throw new Error(`failed to store some docs/files: ${docAndFiles.map((r) => r.Err().message).join("\n  ")}`);
    }
```
docAndFiles.map((r) => r.Err().message) is called on all items — crashes on Ok results
When some results are errors, the code calls .Err() (and then .message) on every element including successful Ok results, where .Err() returns undefined. This causes a TypeError during error-message construction, masking the original failure with a confusing crash.
🛠️ Proposed fix

```diff
-      throw new Error(`failed to store some docs/files: ${docAndFiles.map((r) => r.Err().message).join("\n  ")}`);
+      throw new Error(`failed to store some docs/files: ${docAndFiles.filter((r) => r.isErr()).map((r) => r.Err().message).join("\n  ")}`);
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@core/quick-silver/quick-silver.ts` around lines 234 - 236, The error
construction iterates over all entries and calls r.Err().message even for Ok
results causing a TypeError; update the throw to only extract messages from
error results by filtering docAndFiles for r.isErr() and then mapping to the
error message (e.g., use r.Err() or unwrapErr() only on the filtered items), so
the thrown Error uses only actual error messages and avoids calling .Err() on Ok
values.
```ts
    const select = (() => {
      if (opts?.key !== undefined) {
        const s = JSON.stringify(opts.key);
        return (e: IdxEntry) => e.keys[0] === s;
      }
      if (opts?.keys && (opts.keys as unknown[]).length > 0) {
        const sset = new Set((opts.keys as unknown[]).map((k) => JSON.stringify(k)));
        return (e: IdxEntry) => sset.has(e.keys[0]);
      }
      if (opts?.range) {
        const [lo, hi] = opts.range as [K, K];
        return (e: IdxEntry) => {
          const k = JSON.parse(e.keys[0]) as K;
          return compareKeys(k, lo) >= 0 && compareKeys(k, hi) <= 0;
        };
      }
      if (opts?.prefix !== undefined) {
        const prefix = String(opts.prefix);
        return (e: IdxEntry) => String(JSON.parse(e.keys[0])).startsWith(prefix);
      }
      return undefined;
    })();
```
Key serialization is inconsistent between index-build and select filters — string-keyed lookups always return empty results
When building the secondary index, values are stored with String(key) (e.g., the string "alice" is stored as bare alice with no JSON encoding). The select filters, however, serialize with JSON.stringify (JSON.stringify("alice") → '"alice"'). These will never match for string-valued fields, silently returning zero results.
Additionally, the range and prefix filters call JSON.parse(e.keys[0]), which throws a SyntaxError when the stored value is a plain string like alice (not valid JSON).
The fix is to use JSON.stringify uniformly when writing keys to the index so that JSON.parse in the filters round-trips correctly:
🛠️ Proposed fix — use `JSON.stringify` for key storage

In the field-based index-build path (line 378):

```diff
-      if (key !== undefined) pending.set(String(key), [{ k: key as IndexKeyType }]);
+      if (key !== undefined) pending.set(JSON.stringify(key), [{ k: key as IndexKeyType }]);
```

In the shared key storage line used by both field and map-function paths (line 430):

```diff
-        keys: Array.isArray(emits[0].k) ? emits[0].k.map(String) : [String(emits[0].k)],
+        keys: [JSON.stringify(emits[0].k)],
```

This makes every stored key a valid JSON token, ensuring `JSON.parse` round-trips cleanly and `JSON.stringify` comparisons in the select filters match correctly.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@core/quick-silver/quick-silver.ts` around lines 441 - 462, The select filter
functions (select, IdxEntry handling, compareKeys usage) expect keys to be
JSON-encoded (they use JSON.stringify and JSON.parse) but the index-build path
stores keys with String(key), causing mismatches and parse errors; fix by
changing the index-write logic in both the field-based index builder and the
shared key storage used by map-function path to store keys as
JSON.stringify(key) instead of String(key) so that stored keys are valid JSON
tokens and the select functions (opts.key, opts.keys, opts.range, opts.prefix)
can reliably JSON.parse/compare them.
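The mismatch is easy to demonstrate directly: `String()` and `JSON.stringify()` disagree for string keys, and only the JSON-encoded form survives a parse/stringify round trip. The snippet below is a self-contained illustration, not project code:

```typescript
// Why String(key) on the write path breaks JSON-based filters on the read path.
const key = "alice";
const storedLoose = String(key); // "alice"   — not a valid JSON token
const storedJson = JSON.stringify(key); // "\"alice\"" — valid JSON

// The select filter compares against JSON.stringify(key):
const filterToken = JSON.stringify("alice");

// A stored key is usable only if it round-trips through JSON.
function roundTrips(stored: string): boolean {
  try {
    return JSON.stringify(JSON.parse(stored)) === stored;
  } catch {
    return false; // JSON.parse("alice") throws a SyntaxError
  }
}
```

For numbers the two encodings happen to coincide (`String(7) === JSON.stringify(7)`), which is why the bug only surfaces for string-valued fields.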
```ts
    const idxValues = entry.meta?.filter((m) => isQSIdxValueMeta(m));
    for (const idxValue of idxValues ?? []) {
      if (idxValue instanceof type.errors) continue;
      if (idxValue.payload.keys.length === 0) continue;
      let key: K;
      if (idxValue.payload.keys.length === 1) {
        key = idxValue.payload.keys[0] as K;
      } else {
        key = idxValue.payload.keys as unknown as K;
      }

      emitted.push({
        id: doc._id,
        key,
        value: {} as R,
        doc,
      });
    }
```
value: {} as R discards the emitted value — map-function results always return an empty object
idxValue.payload.emitValue is stored in the index but never read back into the result row. Any call to emit(key, value) in a map function produces a row with value: {} instead of the actual emitted value, breaking the standard Map/Reduce contract.
🛠️ Proposed fix
- emitted.push({
- id: doc._id,
- key,
- value: {} as R,
- doc,
- });
+ emitted.push({
+ id: doc._id,
+ key,
+ value: (idxValue.payload.emitValue ?? {}) as R,
+ doc,
+ });

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@core/quick-silver/quick-silver.ts` around lines 493 - 510, The emitted row
currently always uses value: {} as R which discards any emit() payload; change
the emitted value to use the stored emit payload from the index
(idxValue.payload.emitValue) instead, e.g. set value: idxValue.payload.emitValue
as R (with a fallback to {} as R if emitValue is undefined) inside the loop that
builds emitted rows; keep the existing guards (the isQSIdxValueMeta check and
the type.errors skip) and only replace the value assignment so emitted receives
the actual emitted data.
Force-pushed from 5aa6885 to c619ab6
Actionable comments posted: 3
♻️ Duplicate comments (13)
cloud/quick-silver/cf/qs-encoder.ts (1)
13-17: ⚠️ Potential issue | 🟠 Major

`encode` silently swallows CBOR decode errors by returning `Result.Ok()`.

When `decodeUint8` fails (line 15), returning `Result.Ok()` causes downstream consumers to receive a successful but empty result, silently losing the message.

Proposed fix — propagate the error

  async encode(args: ArrayBuffer): Promise<Result<unknown>> {
    const decoded = this.sthis.ende.cbor.decodeUint8<unknown>(new Uint8Array(args));
-   if (decoded.isErr()) return Result.Ok();
+   if (decoded.isErr()) return decoded;
    return Result.Ok(decoded.Ok());
  }

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cloud/quick-silver/cf/qs-encoder.ts` around lines 13 - 17, The encode method currently swallows CBOR decode failures by returning Result.Ok() when decoded.isErr() is true; update encode (the async function encode in qs-encoder.ts) to propagate the decode error instead of returning success: when decodeUint8 (called as this.sthis.ende.cbor.decodeUint8) fails, return a Result.Err containing the decoded error (or wrap it appropriately) so callers receive the failure rather than an empty Ok.

cloud/quick-silver/cf/qs-db-store.ts (1)
48-50: ⚠️ Potential issue | 🟡 Minor

`webSocketClose` doesn't normalize code 1006 — will throw on abnormal drops.

Code 1006 is reserved and cannot be used in `ws.close()`. The fix in `QSRoom` (line 107) should also be applied here.

🛡️ Proposed fix

  webSocketClose(ws: WebSocket, code: number, reason: string): void {
-   ws.close(code, reason);
+   const safeCode = code === 1006 ? 1000 : code;
+   ws.close(safeCode, reason);
  }

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cloud/quick-silver/cf/qs-db-store.ts` around lines 48 - 50, The webSocketClose function uses ws.close(code, reason) with no normalization for code 1006 (reserved) and will throw on abnormal closes; update webSocketClose to mirror the fix in QSRoom by checking if the passed code equals 1006 and replace it with 1000 (or another valid close code) before calling ws.close, ensuring the function signature webSocketClose(ws: WebSocket, code: number, reason: string) still receives the normalized code and reason and avoids using the reserved 1006 value.

cloud/quick-silver/api/qs-api.ts (3)
290-298: ⚠️ Potential issue | 🟠 Major

`close()` doesn't evict from `connections` — future `QSApi()` calls return the closed instance.

After `close()`, the resolved (now dead) `QSApiImpl` remains cached. Any subsequent `QSApi(opts)` with the same URL reuses the stale instance.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cloud/quick-silver/api/qs-api.ts` around lines 290 - 298, The close() method currently closes the websocket but leaves the QSApiImpl in the connections cache so future QSApi() calls return a dead instance; update close() (which calls connect()) to evict this instance from the connections map (connections.delete(this.url) or equivalent keyed by the QSApiImpl's URL) and then close the ws; ensure eviction happens both on success and in the catch path so the stale instance is removed even if connect() rejects.
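The evict-on-close pattern the review asks for can be sketched with a plain Map-backed cache. Everything below is an illustrative stand-in, not the actual `QSApiImpl`/`KeyedResolvOnce` code:

```typescript
// Illustrative connection cache: close() must evict its cache entry,
// so the next lookup creates a fresh instance instead of reusing a dead one.
class Conn {
  closed = false;
  close(): void {
    this.closed = true;
  }
}

const cache = new Map<string, Conn>();

function getConn(url: string): Conn {
  let conn = cache.get(url);
  if (!conn) {
    conn = new Conn();
    cache.set(url, conn);
  }
  return conn;
}

function closeConn(url: string): void {
  const conn = cache.get(url);
  cache.delete(url); // evict so future getConn() calls don't see the dead instance
  conn?.close();
}

const first = getConn("ws://example");
closeConn("ws://example");
const second = getConn("ws://example");
console.log(second === first, second.closed); // false false
```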
50-54: ⚠️ Potential issue | 🟠 Major

Connection cache keyed only by URL — second caller with a different `db` gets the wrong instance.

`connections.get(opts.url)` ignores `opts.db` and `opts.auth`, so a second call with a different database silently reuses the first caller's `QSApiImpl`.

Quick fix — composite key

- return connections.get(opts.url).once(() => new QSApiImpl(opts));
+ return connections.get(`${opts.url}::${opts.db}`).once(() => new QSApiImpl(opts));

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cloud/quick-silver/api/qs-api.ts` around lines 50 - 54, The cache key for KeyedResolvOnce is currently only opts.url causing QSApi(opts: QSApiOpts) to return the wrong QSApiImpl for different opts.db or opts.auth; update the key passed to connections.get(...) to be a composite unique key (e.g., join or stringify opts.url, opts.db, and opts.auth) so each distinct combination of URL, database, and auth creates/gets its own QSApiImpl instance; ensure you reference the existing QSApi function, QSApiOpts, and QSApiImpl when building the composite key so the change is localized to the connections.get(...) call.
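The collision is easy to reproduce with a plain Map; a sketch under the assumption that the cache is keyed by a string (names here are illustrative, not the real `QSApi` types):

```typescript
// Composite key: distinct (url, db) pairs must map to distinct cache entries.
const instances = new Map<string, { url: string; db: string }>();

function getInstance(opts: { url: string; db: string }) {
  const key = `${opts.url}::${opts.db}`; // url alone would collide across dbs
  let inst = instances.get(key);
  if (!inst) {
    inst = { url: opts.url, db: opts.db };
    instances.set(key, inst);
  }
  return inst;
}

const a = getInstance({ url: "wss://qs", db: "orders" });
const b = getInstance({ url: "wss://qs", db: "users" });
const a2 = getInstance({ url: "wss://qs", db: "orders" });
console.log(a === b, a === a2); // false true
```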
84-91: ⚠️ Potential issue | 🟡 Minor

`withTimeout` is missing `.catch()` and `result.error?.message` may be `undefined`.

If `timeouted(...)` rejects, the rejection is unhandled. Additionally, `result.error?.message` evaluates to `undefined` for non-timeout errors, creating an `Error` with `undefined` as the message.

🛡️ Proposed fix

  private withTimeout(done: Future<void>, unreg: () => void, writer: WritableStreamDefaultWriter<unknown>): void {
    timeouted(done.asPromise(), { timeout: this.opts.timeoutMs ?? DEFAULT_TIMEOUT_MS }).then((result) => {
      if (isTimeout(result) || isError(result)) {
        unreg();
-       writer.abort(new Error(isTimeout(result) ? "request timeout" : result.error?.message));
+       writer.abort(new Error(isTimeout(result) ? "request timeout" : (result.error?.message ?? "unknown error")));
      }
-   });
+   }).catch(() => { /* done already settled */ });
  }

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cloud/quick-silver/api/qs-api.ts` around lines 84 - 91, withTimeout currently calls timeouted(...).then(...) but doesn't handle promise rejections and may create an Error with undefined message; update the call to handle both rejection and resolved error cases: add a .catch(...) to handle promise rejection (call unreg() and abort the writer with an Error containing the rejection's message or a safe fallback), and when the resolved result is isError(result) ensure you use a non-undefined message (e.g., result.error?.message ?? String(result.error) ?? "unknown error") when constructing the Error for writer.abort; reference the withTimeout function, timeouted(), isTimeout(), isError(), unreg, and writer.abort to locate and update the logic accordingly.core/quick-silver/quick-silver.ts (4)
505-510: ⚠️ Potential issue | 🟠 Major

`value: {} as R` always discards the emitted value from map functions.

`idxValue.payload.emitValue` is stored in the index but never used here. Any `emit(key, value)` call produces a row where `value` is always `{}`.

🛡️ Proposed fix

  emitted.push({
    id: doc._id,
    key,
-   value: {} as R,
+   value: (idxValue.payload.emitValue ?? {}) as R,
    doc,
  });

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@core/quick-silver/quick-silver.ts` around lines 505 - 510, The emitted entry is discarding the actual emitted value by using value: {} as R; replace this so the push uses the stored emit value from the index (idxValue.payload.emitValue) instead of a hardcoded empty object. Locate the emitted.push call (the object with id: doc._id, key, value, doc) in quick-silver.ts and set value to the indexed emit payload (e.g., use idxValue.payload.emitValue cast to R), falling back to an empty object or undefined only if that payload is missing.
143-143:⚠️ Potential issue | 🔴 Critical
{ found: false }plain object lacks.isErr()— causesTypeErrorin the consuming loop.Line 156 calls
cidItem.isErr()on every item incids. The plain{ found: false }returned here has noResultmethods; the call throwsTypeError: cidItem.isErr is not a functionat runtime.🛡️ Proposed fix
- if (!getFile.found) return { found: false }; + if (!getFile.found) return Result.Ok({ found: false });🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@core/quick-silver/quick-silver.ts` at line 143, The returned plain object { found: false } lacks the Result API used later (cidItem.isErr()), causing a TypeError; instead return a Result-like value with the same methods (or propagate the existing Result) so consumers can call .isErr(). In practice replace the early return of { found: false } with the original Result from getFile (or wrap the payload in the same Result constructor used elsewhere) so the returned value for that branch implements .isErr(), .unwrap(), etc., ensuring compatibility with the cids processing loop and the cidItem.isErr() calls.
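The invariant the fix restores — every element of the collection supports `isErr()` — can be sketched with a minimal `Result` stand-in. This is illustrative only, not the project's actual `Result` class:

```typescript
// Minimal Result stand-in: illustrative, not the project's actual implementation.
class Result<T> {
  private constructor(
    private readonly value?: T,
    private readonly error?: Error,
  ) {}
  static Ok<T>(value: T): Result<T> {
    return new Result<T>(value);
  }
  static Err<T>(error: Error): Result<T> {
    return new Result<T>(undefined, error);
  }
  isErr(): boolean {
    return this.error !== undefined;
  }
  Ok(): T {
    return this.value as T;
  }
}

type Found = { found: boolean };

// Wrapping the not-found branch keeps the element shape uniform,
// so a consuming loop can safely call isErr() on every item.
function getItem(found: boolean): Result<Found> {
  if (!found) return Result.Ok({ found: false }); // not a bare plain object
  return Result.Ok({ found: true });
}

const items = [getItem(true), getItem(false)];
const errCount = items.filter((i) => i.isErr()).length;
console.log(errCount, items[1].Ok().found); // 0 false
```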
235-237: ⚠️ Potential issue | 🔴 Critical

`.Err()` is called on all results, including successful `Ok` values — causes `TypeError`.

`docAndFiles.map((r) => r.Err().message)` calls `.Err()` on every element. For `Ok` results, `.Err()` returns `undefined`, and accessing `.message` on it throws `TypeError`, masking the original error.

🛡️ Proposed fix

- throw new Error(`failed to store some docs/files: ${docAndFiles.map((r) => r.Err().message).join("\n ")}`);
+ throw new Error(`failed to store some docs/files: ${docAndFiles.filter((r) => r.isErr()).map((r) => r.Err().message).join("\n ")}`);

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@core/quick-silver/quick-silver.ts` around lines 235 - 237, The thrown Error builds its message by calling .Err() on every element of docAndFiles, which calls .Err() on Ok values and leads to a TypeError; update the construction in quick-silver.ts (around the docAndFiles check) to only extract messages from actual error results (e.g., first filter docAndFiles for r.isErr() and then map to r.Err().message or use a safe accessor like r.Err()?.message) so only real errors are referenced when creating the Error; ensure you reference the existing docAndFiles array and the isErr()/Err() result methods when making this change.
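The filter-before-extract pattern can be demonstrated with a tiny `Result`-like interface (illustrative stand-in; in the real class, calling `.message` on the `undefined` returned by `.Err()` is what throws):

```typescript
// Minimal Result-like stand-in: Err() is undefined on Ok values, which is
// exactly why mapping .Err().message over everything throws.
interface Res {
  isErr(): boolean;
  Err(): Error | undefined;
}
const ok = (): Res => ({ isErr: () => false, Err: () => undefined });
const err = (e: Error): Res => ({ isErr: () => true, Err: () => e });

const results: Res[] = [ok(), err(new Error("disk full")), ok()];

// Filter first, then extract messages from genuine errors only.
const messages = results
  .filter((r) => r.isErr())
  .map((r) => (r.Err() as Error).message);
console.log(messages); // ["disk full"]
```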
353-439: ⚠️ Potential issue | 🟠 Major

Secondary indexes are permanently frozen after the first `query()` call.

Once any entry exists (`hasEntries = true`), the `!hasEntries` rebuild is skipped for all subsequent calls. Documents written via `bulk`/`put` or deleted via `del` after the first query are never reflected.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@core/quick-silver/quick-silver.ts` around lines 353 - 439, The code uses a persistent hasEntries flag so the reconstruction block (the IdxService().query + primaryReader loop in quick-silver.ts guarded by if (!hasEntries)) runs only once and never reflects later put/bulk/del changes; fix by making index rebuilds occur after mutations: either remove the hasEntries short-circuit so the rebuild runs on each query, or mark indexes dirty from the mutation paths (update bulk, put, del to clear/reset hasEntries or set a per-index dirty flag) and have the query path check that flag before skipping the rebuild; update references to hasEntries, the reconstruction block around IdxService().query/getReader, and mutation methods bulk/put/del to ensure the flag is reset (or remove the flag and always run the reconstruction).

cloud/quick-silver/cf/qs-room.ts (3)
20-20:⚠️ Potential issue | 🟠 MajorEvicted WebSocket connections are not closed.
The
LRUMapevicts entries silently; the evictedWebSocketis never closed, leaking connections as the 64-entry cache fills.Register an
onDeletecallback during construction:this.stores.onDelete((ws) => { try { ws.close(); } catch { /* already closed */ } });🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cloud/quick-silver/cf/qs-room.ts` at line 20, The LRUMap named stores currently evicts WebSocket values silently, leaking connections; in the class constructor attach an onDelete callback on the stores LRUMap that closes the evicted WebSocket (handle exceptions if already closed) so evicted sockets are cleaned up. Locate the private readonly stores = new LRUMap<string, WebSocket> and add stores.onDelete(handler) where the handler calls ws.close() in a try/catch to swallow already-closed errors.
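The pattern — a bounded cache that closes whatever it evicts — can be sketched without the `LRUMap` API. The `onEvict` callback and the cache class below are simplified stand-ins, not the real library:

```typescript
// Simplified bounded cache with an eviction callback; stand-in for LRUMap.
class BoundedCache<V> {
  private readonly map = new Map<string, V>();
  constructor(
    private readonly max: number,
    private readonly onEvict: (v: V) => void,
  ) {}
  set(key: string, value: V): void {
    if (!this.map.has(key) && this.map.size >= this.max) {
      const oldestKey = this.map.keys().next().value as string; // insertion order
      const evicted = this.map.get(oldestKey) as V;
      this.map.delete(oldestKey);
      this.onEvict(evicted); // close instead of silently dropping
    }
    this.map.set(key, value);
  }
}

const closed: string[] = [];
const cache = new BoundedCache<string>(2, (ws) => closed.push(ws));
cache.set("db-a", "wsA");
cache.set("db-b", "wsB");
cache.set("db-c", "wsC"); // evicts wsA and "closes" it
console.log(closed); // ["wsA"]
```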
105-122: ⚠️ Potential issue | 🟠 Major

`webSocketClose` tears down ALL store connections when any single client disconnects.

Closing and clearing `this.stores` on a per-client disconnect kills shared store connections that may still be in use by other clients.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cloud/quick-silver/cf/qs-room.ts` around lines 105 - 122, The webSocketClose function currently closes and clears this.stores for any client disconnect, killing shared store connections; change it to only close the store socket(s) associated with the disconnecting client and do not clear the entire this.stores collection. Locate webSocketClose and replace the blanket this.stores.forEach/this.stores.clear logic with logic that (a) looks up the store WebSocket(s) tied to the provided ws (or update the data model to track associations, e.g., a Map from client WebSocket to its store WebSockets), (b) closes only those storeWs via storeWs.close(safeCode, reason) with the existing try/catch, and (c) remove only those associations from the store tracking structure — leave other shared store connections intact.
80-103: ⚠️ Potential issue | 🔴 Critical

Cached store `WebSocket` captures the first caller's `clientWs` — responses are mis-routed to wrong clients.

The `message` handler installed at line 91 always forwards non-subscribe responses to the `clientWs` captured from the first `getStoreWs` call. Any subsequent caller for the same `db` will have their responses sent to the first client.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cloud/quick-silver/cf/qs-room.ts` around lines 80 - 103, The cached store WebSocket created in getStoreWs currently closes over the first caller's clientWs so all non-subscribe messages get forwarded to that initial client; fix by not capturing clientWs in the store-level message handler — instead maintain a per-db client set (e.g., Map<string, Set<WebSocket>>) and on each new caller add its clientWs to that set, remove it on close, and have the storeWs message handler (inside getStoreWs where stores.getSet creates the storeWs) decode events (use sthis.ende.cbor.decodeUint8) and either call dispatchSubscribeEvent for subscribe events or forward the message to every WebSocket in the per-db client set; ensure you update add/remove logic when client connections close so you don't leak sockets.

cloud/quick-silver/cf/qs-send-provider.ts (1)
16-19: ⚠️ Potential issue | 🟠 Major

`ws.send()` can throw — unhandled exception breaks the `Result<T>` contract.

`WebSocket.send()` throws a `DOMException` when the socket is `CLOSED` or `CLOSING`. The exception bypasses the `Promise<Result<T>>` return type, leaving callers unprotected.

🛡️ Proposed fix

  async send<T>(_ctx: HandleTriggerCtx<ArrayBuffer, unknown, unknown>, res: unknown): Promise<Result<T>> {
-   this.ws.send(this.sthis.ende.cbor.encodeToUint8(res as QSOpRes));
-   return Result.Ok(res as T);
+   try {
+     this.ws.send(this.sthis.ende.cbor.encodeToUint8(res as QSOpRes));
+     return Result.Ok(res as T);
+   } catch (e) {
+     return Result.Err(e as Error);
+   }
  }

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cloud/quick-silver/cf/qs-send-provider.ts` around lines 16 - 19, The send method can throw when this.ws.send(...) is called, breaking the Promise<Result<T>> contract; modify async send<T>(_ctx: HandleTriggerCtx<...>, res: unknown): Promise<Result<T>> in qs-send-provider.ts to guard and handle send errors: either check this.ws.readyState for OPEN before sending and return Result.Err if not open, or wrap the this.ws.send(this.sthis.ende.cbor.encodeToUint8(res as QSOpRes)) call in a try/catch and return Result.Err(error) on exception, otherwise return Result.Ok(res as T); reference the send method, this.ws, this.sthis.ende.cbor.encodeToUint8, QSOpRes and Result.Ok/Result.Err when making the change.
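The catch-and-wrap idea generalizes to any synchronously throwing send function; a sketch with a hypothetical `safeSend` helper (the `SendResult` union stands in for the project's `Result` type):

```typescript
// Sketch: keep the Result contract by catching synchronous send failures.
type SendResult<T> = { ok: true; value: T } | { ok: false; error: Error };

function safeSend<T>(send: (payload: T) => void, payload: T): SendResult<T> {
  try {
    send(payload); // WebSocket.send throws when the socket is CLOSING/CLOSED
    return { ok: true, value: payload };
  } catch (e) {
    return { ok: false, error: e as Error }; // surfaced to the caller, not thrown
  }
}

// Stand-ins for a dead and a live socket's send method.
const closedSocketSend = (): void => {
  throw new Error("socket is CLOSED");
};
const openSocketSend = (): void => undefined;

const failed = safeSend(closedSocketSend, "msg");
const sent = safeSend(openSocketSend, "msg");
console.log(failed.ok, sent.ok); // false true
```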
🧹 Nitpick comments (3)
core/quick-silver/quick-silver.ts (1)
117-119: `findQSDocMeta` is public but is an internal helper — consider making it `private`.

Nothing outside `QuickSilver` needs to call this method, and exposing it creates an unintended API surface.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@core/quick-silver/quick-silver.ts` around lines 117 - 119, The method findQSDocMeta is an internal helper and should be made private: change its declaration from "findQSDocMeta(meta: MetaEntry[] = []): QSDocMeta | undefined" to "private findQSDocMeta(...)" inside the QuickSilver class, update any internal callers to continue using it as before, search the repo for external uses of findQSDocMeta and remove or refactor them if found, and run the type-check/build to fix any visibility-related errors (adjust tests or exports if they relied on this public API).

cloud/quick-silver/cf/qs-db-store.ts (1)
39-43: Prefer structured logging over bare `console.log` in CF Worker code.

The project has a `sthis.logger` infrastructure; `console.log` bypasses it.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cloud/quick-silver/cf/qs-db-store.ts` around lines 39 - 43, Replace the bare console.log calls with the project's structured logger (sthis.logger): use sthis.logger.warn/error for the rejected string branch when calling sendProvider.send to report "rejected string message" and include contextual fields (e.g., tid: "unknown"), and use sthis.logger.info or debug when dispatching the binary message to log the byteLength (refer to the existing expression (msg as ArrayBuffer).byteLength) so logs go through the sthis.logger pipeline instead of console.log; update both occurrences in the QSDBStore handling code that also references sendProvider and the QSResErr payload.

cloud/quick-silver/api/qs-api.ts (1)
63-80: Lazy-memoized `connect()` permanently caches a dead WebSocket after disconnection.

`Lazy` resolves once and never re-evaluates. There is no `ws.onclose` or post-connection `ws.onerror` handler: once the WebSocket drops, all future `connect()` calls return the already-closed socket. `get`/`put`/`query` callers hit the 30-second timeout; `subscribe` streams hang indefinitely with no error signal to the caller.

♻️ Suggested approach — invalidate Lazy on close/error

- readonly connect = Lazy(() => {
+ private _connect = Lazy(() => {
    const ws = new WebSocket(this.opts.url);
    const opened = new Future<WebSocket>();
    ws.binaryType = "arraybuffer";
    ws.onopen = () => { opened.resolve(ws); };
    ws.onerror = (e) => opened.reject(new Error(String(e)));
+   ws.onclose = () => { this._connect = Lazy(() => this._makeConnection()); };
    ws.onmessage = (evt) => { /* ... */ };
    return opened.asPromise();
  });
+
+ readonly connect = () => this._connect();

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cloud/quick-silver/api/qs-api.ts` around lines 63 - 80, connect currently uses Lazy to permanently cache a single WebSocket promise, so when the socket later closes callers keep getting a dead connection; change connect to track the live connection promise and invalidate it on socket close/error: attach ws.onclose and a post-open ws.onerror handler to call opened.reject(new Error(...)) (or reject/resolve a separate controller promise) and clear the cached promise so future connect() calls re-create a new WebSocket; update the Lazy usage (or replace it) so the cached value (the Future/Promise referenced by opened) is set to undefined/reset when ws.onclose or ws.onerror fires, and ensure existing onMessage.invoke handling remains intact (references: connect, Lazy, ws.onclose, ws.onerror, opened, onMessage.invoke).
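The invalidate-on-close idea can be sketched with a plain nullable cache instead of `Lazy`. Everything below is illustrative; the real code manages a browser `WebSocket` rather than this `FakeSocket`:

```typescript
// Illustrative reconnect logic: cache the live connection, drop the cache on
// close so the next connect() builds a fresh one instead of returning a corpse.
class FakeSocket {
  open = true;
  close(): void {
    this.open = false;
  }
}

let cached: FakeSocket | undefined;

function connect(): FakeSocket {
  if (!cached) {
    cached = new FakeSocket(); // in the real code, a new WebSocket handshake
  }
  return cached;
}

function onClose(sock: FakeSocket): void {
  sock.close();
  cached = undefined; // invalidate: this is what ws.onclose must do
}

const first = connect();
onClose(first);
const second = connect();
console.log(second === first, second.open); // false true
```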
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@cloud/quick-silver/api/cli.ts`:
- Around line 159-161: The current promise rejection handler for run(...) uses
.catch(console.error) which logs errors but leaves the process exit code as
zero; change the catch to explicitly log the error and exit non‑zero (e.g.,
.catch(err => { console.error(err); process.exit(1); })) so CLI failures produce
a failing exit code. Make this change where run(...) is invoked.
- Around line 89-94: The JSON.parse call inside the args.pkg.map in the put
handler is unguarded and will throw a SyntaxError for malformed JSON; update the
mapping in the ops construction (the args.pkg.map callback that returns QSPut
objects and uses sthis.ende.cbor.encodeToUint8) to catch JSON.parse errors for
each pair, validate/try-catch parsing the slice(comma + 1), and when parsing
fails throw a new Error with a clear, user-friendly message that includes the
offending key/pair and the parse error message so the CLI reports a helpful
error instead of a raw stack trace.
In `@cloud/quick-silver/cf/qs-room.ts`:
- Around line 82-102: The current use of this.stores.getSet lets concurrent
callers open duplicate store WebSocket connections; replace the getSet usage
with a deduping in-flight cache (KeyedResolvOnce<WebSocket> or a Map<string,
Promise<WebSocket>>) so all concurrent getStoreWs(db) callers await the same
Promise. Move the async factory that calls
this.env.QS_DB_STORE.get(id).fetch(...) and creates/storeWs into the cached
Promise, keep the storeWs.accept() and storeWs.addEventListener(...) logic
inside that single factory, and ensure failures remove the key from the cache so
retries can occur; keep calls to this.dispatchSubscribeEvent(db, msg) and
clientWs.send(...) unchanged.
---
Duplicate comments:
In `@cloud/quick-silver/api/qs-api.ts`:
- Around line 290-298: The close() method currently closes the websocket but
leaves the QSApiImpl in the connections cache so future QSApi() calls return a
dead instance; update close() (which calls connect()) to evict this instance
from the connections map (connections.delete(this.url) or equivalent keyed by
the QSApiImpl's URL) and then close the ws; ensure eviction happens both on
success and in the catch path so the stale instance is removed even if connect()
rejects.
- Around line 50-54: The cache key for KeyedResolvOnce is currently only
opts.url causing QSApi(opts: QSApiOpts) to return the wrong QSApiImpl for
different opts.db or opts.auth; update the key passed to connections.get(...) to
be a composite unique key (e.g., join or stringify opts.url, opts.db, and
opts.auth) so each distinct combination of URL, database, and auth creates/gets
its own QSApiImpl instance; ensure you reference the existing QSApi function,
QSApiOpts, and QSApiImpl when building the composite key so the change is
localized to the connections.get(...) call.
- Around line 84-91: withTimeout currently calls timeouted(...).then(...) but
doesn't handle promise rejections and may create an Error with undefined
message; update the call to handle both rejection and resolved error cases: add
a .catch(...) to handle promise rejection (call unreg() and abort the writer
with an Error containing the rejection's message or a safe fallback), and when
the resolved result is isError(result) ensure you use a non-undefined message
(e.g., result.error?.message ?? String(result.error) ?? "unknown error") when
constructing the Error for writer.abort; reference the withTimeout function,
timeouted(), isTimeout(), isError(), unreg, and writer.abort to locate and
update the logic accordingly.
In `@cloud/quick-silver/cf/qs-db-store.ts`:
- Around line 48-50: The webSocketClose function uses ws.close(code, reason)
with no normalization for code 1006 (reserved) and will throw on abnormal
closes; update webSocketClose to mirror the fix in QSRoom by checking if the
passed code equals 1006 and replace it with 1000 (or another valid close code)
before calling ws.close, ensuring the function signature webSocketClose(ws:
WebSocket, code: number, reason: string) still receives the normalized code and
reason and avoids using the reserved 1006 value.
In `@cloud/quick-silver/cf/qs-encoder.ts`:
- Around line 13-17: The encode method currently swallows CBOR decode failures
by returning Result.Ok() when decoded.isErr() is true; update encode (the async
function encode in qs-encoder.ts) to propagate the decode error instead of
returning success: when decodeUint8 (called as this.sthis.ende.cbor.decodeUint8)
fails, return a Result.Err containing the decoded error (or wrap it
appropriately) so callers receive the failure rather than an empty Ok.
In `@cloud/quick-silver/cf/qs-room.ts`:
- Line 20: The LRUMap named stores currently evicts WebSocket values silently,
leaking connections; in the class constructor attach an onDelete callback on the
stores LRUMap that closes the evicted WebSocket (handle exceptions if already
closed) so evicted sockets are cleaned up. Locate the private readonly stores =
new LRUMap<string, WebSocket> and add stores.onDelete(handler) where the handler
calls ws.close() in a try/catch to swallow already-closed errors.
- Around line 105-122: The webSocketClose function currently closes and clears
this.stores for any client disconnect, killing shared store connections; change
it to only close the store socket(s) associated with the disconnecting client
and do not clear the entire this.stores collection. Locate webSocketClose and
replace the blanket this.stores.forEach/this.stores.clear logic with logic that
(a) looks up the store WebSocket(s) tied to the provided ws (or update the data
model to track associations, e.g., a Map from client WebSocket to its store
WebSockets), (b) closes only those storeWs via storeWs.close(safeCode, reason)
with the existing try/catch, and (c) remove only those associations from the
store tracking structure — leave other shared store connections intact.
- Around line 80-103: The cached store WebSocket created in getStoreWs currently
closes over the first caller's clientWs so all non-subscribe messages get
forwarded to that initial client; fix by not capturing clientWs in the
store-level message handler — instead maintain a per-db client set (e.g.,
Map<string, Set<WebSocket>>) and on each new caller add its clientWs to that
set, remove it on close, and have the storeWs message handler (inside getStoreWs
where stores.getSet creates the storeWs) decode events (use
sthis.ende.cbor.decodeUint8) and either call dispatchSubscribeEvent for
subscribe events or forward the message to every WebSocket in the per-db client
set; ensure you update add/remove logic when client connections close so you
don’t leak sockets.
In `@cloud/quick-silver/cf/qs-send-provider.ts`:
- Around line 16-19: The send method can throw when this.ws.send(...) is called,
breaking the Promise<Result<T>> contract; modify async send<T>(_ctx:
HandleTriggerCtx<...>, res: unknown): Promise<Result<T>> in qs-send-provider.ts
to guard and handle send errors: either check this.ws.readyState for OPEN before
sending and return Result.Err if not open, or wrap the
this.ws.send(this.sthis.ende.cbor.encodeToUint8(res as QSOpRes)) call in a
try/catch and return Result.Err(error) on exception, otherwise return
Result.Ok(res as T); reference the send method, this.ws,
this.sthis.ende.cbor.encodeToUint8, QSOpRes and Result.Ok/Result.Err when making
the change.
In `@core/quick-silver/quick-silver.ts`:
- Around line 505-510: The emitted entry is discarding the actual emitted value
by using value: {} as R; replace this so the push uses the stored emit value
from the index (idxValue.payload.emitValue) instead of a hardcoded empty object.
Locate the emitted.push call (the object with id: doc._id, key, value, doc) in
quick-silver.ts and set value to the indexed emit payload (e.g., use
idxValue.payload.emitValue cast to R), falling back to an empty object or
undefined only if that payload is missing.
- Line 143: The returned plain object { found: false } lacks the Result API used
later (cidItem.isErr()), causing a TypeError; instead return a Result-like value
with the same methods (or propagate the existing Result) so consumers can call
.isErr(). In practice replace the early return of { found: false } with the
original Result from getFile (or wrap the payload in the same Result constructor
used elsewhere) so the returned value for that branch implements .isErr(),
.unwrap(), etc., ensuring compatibility with the cids processing loop and the
cidItem.isErr() calls.
- Around line 235-237: The thrown Error builds its message by calling .Err() on
every element of docAndFiles, which calls .Err() on Ok values and leads to a
TypeError; update the construction in quick-silver.ts (around the docAndFiles
check) to only extract messages from actual error results (e.g., first filter
docAndFiles for r.isErr() and then map to r.Err().message or use a safe accessor
like r.Err()?.message) so only real errors are referenced when creating the
Error; ensure you reference the existing docAndFiles array and the isErr()/Err()
result methods when making this change.
- Around line 353-439: The code uses a persistent hasEntries flag so the
reconstruction block (the IdxService().query + primaryReader loop in
quick-silver.ts guarded by if (!hasEntries)) runs only once and never reflects
later put/bulk/del changes; fix by making index rebuilds occur after mutations:
either remove the hasEntries short-circuit so the rebuild runs on each query, or
mark indexes dirty from the mutation paths (update bulk, put, del to clear/reset
hasEntries or set a per-index dirty flag) and have the query path check that
flag before skipping the rebuild; update references to hasEntries, the
reconstruction block around IdxService().query/getReader, and mutation methods
bulk/put/del to ensure the flag is reset (or remove the flag and always run the
reconstruction).
---
Nitpick comments:
In `@cloud/quick-silver/api/qs-api.ts`:
- Around line 63-80: connect currently uses Lazy to permanently cache a single
WebSocket promise, so when the socket later closes callers keep getting a dead
connection; change connect to track the live connection promise and invalidate
it on socket close/error: attach ws.onclose and a post-open ws.onerror handler
to call opened.reject(new Error(...)) (or reject/resolve a separate controller
promise) and clear the cached promise so future connect() calls re-create a new
WebSocket; update the Lazy usage (or replace it) so the cached value (the
Future/Promise referenced by opened) is set to undefined/reset when ws.onclose
or ws.onerror fires, and ensure existing onMessage.invoke handling remains
intact (references: connect, Lazy, ws.onclose, ws.onerror, opened,
onMessage.invoke).
In `@cloud/quick-silver/cf/qs-db-store.ts`:
- Around line 39-43: Replace the bare console.log calls with the project's
structured logger (sthis.logger): use sthis.logger.warn/error for the rejected
string branch when calling sendProvider.send to report "rejected string message"
and include contextual fields (e.g., tid: "unknown"), and use sthis.logger.info
or debug when dispatching the binary message to log the byteLength (refer to the
existing expression (msg as ArrayBuffer).byteLength) so logs go through the
sthis.logger pipeline instead of console.log; update both occurrences in the
QSDBStore handling code that also references sendProvider and the QSResErr
payload.
In `@core/quick-silver/quick-silver.ts`:
- Around line 117-119: The method findQSDocMeta is an internal helper and should
be made private: change its declaration from "findQSDocMeta(meta: MetaEntry[] =
[]): QSDocMeta | undefined" to "private findQSDocMeta(...)" inside the
QuickSilver class, update any internal callers to continue using it as before,
search the repo for external uses of findQSDocMeta and remove or refactor them
if found, and run the type-check/build to fix any visibility-related errors
(adjust tests or exports if they relied on this public API).
ℹ️ Review info
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`pnpm-lock.yaml` is excluded by `!**/pnpm-lock.yaml`
📒 Files selected for processing (11)
- cloud/quick-silver/api/cli.ts
- cloud/quick-silver/api/qs-api.ts
- cloud/quick-silver/cf/package.json
- cloud/quick-silver/cf/qs-db-evento.ts
- cloud/quick-silver/cf/qs-db-store.ts
- cloud/quick-silver/cf/qs-encoder.ts
- cloud/quick-silver/cf/qs-room-evento.ts
- cloud/quick-silver/cf/qs-room.ts
- cloud/quick-silver/cf/qs-send-provider.ts
- core/quick-silver/quick-silver.ts
- core/runtime/utils.ts
🚧 Files skipped from review as they are similar to previous changes (3)
- cloud/quick-silver/cf/qs-room-evento.ts
- cloud/quick-silver/cf/qs-db-evento.ts
- cloud/quick-silver/cf/package.json
```ts
const ops: QSPut[] = args.pkg.map((pair) => {
  const comma = pair.indexOf(",");
  if (comma === -1) throw new Error(`invalid --pkg "${pair}": expected key,{json}`);
  const key = pair.slice(0, comma);
  const data = sthis.ende.cbor.encodeToUint8(JSON.parse(pair.slice(comma + 1))) as Uint8Array<ArrayBuffer>;
  return { key, data } satisfies QSPut;
```
JSON.parse in put handler is unguarded — invalid JSON crashes with no user-friendly message.
JSON.parse(pair.slice(comma + 1)) throws SyntaxError for malformed input. The error propagates uncaught through args.pkg.map(...) and the handler, producing a noisy stack trace.
🛡️ Proposed fix

```diff
-      const data = sthis.ende.cbor.encodeToUint8(JSON.parse(pair.slice(comma + 1))) as Uint8Array<ArrayBuffer>;
+      let parsed: unknown;
+      try {
+        parsed = JSON.parse(pair.slice(comma + 1));
+      } catch {
+        throw new Error(`invalid --pkg "${pair}": JSON parse failed`);
+      }
+      const data = sthis.ende.cbor.encodeToUint8(parsed) as Uint8Array<ArrayBuffer>;
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
const ops: QSPut[] = args.pkg.map((pair) => {
  const comma = pair.indexOf(",");
  if (comma === -1) throw new Error(`invalid --pkg "${pair}": expected key,{json}`);
  const key = pair.slice(0, comma);
  let parsed: unknown;
  try {
    parsed = JSON.parse(pair.slice(comma + 1));
  } catch {
    throw new Error(`invalid --pkg "${pair}": JSON parse failed`);
  }
  const data = sthis.ende.cbor.encodeToUint8(parsed) as Uint8Array<ArrayBuffer>;
  return { key, data } satisfies QSPut;
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@cloud/quick-silver/api/cli.ts` around lines 89 - 94, The JSON.parse call
inside the args.pkg.map in the put handler is unguarded and will throw a
SyntaxError for malformed JSON; update the mapping in the ops construction (the
args.pkg.map callback that returns QSPut objects and uses
sthis.ende.cbor.encodeToUint8) to catch JSON.parse errors for each pair,
validate/try-catch parsing the slice(comma + 1), and when parsing fails throw a
new Error with a clear, user-friendly message that includes the offending
key/pair and the parse error message so the CLI reports a helpful error instead
of a raw stack trace.
```ts
run(cmd, process.argv.slice(2))
  .then(() => process.exit(0))
  .catch(console.error);
```
catch(console.error) doesn't exit with a non-zero code — CLI errors appear as successful exits.
Scripts and CI pipelines relying on exit codes will not detect CLI failures.
🛡️ Proposed fix

```diff
 run(cmd, process.argv.slice(2))
   .then(() => process.exit(0))
-  .catch(console.error);
+  .catch((e) => { console.error(e); process.exit(1); });
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
run(cmd, process.argv.slice(2))
  .then(() => process.exit(0))
  .catch((e) => { console.error(e); process.exit(1); });
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@cloud/quick-silver/api/cli.ts` around lines 159 - 161, The current promise
rejection handler for run(...) uses .catch(console.error) which logs errors but
leaves the process exit code as zero; change the catch to explicitly log the
error and exit non‑zero (e.g., .catch(err => { console.error(err);
process.exit(1); })) so CLI failures produce a failing exit code. Make this
change where run(...) is invoked.
```ts
return (await this.stores.getSet(db, async () => {
  const id = this.env.QS_DB_STORE.idFromName(db);
  const stub = this.env.QS_DB_STORE.get(id);
  const res = await stub.fetch("https://internal/ws", {
    headers: { Upgrade: "websocket" },
  });
  // eslint-disable-next-line @typescript-eslint/no-non-null-assertion
  const storeWs = res.webSocket!;
  storeWs.accept();
  storeWs.addEventListener("message", (evt) => {
    const decoded = this.sthis.ende.cbor.decodeUint8<unknown>(new Uint8Array(evt.data as ArrayBuffer));
    if (decoded.isErr()) return;
    const msg = decoded.Ok();
    if (isQSEvtSubscribe(msg)) {
      this.dispatchSubscribeEvent(db, msg);
    } else {
      clientWs.send(evt.data);
    }
  });
  return storeWs;
}))!;
```
🧩 Analysis chain
🏁 Script executed:

```shell
# First, let's find the file and understand the context
find . -name "qs-room.ts" -type f 2>/dev/null
```

Repository: fireproof-storage/fireproof

Length of output: 105

🏁 Script executed:

```shell
# Search for LRUMap definition and getSet method
rg -n "class LRUMap|getSet" --type=ts -B2 -A5 | head -100
```

Length of output: 776

🏁 Script executed:

```shell
# Check if `@adviser/cement` is a dependency and locate its types
fd package.json -exec grep -l "@adviser/cement" {} \; 2>/dev/null
```

Length of output: 53

🏁 Script executed:

```shell
# Look for the stores definition in the codebase
rg -n "stores.*=" --type=ts -B3 -A3 cloud/quick-silver | head -50
```

Length of output: 1764

🏁 Script executed:

```shell
# Find LRUMap import in qs-room.ts
head -30 cloud/quick-silver/cf/qs-room.ts | rg -n "import|LRUMap"
```

Length of output: 621

🏁 Script executed:

```shell
# Search for LRUMap class definition in the codebase
rg -n "class LRUMap|export.*LRUMap" --type=ts
```

Length of output: 53

🏁 Script executed:

```shell
# Check node_modules for `@adviser/cement` or similar
fd package.json -type f 2>/dev/null | head -5 | xargs cat | grep -A5 -B5 "@adviser\|cement\|lru" 2>/dev/null | head -30
```

Length of output: 53

🏁 Script executed:

```shell
# Search for `@adviser/cement` in the project structure and package.json
find . -name "package.json" -type f ! -path "*/node_modules/*" -exec grep -l "@adviser/cement" {} \;
```

Length of output: 1387

🏁 Script executed:

```shell
# Check the main package.json for `@adviser/cement` version and details
cat package.json | grep -A2 -B2 "@adviser/cement"
```

Length of output: 53

🏁 Script executed:

```shell
# Look for node_modules/@adviser/cement if available
find node_modules/@adviser/cement -name "*.d.ts" 2>/dev/null | head -5
```

Length of output: 53

🏁 Script executed:

```shell
# Search for LRUMap getSet in node_modules
find node_modules -path "*@adviser/cement*" -name "*.d.ts" -o -name "*.ts" 2>/dev/null | xargs grep -l "getSet" 2>/dev/null | head -5
```

Length of output: 572

🏁 Script executed:

```shell
# Check the quick-silver/cf package.json for `@adviser/cement` version
cat cloud/quick-silver/cf/package.json | grep -A5 -B5 "@adviser/cement\|dependencies"
```

Length of output: 402

🏁 Script executed:

```shell
# Search for KeyedResolvOnce in the codebase (mentioned in learnings)
rg -n "KeyedResolvOnce" --type=ts -B3 -A3 | head -60
```

Length of output: 4497

🏁 Script executed:

```shell
# Search for getSet usage patterns in the codebase to understand expected behavior
rg -n "\.getSet\(" --type=ts -B2 -A5 | head -80
```

Length of output: 776
🌐 Web query:

```
@adviser/cement LRUMap getSet async factory deduplication
```

💡 Result:

In `@adviser/cement`, `LRUMap.getSet()` is async, but it does not deduplicate concurrent misses: it does `get(key)` first, and only after `await createFN(key)` does it `set(key, val)`, so two callers hitting the same missing key at the same time will both run `createFN`. [1]

Deduplicate pattern (cache the in-flight Promise)

Cache a `Promise<V>` immediately, so all concurrent callers await the same work:

```ts
import { LRUMap } from "@adviser/cement";

const inFlight = new LRUMap<string, Promise<number>>({ maxEntries: 100 });

async function getSetDedup<K extends string, V>(
  key: K,
  factory: (key: K) => Promise<V>,
): Promise<V> {
  const existing = inFlight.get(key);
  if (existing) return existing;
  const p = (async () => {
    try {
      return await factory(key);
    } finally {
      // optional: remove promise so next call can refetch, or keep it and swap to a value-cache
      inFlight.delete(key);
    }
  })();
  inFlight.set(key, p);
  return p;
}
```

If you also want LRU for resolved values, use two caches: `LRUMap<K, V>` for values and `LRUMap<K, Promise<V>>` for in-flight work.

Source: `LRUMap.getSet()` implementation in `src/lru-map-set.ts`. [1]

References:
[1] https://jsr.io/%40adviser/cement/0.5.23/src/lru-map-set.ts
🏁 Script executed:

```shell
# Look for any documentation or comments in the codebase about LRUMap behavior
rg -n "LRUMap" --type=ts -B3 -A3 | head -80
```

Length of output: 2009

🏁 Script executed:

```shell
# Check if there are any concurrent access issues or race conditions documented
rg -n "concurrent|race|duplicate|dedup" cloud/quick-silver/cf/ --type=ts -i -B2 -A2
```

Length of output: 53

🏁 Script executed:

```shell
# Search for similar patterns in the codebase - other places that might need deduplication
rg -n "getSet|\.get\(" --type=ts cloud/quick-silver | head -40
```

Length of output: 797

🏁 Script executed:

```shell
# Check where/how getStoreWs is called to understand concurrency risk
rg -n "getStoreWs" --type=ts -B3 -A3
```

Length of output: 1868

🏁 Script executed:

```shell
# Look at the webSocketMessage handler to understand potential concurrency
rg -n "webSocketMessage\|addEventListener.*message" cloud/quick-silver/cf/qs-room.ts -B3 -A10
```

Length of output: 53

🏁 Script executed:

```shell
# Check the full context of how the method is invoked
sed -n '1,130p' cloud/quick-silver/cf/qs-room.ts | tail -60
```

Length of output: 2054
Replace LRUMap.getSet with KeyedResolvOnce or Map<string, Promise<WebSocket>> to prevent duplicate store connections under concurrent access.
LRUMap.getSet does not deduplicate concurrent async factory calls — it calls get, then await createFN, then set. Two concurrent getStoreWs calls for the same db will both execute the factory and open separate store WebSocket connections, each installing its own message listener. This causes duplicate message handling and routes responses incorrectly when multiple clients request the same database simultaneously.
Use KeyedResolvOnce<WebSocket> (as employed elsewhere in the codebase for deduplication) or Map<string, Promise<WebSocket>> to cache the in-flight connection promise and ensure all concurrent callers await the same work.
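The `Map<string, Promise<WebSocket>>` variant can be sketched with a plain in-flight cache. `InFlightCache` is an illustrative name, not part of `@adviser/cement`, and the real fix could equally use `KeyedResolvOnce`:

```typescript
// Minimal sketch of the dedup idea: all concurrent callers for the same key
// await one factory run; a failed run evicts the key so a later call can retry.
class InFlightCache<V> {
  private readonly inflight = new Map<string, Promise<V>>();

  getSet(key: string, factory: (key: string) => Promise<V>): Promise<V> {
    const existing = this.inflight.get(key);
    if (existing) return existing;
    const p = factory(key).catch((err) => {
      this.inflight.delete(key); // allow retries after a failed open
      throw err;
    });
    this.inflight.set(key, p);
    return p;
  }
}
```

Unlike `LRUMap.getSet`, the promise is cached before the first `await`, so a second `getStoreWs(db)` arriving mid-open reuses the same pending connection instead of opening a duplicate.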
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@cloud/quick-silver/cf/qs-room.ts` around lines 82 - 102, The current use of
this.stores.getSet lets concurrent callers open duplicate store WebSocket
connections; replace the getSet usage with a deduping in-flight cache
(KeyedResolvOnce<WebSocket> or a Map<string, Promise<WebSocket>>) so all
concurrent getStoreWs(db) callers await the same Promise. Move the async factory
that calls this.env.QS_DB_STORE.get(id).fetch(...) and creates/storeWs into the
cached Promise, keep the storeWs.accept() and storeWs.addEventListener(...)
logic inside that single factory, and ensure failures remove the key from the
cache so retries can occur; keep calls to this.dispatchSubscribeEvent(db, msg)
and clientWs.send(...) unchanged.
QuickSilver is a brand-new database implementation that focuses on simplicity.
It is not built to serve as a foundation for a future implementation with content integrity and encryption.
But it will simulate the existing API and enable multi-device/multi-user sync.
Summary by CodeRabbit
New Features
Tests
Chores
Style/API