Conversation
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the CodeRabbit settings.
📝 Walkthrough

Adds a staged, model-first SQL contract DSL and a semantic contract IR; composes authoring helpers from target/extension packs; instantiates authoring-aware enum/pgvector named types; strengthens storage-semantic validation (duplicate constraint/signature detection); and threads named-type resolution through schema verification and Postgres migration planning.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Dev as Developer
    participant DSL as Staged DSL (model/field/rel)
    participant Helpers as Composed Authoring Helpers
    participant Semantic as Semantic Builder
    participant ContractIR as ContractIR
    participant Planner as Postgres Planner
    Dev->>DSL: declare models/fields/relations
    DSL->>Helpers: request instantiation (type constructors / field presets)
    Helpers-->>DSL: return instantiated storage descriptors
    DSL->>Semantic: emit SqlSemanticModelDefinition
    Semantic->>ContractIR: buildSqlContractFromSemanticDefinition()
    ContractIR->>Planner: plan migrations (uses storageTypes/typeRef)
    Planner-->>ContractIR: SQL plan using resolved nativeType/typeParams
```
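To make the walkthrough's "declare models, then lower to storage" flow concrete, here is a toy, self-contained sketch of what a staged, model-first DSL of this shape can look like. All names and signatures below are illustrative stand-ins assembled from identifiers mentioned in this review (`model`, `fields`, table/column lowering), not the real `@prisma-next` contract-ts API:

```typescript
// Toy staged DSL: stage one declares fields on a model, stage two lowers them
// to storage names, mirroring the model-first flow in the walkthrough.
// Illustrative only; the real package's types and helpers will differ.
type FieldDef = { column: string; nativeType: string };

interface ModelDefinition {
  name: string;
  fields: Record<string, FieldDef>;
  sql(options: { table: string }): { name: string; table: string; columns: string[] };
}

function model(name: string, spec: { fields: Record<string, FieldDef> }): ModelDefinition {
  return {
    name,
    fields: spec.fields,
    sql({ table }) {
      // Lowering step: map declared fields to column names for the storage layer.
      return { name, table, columns: Object.values(spec.fields).map((f) => f.column) };
    },
  };
}

const User = model('User', {
  fields: {
    id: { column: 'id', nativeType: 'int4' },
    email: { column: 'email', nativeType: 'text' },
  },
});

const lowered = User.sql({ table: 'users' });
```

The point of the two stages is that semantic declarations (`model`, `fields`) stay free of storage detail until the explicit lowering call, which is where a planner can pick up table and column names.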
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
@prisma-next/runtime-executor
@prisma-next/mongo-core
@prisma-next/mongo-orm
@prisma-next/mongo-runtime
@prisma-next/sql-runtime
@prisma-next/extension-paradedb
@prisma-next/extension-pgvector
@prisma-next/postgres
@prisma-next/sql-orm-client
@prisma-next/target-mongo
@prisma-next/adapter-mongo
@prisma-next/driver-mongo
@prisma-next/contract-authoring
@prisma-next/contract-ts
@prisma-next/ids
@prisma-next/psl-parser
@prisma-next/psl-printer
@prisma-next/cli
@prisma-next/emitter
@prisma-next/migration-tools
@prisma-next/vite-plugin-contract-emit
@prisma-next/mongo-emitter
@prisma-next/sql-contract
@prisma-next/sql-errors
@prisma-next/sql-operations
@prisma-next/sql-schema-ir
@prisma-next/sql-contract-psl
@prisma-next/sql-contract-ts
@prisma-next/sql-contract-emitter
@prisma-next/family-sql
@prisma-next/sql-lane-query-builder
@prisma-next/sql-relational-core
@prisma-next/sql-builder
@prisma-next/target-postgres
@prisma-next/adapter-postgres
@prisma-next/driver-postgres
@prisma-next/core-control-plane
@prisma-next/core-execution-plane
@prisma-next/config
@prisma-next/contract
@prisma-next/operations
@prisma-next/plan
@prisma-next/utils
Actionable comments posted: 5
🧹 Nitpick comments (4)
packages/2-sql/2-authoring/contract-ts/README.md (1)
18-20: Keep the README focused on the supported API, not rollout language.

Calling one surface “legacy” here and the refined DSL the “long-term direction” / “first slice” below makes a durable package README read like migration notes. Please describe the currently supported surfaces directly and move roadmap phrasing out of this file.

As per coding guidelines, `packages/**/*.md`: Update design docs to reflect the current state only, not the transition history or legacy approaches.

Also applies to: 248-249
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/2-sql/2-authoring/contract-ts/README.md` around lines 18 - 20, The README currently labels one API as “legacy” and frames the other as a “long-term direction”; update the document to describe only the currently supported surfaces (e.g., the two forms of the SQL contract builder: defineContract and the object-literal DSL using model('User', { fields, relations }).attributes(...).sql(...)) without rollout/migration language, remove or relocate any roadmap/transition phrasing (including the text around lines referencing the same issue such as the content at locations 248-249) to a separate design/roadmap doc, and ensure the README concisely documents usage and examples for the supported APIs only.

test/integration/test/contract-builder.types.test-d.ts (1)
106-190: Keep this `.test-d.ts` case type-only instead of building the contract at runtime.

This block is asserting refined token/ref preservation through `defineContract(...)`, which means the test is going through the builder’s widened runtime return type instead of a stable type-only fixture. That can hide exactly the regression this file is supposed to catch. Please move this scenario to a `.test.ts` file or rewrite it around a type-only fixture/emitted contract type.

Based on learnings, in Prisma-next, for type-level tests (files ending in `.test-d.ts`), avoid using runtime builders like `defineContract` since they can widen types during type-level evaluation.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/integration/test/contract-builder.types.test-d.ts` around lines 106 - 190, This test file is type-only (.test-d.ts) but uses runtime builders (defineContract, validateContract, createTestContext, createStubAdapter, schema, sql) which widens types and defeats the purpose of the test; replace the runtime contract construction with a purely type-level fixture (e.g., declare a type/const type-only shape that mirrors the intended contract and use that type in ResultType/type assertions) or move the scenario into a runtime test (.test.ts) if you need actual builder behavior—in short, stop calling defineContract/validateContract/createTestContext in this .test-d.ts and instead reference a static emitted contract type (or move the code to a .test.ts) so the refined token/ref preservation assertions remain type-only.

test/integration/test/contract-builder.test.ts (1)
276-340: Split the refined-surface coverage into its own test file.

`test/integration/test/contract-builder.test.ts` is already well past the 500-line cap, and this adds another distinct concern to an oversized integration file. Moving this case into something like `contract-builder.refined.test.ts` would keep failures easier to localize and align with the repo’s test-file limits.

As per coding guidelines, `**/*.test.ts`: Keep test files under 500 lines to maintain readability and navigability. Split test files when they exceed 500 lines or contain multiple distinct concerns that can be logically separated.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/integration/test/contract-builder.test.ts` around lines 276 - 340, Extract the "refined object contract works with schema() and sql()" it-block into a new test file (e.g., contract-builder.refined.test.ts): copy the entire it(...) including setup of User, Post, contract (defineContract), adapter/context creation (createStubAdapter, createTestContext), and assertions using schema and sql; update imports at the top of the new file to include model, field, rel, int4Column, textColumn, timestamptzColumn, postgresPack, defineContract, createStubAdapter, createTestContext, schema, sql, SelectAst, ResultType, and expectTypeOf; remove the copied it-block from contract-builder.test.ts so the original file stays under 500 lines; run tests to ensure the new file is detected by the test runner and that no shared test fixtures are left behind.

packages/2-sql/2-authoring/contract-ts/src/refined-option-a.ts (1)
372-462: Use `ifDefined()` for these conditional properties.

This helper block repeats the inline `...(cond ? { ... } : {})` pattern for `name`, `using`, `config`, and FK options. The repo already standardizes on `ifDefined()` for this case, so it would be good to switch these helpers over while the DSL surface is still settling. As per coding guidelines, "Use `ifDefined()` from `@prisma-next/utils/defined` for conditional object spreads instead of inline conditional spread patterns."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/2-sql/2-authoring/contract-ts/src/refined-option-a.ts` around lines 372 - 462, Replace the inline conditional spread patterns in the constraint helper factories with the ifDefined() helper: in id, unique, index and foreignKey (functions named id, unique, index, foreignKey) remove occurrences of ...(options?.name ? { name: options.name } : {}) and the other inline spreads for using, config, onDelete, onUpdate, constraint, index and instead use ifDefined(...) from `@prisma-next/utils/defined` to conditionally add those properties (e.g., pass the option value and a mapper to produce the property object); keep normalizeFieldRefInput and normalizeTargetFieldRefInput usage intact and only change the conditional spreads to ifDefined calls.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@packages/2-sql/2-authoring/contract-ts/src/contract-builder.ts`:
- Around line 1186-1192: The code calls resolveModelIdConstraint(...) then
unconditionally passes the fields to next.primaryKey(...), which can produce
nullable PK columns if any participating model fields are optional; update the
logic in contract-builder.ts to validate the resolved idConstraint.fields
against the model field metadata (using spec.fields or whatever field
descriptors are available on spec) before calling primaryKey: detect any
optional/nullable fields referenced by idConstraint.fields and either (a) throw
or emit a clear error mentioning spec.modelName and the offending field names,
or (b) strip/convert them according to project rules so that primaryKey is only
invoked with non-nullable fields; ensure you reference resolveModelIdConstraint,
mapFieldNamesToColumnNames, primaryKey, spec.modelName and spec.fieldToColumn
when implementing the check so the fix is applied at the correct spot.
- Around line 1122-1139: The code builds tableName via applyNaming and
fieldToColumn for each model but never rejects duplicate storage names; add
checks before modelSpecs.set to validate (1) tableName is unique across
already-processed models in modelSpecs (reject with an Error referencing
modelName and the colliding tableName), and (2) the computed fieldToColumn
values are unique within this model (detect duplicate column names produced by
modelDefinition.stageOne.fields and throw an Error naming the model and
duplicate column). Use the existing symbols (applyNaming, tableName,
fieldToColumn, modelDefinition.stageOne.fields, modelSpecs) to locate the logic
and return/throw on collision so later lowering cannot silently overwrite
storage names.
In `@packages/2-sql/2-authoring/contract-ts/src/refined-option-a.ts`:
- Around line 981-983: The current isRefinedContractInput guard only checks for
a 'target' key; tighten it by verifying that value is an object with a non-null
object "target" and that target has the expected pack-ref shape (e.g.,
properties "kind", "familyId", and "targetId" exist and are strings); update
isRefinedContractInput to return true only when these checks pass so malformed
inputs like { target: 'postgres' } or incomplete contracts are rejected
immediately before dispatching.
In
`@packages/2-sql/2-authoring/contract-ts/test/contract-builder.refined-option-a.test.ts`:
- Around line 379-388: The test currently wraps ts-expect-error checks inside an
if (false) block which triggers lint errors; remove that constant-condition
block and move the failing assertions into a non-executed TypeScript-only
context: either place the three offending calls (rel.belongsTo(User, { from:
'userId', to: 'posts' }); rel.hasMany(Post, { by: 'posts' }); and the two valid
calls for context if needed) inside an uncalled helper function (e.g., function
_tsExpectErrors() { /* keep `@ts-expect-error` lines here */ } ) or move them into
a companion .test-d.ts file so the `@ts-expect-error` comments are checked by TS
without producing runtime/lintrule failures; target the calls to rel.belongsTo
and rel.hasMany and the types User and Post when relocating the lines.
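The guard-tightening asked for above can be sketched in isolation. The kind/familyId/targetId shape is taken from the review comment itself; the `PackRef` and `RefinedContractInput` names below are hypothetical illustrations, not the package's actual types:

```typescript
// Hypothetical shapes; the real types in refined-option-a.ts may differ.
interface PackRef {
  kind: string;
  familyId: string;
  targetId: string;
}

interface RefinedContractInput {
  target: PackRef;
}

// Tightened guard: rejects anything whose `target` is not a non-null object
// carrying the three expected string properties, so malformed inputs like
// { target: 'postgres' } fail before dispatch.
function isRefinedContractInput(value: unknown): value is RefinedContractInput {
  if (typeof value !== 'object' || value === null) return false;
  const target = (value as { target?: unknown }).target;
  if (typeof target !== 'object' || target === null) return false;
  const ref = target as Partial<PackRef>;
  return (
    typeof ref.kind === 'string' &&
    typeof ref.familyId === 'string' &&
    typeof ref.targetId === 'string'
  );
}
```

A `value is RefinedContractInput` predicate like this lets the dispatch site narrow the input without casts once the structural checks have passed.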
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yml
Review profile: CHILL
Plan: Pro
Run ID: 02933a6b-59b4-4e30-9940-57b835b604e0
⛔ Files ignored due to path filters (4)

- `projects/ts-contract-authoring-redesign/authoring-api-options-recommendation.md` is excluded by `!projects/**`
- `projects/ts-contract-authoring-redesign/plan.md` is excluded by `!projects/**`
- `projects/ts-contract-authoring-redesign/spec.md` is excluded by `!projects/**`
- `projects/ts-contract-authoring-redesign/typed-cross-model-refs-before-and-after.md` is excluded by `!projects/**`
📒 Files selected for processing (10)

- examples/prisma-next-demo/prisma/contract.ts
- examples/prisma-next-demo/src/prisma/contract.d.ts
- examples/prisma-next-demo/src/prisma/contract.json
- packages/2-sql/2-authoring/contract-ts/README.md
- packages/2-sql/2-authoring/contract-ts/src/contract-builder.ts
- packages/2-sql/2-authoring/contract-ts/src/exports/contract-builder.ts
- packages/2-sql/2-authoring/contract-ts/src/refined-option-a.ts
- packages/2-sql/2-authoring/contract-ts/test/contract-builder.refined-option-a.test.ts
- test/integration/test/contract-builder.test.ts
- test/integration/test/contract-builder.types.test-d.ts
Actionable comments posted: 2
🧹 Nitpick comments (2)
examples/prisma-next-demo/prisma-next.config.ts (1)
14-23: Extract the shared descriptor list once.

`authoringContributions` and `pslContributions` are built from the same pack set. Keeping two parallel arrays here makes it easy for the TS authoring path and the PSL interpretation path to drift.

♻️ Proposed refactor
```diff
 const extensionPacks = [pgvector];
-const authoringContributions = assembleAuthoringContributions([
-  postgres,
-  postgresAdapter,
-  ...extensionPacks,
-]);
-const pslContributions = assemblePslInterpretationContributions([
-  postgres,
-  postgresAdapter,
-  ...extensionPacks,
-]);
+const sqlControlDescriptors = [postgres, postgresAdapter, ...extensionPacks] as const;
+const authoringContributions = assembleAuthoringContributions(sqlControlDescriptors);
+const pslContributions = assemblePslInterpretationContributions(sqlControlDescriptors);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@examples/prisma-next-demo/prisma-next.config.ts` around lines 14 - 23, authoringContributions and pslContributions are built from the same pack set; extract the shared descriptor list into a single constant (e.g., const sharedPacks = [postgres, postgresAdapter, ...extensionPacks]) and then call assembleAuthoringContributions(sharedPacks) and assemblePslInterpretationContributions(sharedPacks) to avoid drift between the TS authoring path and the PSL interpretation path and to centralize the pack list used by both functions.

packages/1-framework/1-core/shared/contract/src/framework-components.ts (1)
541-553: Use `ifDefined()` for these optional fields.

This adds more inline conditional spreads. The repo already standardizes on `ifDefined()` for this pattern, which keeps object construction consistent and easier to scan. As per coding guidelines, "Use `ifDefined()` from `@prisma-next/utils/defined` for conditional object spreads instead of inline conditional spread patterns."

♻️ Proposed refactor
```diff
 return {
   descriptor: resolveAuthoringStorageTypeTemplate(descriptor.output, args),
   nullable: descriptor.output.nullable ?? false,
-  ...(descriptor.output.default === undefined
-    ? {}
-    : {
-        default: resolveAuthoringColumnDefaultTemplate(descriptor.output.default, args),
-      }),
-  ...(descriptor.output.executionDefault === undefined
-    ? {}
-    : {
-        executionDefault: resolveAuthoringTemplateValue(descriptor.output.executionDefault, args),
-      }),
+  ...ifDefined(
+    'default',
+    descriptor.output.default === undefined
+      ? undefined
+      : resolveAuthoringColumnDefaultTemplate(descriptor.output.default, args),
+  ),
+  ...ifDefined(
+    'executionDefault',
+    descriptor.output.executionDefault === undefined
+      ? undefined
+      : resolveAuthoringTemplateValue(descriptor.output.executionDefault, args),
+  ),
   id: descriptor.output.id ?? false,
   unique: descriptor.output.unique ?? false,
 };
```

```diff
+import { ifDefined } from '@prisma-next/utils/defined';
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/1-framework/1-core/shared/contract/src/framework-components.ts` around lines 541 - 553, Replace the inline conditional spreads with the repo-standard ifDefined() helper: import ifDefined from '@prisma-next/utils/defined' and change the object construction that currently uses the inline ternary spreads around default and executionDefault (and nullable) so they use ifDefined(...) to only include those keys when present; specifically update the return that calls resolveAuthoringStorageTypeTemplate, resolveAuthoringColumnDefaultTemplate, and resolveAuthoringTemplateValue so nullable, default, and executionDefault are passed through ifDefined() instead of the ?: spread pattern.
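The `ifDefined()` helper these comments standardize on is not shown in the diff. A minimal sketch consistent with the `ifDefined('default', value)` usage in the proposed refactor above (the actual implementation in `@prisma-next/utils/defined` may differ) looks like:

```typescript
// Assumed (key, value) signature: spreads to nothing when the value is
// undefined, and to a single-key object otherwise. The real helper in
// `@prisma-next/utils/defined` may differ.
function ifDefined<K extends string, V>(key: K, value: V | undefined): Partial<Record<K, V>> {
  return value === undefined ? {} : ({ [key]: value } as Record<K, V>);
}

// Replaces the inline `...(value === undefined ? {} : { key: value })` pattern:
const row = {
  id: 1,
  ...ifDefined('name', 'alice'), // key included: value is defined
  ...ifDefined('email', undefined as string | undefined), // key omitted entirely
};
```

The payoff over the inline ternary spread is that the key is only ever spelled once and the "absent vs undefined-valued" distinction is handled in one place.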
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@packages/1-framework/1-core/shared/contract/src/framework-components.ts`:
- Around line 468-477: The code is silently coercing bad resolutions (nativeType
and typeParams) into strings, hiding template bugs; update the resolution logic
in the function that uses resolveAuthoringTemplateValue so that when nativeType
is not a string you throw a descriptive error (include template.codecId or other
identifying info) instead of using String(nativeType), and likewise validate
that typeParams (resolved via resolveAuthoringTemplateValue) is either undefined
or a plain object/Record<string, unknown> and throw if it isn’t; apply the same
strict validation for the other occurrence around lines 500–504 to fail fast and
surface the invalid descriptor.
- Around line 346-351: The property-walking loop that reads template.path uses
the `in` operator (in the for...of loop over template.path) which inspects the
prototype chain; change the check inside the loop to use Object.hasOwn(value,
segment) (or Object.prototype.hasOwnProperty.call(value, segment) if older
runtimes are a concern) so only own properties are considered, keeping the same
early-fallback behavior (set value = undefined and break) when the property is
missing or value is not a plain object.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yml
Review profile: CHILL
Plan: Pro
Run ID: f2d813eb-96cc-4328-92fb-64e1b224b036
📒 Files selected for processing (3)

- examples/prisma-next-demo/prisma-next.config.ts
- packages/1-framework/1-core/shared/contract/src/exports/framework-components.ts
- packages/1-framework/1-core/shared/contract/src/framework-components.ts
🧹 Nitpick comments (1)
test/integration/test/authoring/parity/pgvector-named-type/contract.ts (1)
15-15: Optional: deduplicate `'Embedding1536'` into a single constant.

Line 15 and Line 20 repeat the same key; extracting a constant avoids accidental key drift.
♻️ Proposed refactor

```diff
+const EMBEDDING_1536_TYPE_NAME = 'Embedding1536' as const;
+
 const embedding1536Column = {
   codecId: 'pg/vector@1',
   nativeType: 'vector',
-  typeRef: 'Embedding1536',
+  typeRef: EMBEDDING_1536_TYPE_NAME,
 } as const;

 export const contract = defineContract<CodecTypes>()
   .target(postgresPack)
-  .storageType('Embedding1536', embedding1536Type)
+  .storageType(EMBEDDING_1536_TYPE_NAME, embedding1536Type)
```

Also applies to: 20-20
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/integration/test/authoring/parity/pgvector-named-type/contract.ts` at line 15, Extract the repeated literal 'Embedding1536' into a single constant and use that constant for both occurrences of typeRef to avoid drift; e.g., add a const EMBEDDING_TYPE = 'Embedding1536' near the top of the module and replace the inline 'Embedding1536' values in the objects where typeRef is set (the occurrences currently using typeRef: 'Embedding1536') with typeRef: EMBEDDING_TYPE.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yml
Review profile: CHILL
Plan: Pro
Run ID: 4cfe1d23-4a7b-4ebb-8e52-3a348de041e3
📒 Files selected for processing (2)

- test/integration/test/authoring/parity/pgvector-named-type/contract.ts
- test/integration/test/authoring/parity/pgvector-named-type/expected.contract.json
✅ Files skipped from review due to trivial changes (1)
- test/integration/test/authoring/parity/pgvector-named-type/expected.contract.json
71487d1 to 97dbf88
Actionable comments posted: 11
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
packages/2-sql/2-authoring/contract-psl/src/interpreter.ts (1)
1466-1585: ⚠️ Potential issue | 🔴 Critical

Replace the `builder.storageType()` call with `storageTypes` accumulation to match the pattern used in other branches.

The `dbNativeTypeAttribute` branch at line 1567 references an undefined `builder` variable that doesn't exist in the function scope. The pgvector and default branches both correctly accumulate storage types via `storageTypes[declaration.name] = {...}`, but this branch still uses the old builder pattern. This will cause a compilation error and prevent named `@db.*` types from being registered.

Fix

```diff
-builder = builder.storageType(declaration.name, {
-  codecId: descriptor.codecId,
-  nativeType: descriptor.nativeType,
-  typeParams: descriptor.typeParams ?? {},
-});
+storageTypes[declaration.name] = {
+  codecId: descriptor.codecId,
+  nativeType: descriptor.nativeType,
+  typeParams: descriptor.typeParams ?? {},
+};
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/2-sql/2-authoring/contract-psl/src/interpreter.ts` around lines 1466 - 1585, The db-native-type branch is still calling builder.storageType (undefined) which breaks compilation; instead, after resolveDbNativeTypeAttribute returns descriptor, set storageTypes[declaration.name] = { codecId: descriptor.codecId, nativeType: descriptor.nativeType, typeParams: descriptor.typeParams ?? {} } (matching the pgvector/default branches), keep namedTypeDescriptors.set(declaration.name, { ...descriptor, typeRef: declaration.name }), and remove the builder.storageType call so named `@db`.* types are registered via storageTypes.
🧹 Nitpick comments (6)
packages/2-sql/2-authoring/contract-ts/test/helpers/column-descriptor.ts (1)
9-13: Use `ifDefined()` for conditional object spread.

The inline conditional spread pattern should be replaced with `ifDefined()` for consistency. As per coding guidelines: "Use `ifDefined()` from `@prisma-next/utils/defined` for conditional object spreads instead of inline conditional spread patterns."

♻️ Suggested refactor

```diff
+import { ifDefined } from '@prisma-next/utils/defined';
 import type { ColumnTypeDescriptor } from '@prisma-next/contract-authoring';
```

```diff
 return {
   codecId,
   nativeType: derived,
-  ...(typeParams ? { typeParams } : {}),
+  ...ifDefined('typeParams', typeParams),
 };
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/2-sql/2-authoring/contract-ts/test/helpers/column-descriptor.ts` around lines 9 - 13, Replace the inline conditional spread for typeParams in the returned object with the ifDefined helper: import ifDefined from '@prisma-next/utils/defined' (or use existing import) and replace ...(typeParams ? { typeParams } : {}) with ...ifDefined({ typeParams }) in the return object within the function that constructs the column descriptor (the object containing codecId and nativeType: derived) so the conditional spread follows the project's ifDefined pattern.

packages/2-sql/3-tooling/family/test/mutation-default-assembly.test.ts (1)
356-384: Simplify prototype pollution test structure.

The try/finally wrapper around `expect().toThrow()` is overly defensive. If `assembleAuthoringContributions` correctly rejects the `__proto__` key, no pollution occurs. The cleanup is only needed if the test fails (i.e., the code doesn't throw and pollution happens). Consider restructuring for clarity:

✏️ Suggested refactor

```diff
 it('rejects dangerous authoring helper path segments', () => {
   const maliciousFieldNamespace = JSON.parse(`
     {
       "__proto__": {
         "polluted": {
           "kind": "fieldPreset",
           "output": { "codecId": "conflict/text@1", "nativeType": "text" }
         }
       }
     }
   `);
   const descriptor = createDescriptor('malicious', {
     authoring: {
       field: maliciousFieldNamespace,
     },
   });
-  try {
-    expect(() => assembleAuthoringContributions([descriptor])).toThrow(
-      /Invalid authoring field helper "__proto__"/,
-    );
-  } finally {
-    delete (Object.prototype as Record<string, unknown>)['polluted'];
-  }
+  expect(() => assembleAuthoringContributions([descriptor])).toThrow(
+    /Invalid authoring field helper "__proto__"/,
+  );
+
+  // Defensive cleanup in case the validation failed to reject the payload
+  delete (Object.prototype as Record<string, unknown>)['polluted'];
 });
```
Verify each finding against the current code and only fix it if needed. In `@packages/2-sql/3-tooling/family/test/mutation-default-assembly.test.ts` around lines 356 - 384, The test currently wraps expect(() => assembleAuthoringContributions([descriptor])).toThrow(...) in a try/finally to remove a possible Object.prototype['polluted'] key; simplify by removing the try/finally and calling expect(() => assembleAuthoringContributions([descriptor])).toThrow(/Invalid authoring field helper "__proto__"/) directly in the it block, and add a global cleanup (e.g., an afterEach) that deletes (Object.prototype as Record<string, unknown>)['polluted'] to guarantee teardown if the test ever fails; update references to assembleAuthoringContributions and createDescriptor accordingly.

packages/2-sql/2-authoring/contract-psl/test/ts-psl-parity.test.ts (1)
249-254: Document the terseness threshold rationale.The 1.6x line count threshold is a useful regression guard, but its rationale isn't documented. Consider adding a brief comment explaining why this threshold was chosen.
✏️ Add explanatory comment
```diff
 it('keeps the staged contract DSL within the terseness threshold for the same contract', () => {
   const pslLines = countSemanticLines(representativePslSchema);
   const tsLines = countSemanticLines(representativeTsAuthoring);
+  // The staged DSL should not exceed 1.6x the PSL line count for equivalent contracts.
+  // This threshold balances TypeScript's structural overhead with authoring ergonomics.
   expect(tsLines).toBeLessThanOrEqual(Math.ceil(pslLines * 1.6));
 });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/2-sql/2-authoring/contract-psl/test/ts-psl-parity.test.ts` around lines 249 - 254, the test comparing semantic line counts uses a 1.6x threshold but lacks an explanation; add a concise inline comment above the test (or above the expect line) explaining the rationale for 1.6 — e.g., that TypeScript authoring adds roughly up to 60% extra lines for typing, boilerplate, and readability versus the compact PSL DSL, and that 1.6 was chosen empirically as a permissive regression guard; reference the test name ("keeps the staged contract DSL within the terseness threshold for the same contract") and the identifiers countSemanticLines, representativePslSchema, and representativeTsAuthoring so readers know which values the comment pertains to.

packages/2-sql/2-authoring/contract-ts/README.md (1)
186-193: Consider varying sentence structure for readability.
Lines 186-193 contain multiple bullet points that start with the same pattern ("use..." / "prefer..."). While technically correct, varying the sentence structure would improve readability.
✏️ Example rewording
```diff
-- use the callback overload when you want target- and extension-composed `type.*` and pack-owned `field.*` helper autocomplete inside `contract.ts`
-- use named model tokens plus `User.refs.id` or `User.ref('id')` for cross-model foreign-key targets
+- the callback overload provides target- and extension-composed `type.*` and pack-owned `field.*` helper autocomplete inside `contract.ts`
+- named model tokens (`User.refs.id` or `User.ref('id')`) enable typed cross-model foreign-key targets
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/2-sql/2-authoring/contract-ts/README.md` around lines 186 - 193, reword the bulleted lines to vary sentence openings while preserving technical meaning and references: avoid repeating initial verbs like "use" and "prefer" by alternating phrasing (e.g., start some bullets with the behavior/result, a short contextual clause, or an example), keep exact identifiers intact (field.namedType('Role'), types.Role, PN_CONTRACT_TYPED_FALLBACK_AVAILABLE, field.namedType(types.Role), User.refs.id / User.ref('id'), constraints.ref('Model','field'), rel.belongsTo('User', ...), rel.hasMany('Post', ...), rel.manyToMany('Tag', { through: 'PostTag', ... }), .id(), .attributes(({ fields, constraints }) => ({ ... })), .unique(), constraints.unique([...])) and ensure each bullet still clearly states the preferred pattern, the fallback alternative, and the builder warning where applicable.

packages/2-sql/2-authoring/contract-ts/test/contract-builder.staged-contract-dsl.helpers.test.ts (1)
68-701: Split this helper suite before it grows further.
This file is already well past 500 lines and combines helper lowering, warning behavior, callback integration, and pack-collision/security cases. Please split it into smaller files by behavior area so failures stay local and the suite remains navigable.
As per coding guidelines, "Keep test files under 500 lines... Split test files when they exceed 500 lines... Group tests by logical boundaries."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/2-sql/2-authoring/contract-ts/test/contract-builder.staged-contract-dsl.helpers.test.ts` around lines 68 - 701, the test file describe('staged contract DSL helper vocabulary') is over 500 lines and should be split into smaller, focused test files; extract groups of related tests (helper lowering and id/gen behavior covering field helpers and model lowering using symbols like AuditEntry, ShortLink, and functions field.id.uuidv4 / field.nanoid; type and named-type behavior using types and model User; warning behaviors around process.emitWarning and helpers like field.namedType, rel.belongsTo, and constraints.ref; callback integration using defineContract callbacks; and extension/pack collision/security cases including conflictingPack and maliciousPack) into separate files, moving each logical group into its own test module, update the test suite to import any shared fixtures (postgresTargetPack, pgvectorExtensionPack, model, field, rel, defineContract, constraints helpers) so each new file is self-contained, and run the test runner to ensure no inter-file dependencies remain (fix any top-level shared state like spies by isolating or restoring them per-file).

packages/2-sql/2-authoring/contract-ts/test/contract-builder.staged-contract-dsl.test.ts (1)
20-582: Split this test file by concern.
It is already over the 500-line cap and mixes runtime lowering, naming/error cases, and token/type-surface assertions. Breaking it into focused `contract-builder.staged-contract-dsl.{basic,errors,types}.test.ts`-style files will make failures much easier to navigate. As per coding guidelines, "Keep test files under 500 lines... Split test files when they exceed 500 lines... Use descriptive file names with the pattern `{base}.{category}.test.ts`."
Verify each finding against the current code and only fix it if needed. In `@packages/2-sql/2-authoring/contract-ts/test/contract-builder.staged-contract-dsl.test.ts` around lines 20 - 582, The test file describe('staged contract DSL authoring surface', ...) is over 500 lines and mixes multiple concerns; split it into focused test files (e.g., contract-builder.staged-contract-dsl.basic.test.ts, .errors.test.ts, .types.test.ts). Move runtime lowering and storage-focused assertions (the first long it() that checks app_user/blog_post storage, execution mutators, and relations) into the "basic" file; move naming and validation/error cases (it blocks like "rejects duplicate named storage objects...", "rejects duplicate table/column names after applying naming defaults", and the reject/throw assertions) into the "errors" file; move token/type-surface and TypeScript-only checks (it blocks like "types local refs and named model tokens separately" and "requires a named model token before cross-model refs are available") into the "types" file. Preserve imports/constants used in the tests (e.g., model, field, rel, defineContract, postgresTargetPack, textColumn, int4Column, timestamptzColumn, constraints/cols usages and variables like User/Post/PostTag/Tag) and keep each new file under 500 lines with descriptive names following the pattern {base}.{category}.test.ts.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@packages/1-framework/1-core/shared/contract/src/framework-components.ts`:
- Around line 283-295: The AuthoringFieldPresetOutput currently exposes
executionDefault as a raw AuthoringTemplateValue and
instantiateAuthoringFieldPreset forwards the resolved value as unknown; update
instantiateAuthoringFieldPreset (and similar logic around lines noted) to
validate and coerce the resolved executionDefault into a well-formed
ExecutionMutationDefaultValue before returning or exposing it: add runtime
checks that the resolved object matches the expected shape (types/required
fields) and throw a clear error if validation fails (or return a typed
fallback), or alternatively tighten the descriptor by changing
AuthoringFieldPresetOutput.executionDefault to a stricter type that enforces the
ExecutionMutationDefaultValue contract so typos in pack metadata are caught
early; reference AuthoringFieldPresetOutput, executionDefault,
instantiateAuthoringFieldPreset, and ExecutionMutationDefaultValue when making
the changes.
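The fail-fast coercion this prompt asks for can be sketched standalone. `ExecutionMutationDefaultLike` is a deliberately simplified assumption about the shape of `ExecutionMutationDefaultValue` (the real contract type has more fields), and `coerceExecutionDefault` is a hypothetical helper name:

```typescript
// Simplified, assumed shape of ExecutionMutationDefaultValue for illustration only.
interface ExecutionMutationDefaultLike {
  kind: string;
  value?: unknown;
}

// Validate the resolved template value instead of forwarding it as `unknown`,
// so typos in pack metadata surface at assembly time rather than at runtime.
function coerceExecutionDefault(
  resolved: unknown,
  helperId: string,
): ExecutionMutationDefaultLike {
  if (
    typeof resolved !== 'object' ||
    resolved === null ||
    typeof (resolved as { kind?: unknown }).kind !== 'string'
  ) {
    throw new Error(
      `Authoring field preset "${helperId}" produced a malformed executionDefault`,
    );
  }
  return resolved as ExecutionMutationDefaultLike;
}
```

The same check could instead be expressed as a stricter descriptor type, trading runtime validation for compile-time enforcement.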
In
`@packages/1-framework/1-core/shared/contract/test/framework-components.authoring.test.ts`:
- Line 59: Add a short inline rationale comment immediately above the
intentional double-cast "as unknown as AuthoringTypeConstructorDescriptor"
explaining that the double-cast is used only in tests to coerce a partial/mock
object into the descriptor interface for test setup and is intentionally
bypassing strict typing; keep it concise (one sentence) and mention it is
test-only to satisfy the coding guideline.
In `@packages/2-sql/1-core/contract/test/validators.test.ts`:
- Around line 648-670: The fk factory is being called with the wrong argument
shape: replace the object-form call currently passing fk([...], { table: 'org',
columns: ['id'] }, {...}) with the positional signature expected by the fk
factory: fk(columns, refTable, refColumns, opts?). Update both duplicate
foreign-key entries in the test (within the storage/table definition) to use
fk(['orgId'], 'org', ['id'], { onDelete: 'cascade' }) so the
validateStorageSemantics and duplicate-FK assertion work correctly.
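Assuming a conventional positional factory `fk(columns, refTable, refColumns, opts?)` — the stand-in below is hypothetical and only mirrors the signature the prompt describes, not the project's actual helper — the corrected call shape looks like:

```typescript
// Hypothetical stand-in for the positional fk factory described above.
interface FkDescriptor {
  columns: readonly string[];
  refTable: string;
  refColumns: readonly string[];
  onDelete?: string;
}

function fk(
  columns: readonly string[],
  refTable: string,
  refColumns: readonly string[],
  opts?: { onDelete?: string },
): FkDescriptor {
  return {
    columns,
    refTable,
    refColumns,
    ...(opts?.onDelete !== undefined ? { onDelete: opts.onDelete } : {}),
  };
}

// Positional form instead of the object form the test was using.
const duplicateFk = fk(['orgId'], 'org', ['id'], { onDelete: 'cascade' });
```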
In `@packages/2-sql/2-authoring/contract-psl/src/interpreter.ts`:
- Around line 143-163: The function buildComposedExtensionPackRefs is
fabricating ExtensionPackRef objects with a hardcoded version ('0.0.1');
instead, thread the real composed pack refs through this code path so actual
versions are preserved: change the signature of buildComposedExtensionPackRefs
(or its caller) to accept a lookup of real ExtensionPackRef<'sql','postgres'>
objects (e.g., composedPackRefs: Record<string,
ExtensionPackRef<'sql','postgres'>> or similar) and return those entries for the
extensionIds rather than constructing new objects from target/extensionId;
update callers that invoke buildComposedExtensionPackRefs to pass the real
composed refs so emitted contracts reflect true pack metadata.
- Around line 1701-1764: The loop currently treats @@id as unsupported; add a
branch like the @@unique/@@index handling for when modelAttribute.name === 'id'
that: calls parseAttributeFieldList({ attribute: modelAttribute, ... }), then
mapFieldNamesToColumns(...), then parseConstraintMapArgument(...), and finally
record the result as the model primary key (e.g., set a primaryKey variable or
push to the existing primary-key container) using { columns: columnNames,
...ifDefined('name', constraintName) } so composite primary keys are lowered to
id.columns instead of emitting PSL_UNSUPPORTED_MODEL_ATTRIBUTE. Use the same
referenced helpers (parseAttributeFieldList, mapFieldNamesToColumns,
parseConstraintMapArgument, ifDefined, modelAttribute.span, model.name,
sourceId, diagnostics) to mirror the @@unique/@@index flow.
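Treating the parser helpers as black boxes, the missing `@@id` branch is structurally identical to the `@@unique`/`@@index` handling. A schematic sketch under that assumption (attribute and model shapes here are simplified stand-ins, not the interpreter's real types):

```typescript
// Schematic: dispatch model-level attributes, lowering `@@id` to a primary key
// instead of falling through to the unsupported-attribute diagnostic.
interface ModelAttribute {
  name: string;
  fields: readonly string[];
  mappedName?: string;
}

interface LoweredModel {
  primaryKey?: { columns: readonly string[]; name?: string };
  uniques: { columns: readonly string[] }[];
  diagnostics: string[];
}

function lowerModelAttributes(attributes: readonly ModelAttribute[]): LoweredModel {
  const model: LoweredModel = { uniques: [], diagnostics: [] };
  for (const attribute of attributes) {
    if (attribute.name === 'unique') {
      model.uniques.push({ columns: attribute.fields });
    } else if (attribute.name === 'id') {
      // The branch this prompt asks for: record a composite primary key,
      // carrying the mapped constraint name only when it is defined.
      model.primaryKey = {
        columns: attribute.fields,
        ...(attribute.mappedName !== undefined ? { name: attribute.mappedName } : {}),
      };
    } else {
      model.diagnostics.push(`PSL_UNSUPPORTED_MODEL_ATTRIBUTE: ${attribute.name}`);
    }
  }
  return model;
}
```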
In `@packages/2-sql/2-authoring/contract-ts/src/contract-builder.ts`:
- Around line 1285-1395: The resolveSemanticRelationNode function currently
builds join column arrays without checking that the left/right field lists have
matching arity; add explicit validation in resolveSemanticRelationNode before
calling mapFieldNamesToColumnNames: for belongsTo compare
normalizeRelationFieldNames(relation.from).length vs
normalizeRelationFieldNames(relation.to).length; for hasMany/hasOne compare
resolveRelationAnchorFields(currentSpec).length vs
normalizeRelationFieldNames(relation.by).length; and for the through (N:M) path
validate that resolveRelationAnchorFields(currentSpec).length,
normalizeRelationFieldNames(relation.from).length and
normalizeRelationFieldNames(relation.to).length are consistent as appropriate;
if counts mismatch throw a clear Error mentioning
`${currentSpec.modelName}.${relationName}` so the caller can fail fast instead
of producing mismatched parentColumns/childColumns.
- Around line 64-83: Missing import: add ForeignKeyConstraint to the staged DSL
import list so the symbol used later (ForeignKeyConstraint) is resolved; update
the import block that currently pulls types like StagedContractInput,
StagedModelBuilder, RelationState, etc. from './staged-contract-dsl' to also
include ForeignKeyConstraint, then run TS build to confirm the missing
identifier error for ForeignKeyConstraint is resolved.
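The arity check described above can be isolated into a small guard. `assertMatchingArity` is a hypothetical helper name, and the relation-label format is illustrative only:

```typescript
// Fail fast when the two sides of a join declare different numbers of fields,
// instead of silently producing mismatched parentColumns/childColumns arrays.
function assertMatchingArity(
  relationLabel: string,
  leftFields: readonly string[],
  rightFields: readonly string[],
): void {
  if (leftFields.length !== rightFields.length) {
    throw new Error(
      `Relation ${relationLabel}: ${leftFields.length} local field(s) cannot join ` +
        `${rightFields.length} target field(s)`,
    );
  }
}

// e.g. before mapping belongsTo join columns:
assertMatchingArity('Post.author', ['authorId'], ['id']); // matching arity, no throw
```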
In `@packages/2-sql/2-authoring/contract-ts/src/semantic-contract.ts`:
- Around line 221-234: Loop over model.fields currently prefers executionDefault
and silently drops field.default; update the logic to fail-fast when both are
present: inside the loop that iterates model.fields (the SqlSemanticFieldNode
entries), detect when field.executionDefault and field.default are both truthy
and throw a clear validation error (or return a rejected Result) referencing the
field (e.g., columnName) instead of falling through to next.generated; otherwise
keep the existing branches that call next.generated(...) for executionDefault
and next.column(...) for the normal case. Ensure the error message names the
offending field and the conflicting properties.
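The fail-fast rule reduces to a per-field guard. The field shape below is a simplified assumption about `SqlSemanticFieldNode`, and `assertSingleDefault` is a hypothetical name:

```typescript
// Simplified field node: only the two conflicting properties matter here.
interface SemanticFieldLike {
  columnName: string;
  default?: unknown;
  executionDefault?: unknown;
}

// Reject the ambiguity instead of silently preferring executionDefault.
function assertSingleDefault(modelName: string, field: SemanticFieldLike): void {
  if (field.default !== undefined && field.executionDefault !== undefined) {
    throw new Error(
      `Field "${modelName}.${field.columnName}" cannot define both default and executionDefault`,
    );
  }
}
```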
In `@packages/2-sql/2-authoring/contract-ts/src/staged-contract-dsl.ts`:
- Around line 179-182: The concrete builder classes (ScalarFieldBuilder,
RelationBuilder, StagedModelBuilder) should be made private implementation
details rather than exported types: remove the export modifiers from these
classes and ensure the public API only exposes the factory functions and
interfaces (e.g., field, rel, model and any public interfaces/types they
return). Update any external exports or typings to reference the
factory-returned interfaces instead of the concrete classes so callers use the
factories (field/rel/model) and interfaces, not the class constructors or
identities; apply the same change to the other builder class declarations
referenced in the file (the other Scalar/Relation/StagedModel class declarations
noted in the comment).
- Around line 1519-1542: The runtime applyNaming function's uppercase boundary
logic currently treats digits and '_' as "lower" which diverges from the
compile-time ApplyNamingType/SnakeCase logic; update applyNaming (in
staged-contract-dsl.ts) so the "lower" check only returns true for ASCII letters
a–z (e.g. use an isAsciiLower check like char >= 'a' && char <= 'z' or a /[a-z]/
test) when computing prevIsLower and nextIsLower, so names like field1Id become
field1id at runtime to match the inferred types in contract-builder.ts.
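The runtime/compile-time divergence is easiest to see in a standalone conversion sketch. This assumes the naming default is lower_snake_case; only ASCII a-z counts as "lower" when deciding where to insert underscores, so digits never create a boundary:

```typescript
const isAsciiLower = (ch: string): boolean => ch >= 'a' && ch <= 'z';
const isAsciiUpper = (ch: string): boolean => ch >= 'A' && ch <= 'Z';

function toSnakeCase(name: string): string {
  let out = '';
  for (let i = 0; i < name.length; i += 1) {
    const ch = name.charAt(i);
    if (isAsciiUpper(ch)) {
      // Underscore only at a genuine lower->upper boundary; digits and '_'
      // deliberately do not count as "lower", matching the type-level SnakeCase.
      if (i > 0 && isAsciiLower(name.charAt(i - 1))) {
        out += '_';
      }
      out += ch.toLowerCase();
    } else {
      out += ch;
    }
  }
  return out;
}
```

With digits treated as "lower", `field1Id` would wrongly become `field1_id`; with the ASCII-letter check it stays `field1id`, matching the inferred type.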
In `@test/integration/test/contract-builder.types.test-d.ts`:
- Around line 134-218: The test is using runtime builders (defineContract,
validateContract, createStubAdapter, createTestContext, schema) inside a
.test-d.ts type-only file which widens types and hides DSL regressions; replace
those runtime calls with type-only fixtures or extracted types: remove
defineContract/validateContract/createTestContext/createStubAdapter/schema/sql
usage and instead import or declare a pre-validated contract/type alias (e.g. a
fixture like ValidatedContractFixture or a type-only alias such as type
Validated = typeof validatedFixture) and use that for the expectTypeOf checks
and ResultType assertions so the test remains purely type-level and does not
build runtime contracts. Ensure references to symbols used in assertions
(validated.storage.tables, validated.models.User, User.refs, Post,
rel.belongsTo/hasMany checks) point to the type-only fixture or extracted types
rather than to values produced by runtime builders.
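The type-only style the prompt describes can be sketched without any runtime builders: a hand-written fixture type plus an `Equal`/`Expect` pair. This is a common expect-type pattern; the names below are illustrative, and the real tests use `expectTypeOf`:

```typescript
// Hand-written, pre-validated contract shape used purely at the type level.
type ValidatedContractFixture = {
  storage: { tables: { app_user: { columns: { id: string; email: string } } } };
};

// Minimal expect-type machinery: Expect<T> only compiles when T is exactly `true`.
type Equal<A, B> = (<X>() => X extends A ? 1 : 2) extends <X>() => X extends B ? 1 : 2
  ? true
  : false;
type Expect<T extends true> = T;

// This assertion fails at compile time if the fixture's inferred types drift.
type IdIsString = Expect<
  Equal<ValidatedContractFixture['storage']['tables']['app_user']['columns']['id'], string>
>;

// Runtime no-op so the module is also executable under a test runner.
const typeChecksPass: IdIsString = true;
```

Because nothing is constructed at runtime, there is no opportunity for `defineContract` to widen the types under test.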
---
Outside diff comments:
In `@packages/2-sql/2-authoring/contract-psl/src/interpreter.ts`:
- Around line 1466-1585: The db-native-type branch is still calling
builder.storageType (undefined) which breaks compilation; instead, after
resolveDbNativeTypeAttribute returns descriptor, set
storageTypes[declaration.name] = { codecId: descriptor.codecId, nativeType:
descriptor.nativeType, typeParams: descriptor.typeParams ?? {} } (matching the
pgvector/default branches), keep namedTypeDescriptors.set(declaration.name, {
...descriptor, typeRef: declaration.name }), and remove the builder.storageType
call so named `@db`.* types are registered via storageTypes.
---
Nitpick comments:
In `@packages/2-sql/2-authoring/contract-ts/test/helpers/column-descriptor.ts`:
- Around line 9-13: Replace the inline conditional spread for typeParams in the
returned object with the ifDefined helper: import ifDefined from
'@prisma-next/utils/defined' (or use existing import) and replace ...(typeParams
? { typeParams } : {}) with ...ifDefined({ typeParams }) in the return object
within the function that constructs the column descriptor (the object containing
codecId and nativeType: derived) so the conditional spread follows the project's
ifDefined pattern.
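A minimal sketch of the `ifDefined` spread pattern referenced here, under the assumption that the helper's job is simply to drop `undefined`-valued keys (the real `@prisma-next/utils` helper may differ in signature):

```typescript
// Spread-friendly helper: keys whose values are undefined are omitted entirely,
// so optional descriptor fields never appear as `{ typeParams: undefined }`.
function ifDefined<T extends Record<string, unknown>>(partial: T): Partial<T> {
  const out: Record<string, unknown> = {};
  for (const [key, value] of Object.entries(partial)) {
    if (value !== undefined) {
      out[key] = value;
    }
  }
  return out as Partial<T>;
}

const withParams = { codecId: 'pg/text@1', ...ifDefined({ typeParams: { length: 32 } }) };
const withoutParams = { codecId: 'pg/text@1', ...ifDefined({ typeParams: undefined }) };
```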
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yml
Review profile: CHILL
Plan: Pro
Run ID: ccd9c720-1dc9-40df-89cf-886cfd5f2a04
⛔ Files ignored due to path filters (10)
- projects/ts-contract-authoring-redesign/.gitignore is excluded by `!projects/**`
- projects/ts-contract-authoring-redesign/authoring-api-options-recommendation.md is excluded by `!projects/**`
- projects/ts-contract-authoring-redesign/contract-before-and-after.md is excluded by `!projects/**`
- projects/ts-contract-authoring-redesign/contract.after.ts is excluded by `!projects/**`
- projects/ts-contract-authoring-redesign/contract.before.ts is excluded by `!projects/**`
- projects/ts-contract-authoring-redesign/plan.md is excluded by `!projects/**`
- projects/ts-contract-authoring-redesign/plans/gap-closure-plan.md is excluded by `!projects/**`
- projects/ts-contract-authoring-redesign/spec.md is excluded by `!projects/**`
- projects/ts-contract-authoring-redesign/tsconfig.json is excluded by `!projects/**`
- projects/ts-contract-authoring-redesign/typed-cross-model-refs-before-and-after.md is excluded by `!projects/**`
📒 Files selected for processing (57)
- examples/prisma-next-demo/prisma-next.config.ts
- examples/prisma-next-demo/prisma/contract.ts
- examples/prisma-next-demo/src/prisma/contract.d.ts
- examples/prisma-next-demo/src/prisma/contract.json
- packages/1-framework/1-core/shared/contract/src/exports/framework-components.ts
- packages/1-framework/1-core/shared/contract/src/framework-components.ts
- packages/1-framework/1-core/shared/contract/test/framework-components.authoring.test.ts
- packages/2-sql/1-core/contract/package.json
- packages/2-sql/1-core/contract/src/authoring.ts
- packages/2-sql/1-core/contract/src/exports/authoring.ts
- packages/2-sql/1-core/contract/src/index.ts
- packages/2-sql/1-core/contract/src/validators.ts
- packages/2-sql/1-core/contract/test/validators.test.ts
- packages/2-sql/1-core/contract/tsdown.config.ts
- packages/2-sql/2-authoring/contract-psl/src/interpreter.ts
- packages/2-sql/2-authoring/contract-psl/src/provider.ts
- packages/2-sql/2-authoring/contract-psl/test/interpreter.test.ts
- packages/2-sql/2-authoring/contract-psl/test/provider.test.ts
- packages/2-sql/2-authoring/contract-psl/test/ts-psl-parity.test.ts
- packages/2-sql/2-authoring/contract-ts/README.md
- packages/2-sql/2-authoring/contract-ts/src/composed-authoring-helpers.ts
- packages/2-sql/2-authoring/contract-ts/src/contract-builder.ts
- packages/2-sql/2-authoring/contract-ts/src/contract.ts
- packages/2-sql/2-authoring/contract-ts/src/exports/contract-builder.ts
- packages/2-sql/2-authoring/contract-ts/src/semantic-contract.ts
- packages/2-sql/2-authoring/contract-ts/src/staged-contract-dsl.ts
- packages/2-sql/2-authoring/contract-ts/test/contract-builder.constraints.test.ts
- packages/2-sql/2-authoring/contract-ts/test/contract-builder.semantic-contract.test.ts
- packages/2-sql/2-authoring/contract-ts/test/contract-builder.staged-contract-dsl.helpers.test.ts
- packages/2-sql/2-authoring/contract-ts/test/contract-builder.staged-contract-dsl.parity.test.ts
- packages/2-sql/2-authoring/contract-ts/test/contract-builder.staged-contract-dsl.portability.test.ts
- packages/2-sql/2-authoring/contract-ts/test/contract-builder.staged-contract-dsl.test.ts
- packages/2-sql/2-authoring/contract-ts/test/contract.integration.test.ts
- packages/2-sql/2-authoring/contract-ts/test/helpers/column-descriptor.ts
- packages/2-sql/3-tooling/family/src/core/assembly.ts
- packages/2-sql/3-tooling/family/src/exports/control.ts
- packages/2-sql/3-tooling/family/test/mutation-default-assembly.test.ts
- packages/2-sql/4-lanes/relational-core/src/ast/sql-codecs.ts
- packages/2-sql/4-lanes/relational-core/test/ast/sql-codecs.test.ts
- packages/3-extensions/pgvector/src/core/authoring.ts
- packages/3-extensions/pgvector/src/core/descriptor-meta.ts
- packages/3-extensions/pgvector/src/exports/pack.ts
- packages/3-targets/3-targets/postgres/src/core/authoring.ts
- packages/3-targets/3-targets/postgres/src/core/descriptor-meta.ts
- packages/3-targets/3-targets/postgres/src/exports/pack.ts
- packages/3-targets/6-adapters/postgres/src/core/codec-ids.ts
- packages/3-targets/6-adapters/postgres/src/core/codecs.ts
- packages/3-targets/6-adapters/postgres/src/core/descriptor-meta.ts
- packages/3-targets/6-adapters/postgres/test/codecs.test.ts
- packages/3-targets/6-adapters/postgres/test/descriptor-meta.test.ts
- test/integration/test/authoring/parity/pgvector-named-type/contract.ts
- test/integration/test/authoring/parity/pgvector-named-type/expected.contract.json
- test/integration/test/authoring/psl.pgvector-dbinit.test.ts
- test/integration/test/authoring/templates/prisma-next.config.parity-psl.ts
- test/integration/test/contract-builder.test.ts
- test/integration/test/contract-builder.types.test-d.ts
- test/integration/test/fixtures/cli/cli-integration-test-app/fixtures/emit-command/prisma-next.config.parity-psl.ts
✅ Files skipped from review due to trivial changes (9)
- packages/2-sql/1-core/contract/package.json
- packages/2-sql/1-core/contract/src/exports/authoring.ts
- packages/2-sql/1-core/contract/src/index.ts
- packages/2-sql/1-core/contract/tsdown.config.ts
- packages/2-sql/2-authoring/contract-ts/test/contract.integration.test.ts
- test/integration/test/authoring/parity/pgvector-named-type/expected.contract.json
- packages/3-targets/6-adapters/postgres/test/descriptor-meta.test.ts
- examples/prisma-next-demo/src/prisma/contract.d.ts
- examples/prisma-next-demo/prisma/contract.ts
🚧 Files skipped from review as they are similar to previous changes (5)
- test/integration/test/authoring/parity/pgvector-named-type/contract.ts
- examples/prisma-next-demo/prisma-next.config.ts
- examples/prisma-next-demo/src/prisma/contract.json
- packages/2-sql/2-authoring/contract-ts/src/exports/contract-builder.ts
- packages/1-framework/1-core/shared/contract/src/exports/framework-components.ts
Resolved review threads:
- packages/1-framework/1-core/shared/contract/src/framework-components.ts (outdated)
- packages/1-framework/1-core/shared/contract/test/framework-components.authoring.test.ts
- packages/2-sql/2-authoring/contract-ts/src/semantic-contract.ts (outdated)
- packages/2-sql/2-authoring/contract-ts/src/staged-contract-dsl.ts (outdated)
Actionable comments posted: 2
♻️ Duplicate comments (7)
packages/2-sql/2-authoring/contract-psl/src/interpreter.ts (2)
154-174: ⚠️ Potential issue | 🟠 Major
Preserve the real composed extension-pack refs here.
This helper reconstructs every composed extension with `version: '0.0.1'`, so lowered contracts stop reflecting the actual composed pack metadata. Any version-sensitive hashing, diffing, or compatibility checks will drift from the real composition state unless the real `ExtensionPackRef`s are threaded through this path.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/2-sql/2-authoring/contract-psl/src/interpreter.ts` around lines 154 - 174, The helper buildComposedExtensionPackRefs is reconstructing composed pack refs with a hardcoded version '0.0.1' which loses real metadata; change it to accept and return the actual ExtensionPackRef objects (e.g. replace extensionIds: readonly string[] with extensionRefs: readonly ExtensionPackRef<'sql','postgres'>[]) and build the Record by mapping each extensionRef.id to the extensionRef itself (preserving extensionRef.version, familyId, targetId, etc.) instead of fabricating a new object, and keep the return type Record<string, ExtensionPackRef<'sql','postgres'>> | undefined and the early undefined when the input array is empty.
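A sketch of the threading fix, under the stated assumption that callers already hold the real refs. `ExtensionPackRefLike` is a simplified stand-in for `ExtensionPackRef<'sql','postgres'>`:

```typescript
// Simplified stand-in for ExtensionPackRef<'sql','postgres'>.
interface ExtensionPackRefLike {
  extensionId: string;
  version: string;
}

// Preserve the real refs (and their versions) rather than fabricating new
// objects with a hardcoded version string.
function buildComposedExtensionPackRefs(
  extensionRefs: readonly ExtensionPackRefLike[],
): Record<string, ExtensionPackRefLike> | undefined {
  if (extensionRefs.length === 0) {
    return undefined;
  }
  const out: Record<string, ExtensionPackRefLike> = {};
  for (const ref of extensionRefs) {
    out[ref.extensionId] = ref;
  }
  return out;
}
```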
1717-1780: ⚠️ Potential issue | 🟠 Major
Handle `@@id` before the generic unsupported-attribute fallback.
The loop only recognizes `@@unique` and `@@index`, so a valid PSL composite primary key still falls through to `PSL_UNSUPPORTED_MODEL_ATTRIBUTE`. That breaks lowering for models that use `@@id([...])` and also drops any mapped PK name on that path.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/2-sql/2-authoring/contract-psl/src/interpreter.ts` around lines 1717 - 1780, The loop currently handles @@unique and @@index but not @@id, causing composite primary-key attributes to hit the unsupported-attribute fallback; add a branch for when modelAttribute.name === 'id' (the @@id case) before the pgvector/unsupported checks that mirrors the unique/index handling: use parseAttributeFieldList({ attribute: modelAttribute, ... }) to get fieldNames, call mapFieldNamesToColumns({ modelName: model.name, fieldNames, mapping, ... }) to get columnNames, call parseConstraintMapArgument({ attribute: modelAttribute, ... }) to get a mapped constraint name, then record the primary key information (e.g., set or push a primaryKey/primaryKeyNode structure containing columns: columnNames and ...ifDefined('name', constraintName)) and continue; ensure you reference the same sourceId, diagnostics and span values used in the unique/index branches.test/integration/test/contract-builder.types.test-d.ts (1)
134-227: ⚠️ Potential issue | 🟠 Major
Keep these `.test-d.ts` additions type-only.
These new cases still instantiate contracts, adapters, schema contexts, and SQL plans. In a `.test-d.ts` file, that widens the very types you're trying to lock down and can turn staged-DSL regressions into false positives. Please move these assertions onto fixture-derived or extracted type aliases instead of runtime values here.
Based on learnings, "In Prisma-next, for type-level tests (files ending in .test-d.ts), avoid using runtime builders like defineContract since they can widen types during type-level evaluation."
Also applies to: 229-438
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/integration/test/contract-builder.types.test-d.ts` around lines 134 - 227, This test uses runtime builders (defineContract, createStubAdapter, createTestContext, validateContract, sql, schema) inside a .test-d.ts which widens types—replace those runtime constructions with type-only fixtures or extracted type aliases: remove instantiations of contract/validated/adapter/context/_plan and instead import or declare the equivalent compile-time types (e.g., a FixtureValidatedContract type, FixtureUserTable/RefinedUserColumns, and a SqlQueryPlan<Row> type alias) and move all expectTypeOf checks to operate on those type aliases (reference symbols: defineContract, validateContract, createStubAdapter, createTestContext, sql, schema, User, Post, _plan). Ensure no runtime values are created in this file; keep it purely type-level.packages/2-sql/2-authoring/contract-ts/src/semantic-contract.ts (1)
220-234: ⚠️ Potential issue | 🟠 Major

Reject conflicting `default` and `executionDefault` declarations.

This concern was raised in a previous review but appears unaddressed. When both `field.default` and `field.executionDefault` are present, the code silently prefers `executionDefault` and discards `default`. This silent precedence can lead to surprising behavior.

🐛 Proposed fix
```diff
 function appendSemanticFields(
   tableBuilder: SqlSemanticTableBuilderLike,
   model: SqlSemanticModelNode,
 ): SqlSemanticTableBuilderLike {
   let next = tableBuilder;
   for (const field of model.fields) {
+    if (field.default !== undefined && field.executionDefault !== undefined) {
+      throw new Error(
+        `Field "${model.modelName}.${field.fieldName}" cannot define both default and executionDefault`,
+      );
+    }
     if (field.executionDefault) {
       next = next.generated(field.columnName, {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/2-sql/2-authoring/contract-ts/src/semantic-contract.ts` around lines 220 - 234, The loop over model.fields silently prefers executionDefault over default; detect when both field.executionDefault and field.default are present and reject the model by throwing a clear error instead of dropping one silently. In the for loop that builds columns (the block using model.fields, field.executionDefault, next.generated and next.column), add a guard at the start of each iteration that checks if both field.executionDefault and field.default are truthy and throw a descriptive Error (including field.columnName and the model identifier) so callers know the conflicting declaration and can fix the schema.
packages/2-sql/2-authoring/contract-ts/src/staged-contract-dsl.ts (3)
182-182: 🛠️ Refactor suggestion | 🟠 Major

Keep builder classes as private implementation details.

`ScalarFieldBuilder` is exported as a public class. Per coding guidelines, the public API should expose only the factory functions (`field.column`, `field.generated`, etc.) and interfaces, keeping the class implementation private.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/2-sql/2-authoring/contract-ts/src/staged-contract-dsl.ts` at line 182, ScalarFieldBuilder is currently exported but should be an internal implementation; remove the public export from the class declaration (make the class non-exported/private) and keep the public API via the existing factory functions (e.g., field.column, field.generated) and any public interfaces/types; update any places that import ScalarFieldBuilder externally to instead depend on the public factory functions or exported interfaces (or re-export only needed types, not the class), and run TypeScript checks to ensure no external consumers require the concrete class.
629-629: 🛠️ Refactor suggestion | 🟠 Major

`RelationBuilder` class should also be private.

Same concern as `ScalarFieldBuilder`: the class is exported but should be an implementation detail with only the `rel.*` factory functions exposed publicly.
Verify each finding against the current code and only fix it if needed. In `@packages/2-sql/2-authoring/contract-ts/src/staged-contract-dsl.ts` at line 629, The RelationBuilder class is exported but should be an internal implementation detail like ScalarFieldBuilder; remove the public export from the RelationBuilder declaration (make it non-exported/private) so only the rel.* factory functions remain the public API, then update any external references or type usages that relied on the exported RelationBuilder name (replace them with the public factory return types or export a narrow type alias if absolutely needed) and run the build/tests to ensure no remaining external consumers depend on the exported class.
1081-1087: 🛠️ Refactor suggestion | 🟠 Major

`StagedModelBuilder` class should also be private.

Same concern as the other builder classes: this should be a private implementation detail with only the `model()` factory function exposed publicly.
Verify each finding against the current code and only fix it if needed. In `@packages/2-sql/2-authoring/contract-ts/src/staged-contract-dsl.ts` around lines 1081 - 1087, Remove the public export from the StagedModelBuilder class so it becomes an internal/private implementation detail: change "export class StagedModelBuilder" to "class StagedModelBuilder" and keep the public API surface as the existing model() factory function; update any re-exports or index files that currently re-export StagedModelBuilder so they only expose model(), and ensure any call sites import via the model() factory rather than referencing StagedModelBuilder directly (adjust type exports if you need to keep a public type alias from the factory instead of exporting the class).
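The factory-over-exported-class shape these three comments ask for can be sketched as below. The names (`ModelBuilder`, `StagedModelBuilderImpl`, the single `field()` method) are illustrative stand-ins, not the actual DSL, which exposes `model()`, `field.*`, and `rel.*` with far richer typing:

```typescript
// Public surface: an interface plus a factory function.
interface ModelBuilder {
  field(name: string): ModelBuilder;
  build(): { readonly fields: readonly string[] };
}

// Module-private implementation detail; never exported, so consumers can
// only obtain instances through the factory below.
class StagedModelBuilderImpl implements ModelBuilder {
  private readonly names: string[] = [];
  field(name: string): ModelBuilder {
    this.names.push(name);
    return this;
  }
  build(): { readonly fields: readonly string[] } {
    return { fields: [...this.names] };
  }
}

// The only public entry point.
function model(): ModelBuilder {
  return new StagedModelBuilderImpl();
}

const user = model().field('id').field('email').build();
// user.fields → ['id', 'email']
```

Consumers type against `ModelBuilder`, so the concrete class can be renamed or restructured without a breaking change to the public API.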
🧹 Nitpick comments (8)
packages/3-targets/3-targets/postgres/test/migrations/planner.behavior.test.ts (2)
263-291: Move this storage-type case out of `planner.behavior.test.ts`.

This file is already well past the 500-line cap, and this scenario fits naturally beside the other storage-type planner coverage in `planner.storage-types.test.ts` or a new split file.

As per coding guidelines, "Keep test files under 500 lines to maintain readability and navigability. If a test file exceeds this limit, it should be split into multiple files."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/3-targets/3-targets/postgres/test/migrations/planner.behavior.test.ts` around lines 263 - 291, The test case titled "uses codec hook temporary defaults for parameterized pgvector storage type refs" should be moved out of planner.behavior.test.ts into planner.storage-types.test.ts (or a new test file) because the current file exceeds the 500-line guideline; locate the test by the it(...) block and the use of planAddColumn and pgvectorDescriptor, cut that entire it-block and paste it alongside other storage-type planner tests, ensuring imports (planAddColumn, pgvectorDescriptor, qualifiedUserTable) are available or added to the target file so the test runs unchanged.
642-649: Use `ifDefined()` for the optional `types` spread.

The inline conditional spread reintroduces a pattern the repo is explicitly standardizing away in TS helpers and tests.

As per coding guidelines, "Use `ifDefined()` from `@prisma-next/utils/defined` for conditional object spreads instead of inline conditional spread patterns."

♻️ Proposed cleanup
```diff
+import { ifDefined } from '@prisma-next/utils/defined';
 ...
 const contract = createTestContract({
   storage: {
     tables: {
       ...(options?.extraContractTables ?? {}),
       user: userTable,
     },
-    ...(options?.extraStorageTypes ? { types: options.extraStorageTypes } : {}),
+    ...ifDefined('types', options?.extraStorageTypes),
   },
 });
```

🤖 Prompt for AI Agents
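For reference, an `ifDefined('key', value)` helper with the signature used in that cleanup could look like the sketch below. This is hypothetical: the actual `@prisma-next/utils/defined` implementation and signature may differ (the agent prompt for this comment even suggests a callback-style form):

```typescript
// Hypothetical sketch: yields { [key]: value } when value is defined,
// otherwise an empty object, so it can be spread into object literals.
function ifDefined<K extends string, V>(
  key: K,
  value: V | undefined,
): Partial<Record<K, V>> {
  return value === undefined ? {} : ({ [key]: value } as Record<K, V>);
}

// Conditional spread without the inline ternary pattern.
const withoutTypes = { tables: { user: 'user' }, ...ifDefined('types', undefined) };
const withTypes = {
  tables: { user: 'user' },
  ...ifDefined('types', { vector: 'pgvector' }),
};
// 'types' in withoutTypes → false; 'types' in withTypes → true
```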
Verify each finding against the current code and only fix it if needed. In `@packages/3-targets/3-targets/postgres/test/migrations/planner.behavior.test.ts` around lines 642 - 649, The object construction in createTestContract uses an inline conditional spread for storage types ("...(options?.extraStorageTypes ? { types: options.extraStorageTypes } : {})"); replace that pattern with the helper ifDefined from `@prisma-next/utils/defined` so the storage object spreads types only when defined: import and call ifDefined(options?.extraStorageTypes, v => ({ types: v })) (or equivalent usage) inside the storage object to guard the conditional spread while referencing createTestContract and the storage.types field.
packages/3-targets/3-targets/postgres/src/core/migrations/planner-reconciliation.ts (1)
194-200: Add a `typeRef` reconciliation regression for this `ALTER TYPE` path.

The new tests in this PR cover schema verification, `CREATE TABLE`, and `ADD COLUMN` temp-default flows, but not the destructive reconciliation branch wired here. A focused planner test for a `typeRef`-backed parameterized type would help catch `buildColumnTypeSql()` and `buildExpectedFormatType()` drifting out of sync.

Also applies to: 529-576
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/3-targets/3-targets/postgres/src/core/migrations/planner-reconciliation.ts` around lines 194 - 200, Add a regression test that exercises the ALTER TYPE reconciliation path for a parameterized type backed by a typeRef so the planner branch that returns buildAlterColumnTypeOperation is covered; specifically create a planner-level test that defines a typeRef-based parameterized column, triggers a destructive ALTER TYPE reconciliation (the code path in planner-reconciliation.ts that returns buildAlterColumnTypeOperation), and asserts the produced SQL by buildColumnTypeSql matches the expected format from buildExpectedFormatType. Target the functions/builders buildAlterColumnTypeOperation, buildColumnTypeSql, and buildExpectedFormatType and ensure the test fails if their outputs drift, thereby protecting the ALTER TYPE path for typeRef-backed parameterized types.packages/3-targets/3-targets/postgres/src/core/migrations/planner-sql.ts (1)
144-164: Silent fallback and code duplication with `verify-sql-schema.ts`.

Two concerns with this function:

1. Silent fallback: Same issue as in `verify-sql-schema.ts` — when `typeRef` points to a non-existent type, it silently falls back to the column's original metadata. This could produce incorrect SQL (e.g., `"vector"` instead of `vector(1536)` as shown in the relevant test context snippet).
2. Code duplication: This function is identical to `resolveContractColumnTypeMetadata` in `verify-sql-schema.ts`. Consider extracting to a shared utility in `@prisma-next/sql-contract` to follow DRY principles.

Suggested approach
Extract the shared resolution logic to `@prisma-next/sql-contract/types` or a dedicated utility module:

```typescript
// In @prisma-next/sql-contract (shared)
export function resolveColumnTypeMetadata(
  column: StorageColumn,
  storageTypes: Record<string, StorageTypeInstance>,
): Pick<StorageColumn, 'codecId' | 'nativeType' | 'typeParams'> {
  if (!column.typeRef) {
    return column;
  }
  const referencedType = storageTypes[column.typeRef];
  if (!referencedType) {
    throw new Error(
      `Column references storage type "${column.typeRef}" but it is not defined in storage.types.`,
    );
  }
  return {
    codecId: referencedType.codecId,
    nativeType: referencedType.nativeType,
    typeParams: referencedType.typeParams,
  };
}
```

Then import and use it in both `verify-sql-schema.ts` and `planner-sql.ts`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/3-targets/3-targets/postgres/src/core/migrations/planner-sql.ts` around lines 144 - 164, The current resolveColumnTypeMetadata in planner-sql.ts silently falls back when column.typeRef points to a missing storage type and duplicates logic in verify-sql-schema.ts (resolveContractColumnTypeMetadata); extract this logic into a shared utility (e.g., export function resolveColumnTypeMetadata in `@prisma-next/sql-contract/types`) that throws a descriptive Error when a referenced storage type is not found, then replace both the local resolveColumnTypeMetadata in planner-sql.ts and resolveContractColumnTypeMetadata in verify-sql-schema.ts with imports of the shared function and update their callers/imports accordingly so missing typeRefs fail loudly and duplication is removed.
test/integration/test/contract-builder.test.ts (1)
276-345: Move this refined-object integration case into its own test file.
`test/integration/test/contract-builder.test.ts` is already well past the 500-line limit, and this adds another distinct concern on top of the existing builder/runtime coverage. A focused file for the refined-object path will keep the suite easier to navigate and maintain.

As per coding guidelines, "Keep test files under 500 lines to maintain readability and navigability. If a test file exceeds this limit, it should be split into multiple files."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/integration/test/contract-builder.test.ts` around lines 276 - 345, Extract the "refined object contract works with schema() and sql()" test into a new test file (e.g., copy the it(...) block) so contract-builder.test.ts stays under 500 lines; move all dependent setup in that block (models User and Post definitions, defineContract(postgresPack, storageHash:'sha256:test-refined'), createStubAdapter, createTestContext, and usages of schema(...) and sql(...)) into the new file and update imports/exports as needed so the new file imports the same helpers and runs the test in isolation.
packages/2-sql/2-authoring/contract-psl/test/interpreter.test.ts (1)
924-1009: Break this authoring-contributions coverage out of `interpreter.test.ts`.

This file is already far beyond the 500-line cap, and the new enum/pgvector authoring path is a separate feature area. Moving it into a focused file will make the interpreter coverage much easier to navigate.
As per coding guidelines, "Keep test files under 500 lines to maintain readability and navigability. If a test file exceeds this limit, it should be split into multiple files."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/2-sql/2-authoring/contract-psl/test/interpreter.test.ts` around lines 924 - 1009, Move the new authoring-contributions test out of the oversized interpreter.test.ts into its own focused test file: take the it('instantiates enum and pgvector descriptors from shared authoring contributions', ...) block that calls interpretPslDocumentToSqlContractIR with authoringContributions (the enum and pgvector entries) and place it in a new test file dedicated to authoring contributions/extension-pack behavior; update imports so the test still references parsePslDocument and interpretPslDocumentToSqlContractIR and ensure the expectations against result.value.storage (types Role and Embedding1536 and table columns role/embedding) remain unchanged.
packages/2-sql/2-authoring/contract-ts/src/semantic-contract.ts (1)
229-234: Consider using `ifDefined()` for conditional spreads.

The inline conditional spread patterns could be simplified using `ifDefined()` from `@prisma-next/utils/defined` for consistency with codebase conventions.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/2-sql/2-authoring/contract-ts/src/semantic-contract.ts` around lines 229 - 234, Replace the inline conditional spreads in the next.column call with ifDefined from `@prisma-next/utils/defined` to follow project convention: import ifDefined and use it to conditionally add nullable and default properties based on field.nullable and field.default instead of the current ...(field.nullable ? ...) and ...(field.default ? ...) patterns so the options object passed to next.column remains concise and consistent with the codebase.
packages/2-sql/2-authoring/contract-ts/test/contract-builder.staged-contract-dsl.test.ts (1)
1-593: File exceeds 500-line guideline.

At 593 lines, this file exceeds the 500-line limit from coding guidelines. Consider splitting by logical boundaries:
- Core DSL tests (inline constraints, defaults, naming)
- Relation tests (belongsTo, hasMany, manyToMany)
- Type/token tests (refs, cross-model refs, token matching)
The test coverage itself is comprehensive and well-structured with meaningful assertions that verify behavior rather than restating input.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/2-sql/2-authoring/contract-ts/test/contract-builder.staged-contract-dsl.test.ts` around lines 1 - 593, This file is >500 lines; split the monolithic test into three smaller test files to follow the guideline: 1) "core DSL" tests (move cases that validate inline ids/uniques, attribute/id/unique overlays, naming defaults and compound ids — e.g. the it blocks titled "lowers inline ids...", "keeps field and belongsTo storage overrides...", "rejects field-local id and unique overlays...", "supports compound ids..." and tests referencing Membership, BlogPost, storageTables, contract.storage), 2) "relation" tests (move relation-focused cases: many-to-many, staged .relations, duplicate named storage objects, duplicate named storage overlays — tests referencing PostTag, Post, Tag, User, Post relations, constraints.foreignKey, constraints.index), and 3) "type/token" tests (move type/ref/token checks and anonymous model checks — tests referencing expectTypeOf, User.refs, Anonymous and the last "rejects mismatched model token keys..." case). For each new file, keep the same imports (defineContract, model, field, rel, columnDescriptor, postgresTargetPack, and type helpers), ensure shared fixtures/constants (postgresTargetPack, int4Column, textColumn, timestamptzColumn) are either duplicated or extracted into a small shared test-helper module, and update test-file names and imports accordingly so all it blocks retain identical assertions and setup.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@packages/2-sql/3-tooling/family/src/core/schema-verify/verify-sql-schema.ts`:
- Around line 963-981: The function resolveContractColumnTypeMetadata currently
silently returns the original column when contractColumn.typeRef exists but
storageTypes[contractColumn.typeRef] is missing; update
resolveContractColumnTypeMetadata to throw a clear error (including the missing
contractColumn.typeRef and identifying column info such as contractColumn.name
or similar field) instead of returning silently, so callers will fail fast on
invalid type references; ensure the thrown error message names the missing
typeRef and the column to aid debugging.
In `@test/integration/test/contract-builder.types.test-d.ts`:
- Around line 319-326: Replace deprecated toMatchTypeOf() usages with the
appropriate Vitest type-matchers: for the Post.buildSqlSpec() and
ShortLink.buildSqlSpec() assertions, change toMatchTypeOf() to toExtend() to
assert the returned type extends the union-shaped spec (keep the same expected
union shape); for the _testRow type check, change toMatchTypeOf() to
toEqualTypeOf() to assert an exact type match. Locate the three assertions by
the unique symbols Post.buildSqlSpec(), ShortLink.buildSqlSpec(), and _testRow
and update their matcher calls accordingly.
---
Duplicate comments:
In `@packages/2-sql/2-authoring/contract-psl/src/interpreter.ts`:
- Around line 154-174: The helper buildComposedExtensionPackRefs is
reconstructing composed pack refs with a hardcoded version '0.0.1' which loses
real metadata; change it to accept and return the actual ExtensionPackRef
objects (e.g. replace extensionIds: readonly string[] with extensionRefs:
readonly ExtensionPackRef<'sql','postgres'>[]) and build the Record by mapping
each extensionRef.id to the extensionRef itself (preserving
extensionRef.version, familyId, targetId, etc.) instead of fabricating a new
object, and keep the return type Record<string,
ExtensionPackRef<'sql','postgres'>> | undefined and the early undefined when the
input array is empty.
- Around line 1717-1780: The loop currently handles @@unique and @@index but not
@@id, causing composite primary-key attributes to hit the unsupported-attribute
fallback; add a branch for when modelAttribute.name === 'id' (the @@id case)
before the pgvector/unsupported checks that mirrors the unique/index handling:
use parseAttributeFieldList({ attribute: modelAttribute, ... }) to get
fieldNames, call mapFieldNamesToColumns({ modelName: model.name, fieldNames,
mapping, ... }) to get columnNames, call parseConstraintMapArgument({ attribute:
modelAttribute, ... }) to get a mapped constraint name, then record the primary
key information (e.g., set or push a primaryKey/primaryKeyNode structure
containing columns: columnNames and ...ifDefined('name', constraintName)) and
continue; ensure you reference the same sourceId, diagnostics and span values
used in the unique/index branches.
In `@packages/2-sql/2-authoring/contract-ts/src/semantic-contract.ts`:
- Around line 220-234: The loop over model.fields silently prefers
executionDefault over default; detect when both field.executionDefault and
field.default are present and reject the model by throwing a clear error instead
of dropping one silently. In the for loop that builds columns (the block using
model.fields, field.executionDefault, next.generated and next.column), add a
guard at the start of each iteration that checks if both field.executionDefault
and field.default are truthy and throw a descriptive Error (including
field.columnName and the model identifier) so callers know the conflicting
declaration and can fix the schema.
In `@packages/2-sql/2-authoring/contract-ts/src/staged-contract-dsl.ts`:
- Line 182: ScalarFieldBuilder is currently exported but should be an internal
implementation; remove the public export from the class declaration (make the
class non-exported/private) and keep the public API via the existing factory
functions (e.g., field.column, field.generated) and any public interfaces/types;
update any places that import ScalarFieldBuilder externally to instead depend on
the public factory functions or exported interfaces (or re-export only needed
types, not the class), and run TypeScript checks to ensure no external consumers
require the concrete class.
- Line 629: The RelationBuilder class is exported but should be an internal
implementation detail like ScalarFieldBuilder; remove the public export from the
RelationBuilder declaration (make it non-exported/private) so only the rel.*
factory functions remain the public API, then update any external references or
type usages that relied on the exported RelationBuilder name (replace them with
the public factory return types or export a narrow type alias if absolutely
needed) and run the build/tests to ensure no remaining external consumers depend
on the exported class.
- Around line 1081-1087: Remove the public export from the StagedModelBuilder
class so it becomes an internal/private implementation detail: change "export
class StagedModelBuilder" to "class StagedModelBuilder" and keep the public API
surface as the existing model() factory function; update any re-exports or index
files that currently re-export StagedModelBuilder so they only expose model(),
and ensure any call sites import via the model() factory rather than referencing
StagedModelBuilder directly (adjust type exports if you need to keep a public
type alias from the factory instead of exporting the class).
In `@test/integration/test/contract-builder.types.test-d.ts`:
- Around line 134-227: This test uses runtime builders (defineContract,
createStubAdapter, createTestContext, validateContract, sql, schema) inside a
.test-d.ts which widens types—replace those runtime constructions with type-only
fixtures or extracted type aliases: remove instantiations of
contract/validated/adapter/context/_plan and instead import or declare the
equivalent compile-time types (e.g., a FixtureValidatedContract type,
FixtureUserTable/RefinedUserColumns, and a SqlQueryPlan<Row> type alias) and
move all expectTypeOf checks to operate on those type aliases (reference
symbols: defineContract, validateContract, createStubAdapter, createTestContext,
sql, schema, User, Post, _plan). Ensure no runtime values are created in this
file; keep it purely type-level.
---
Nitpick comments:
In `@packages/2-sql/2-authoring/contract-psl/test/interpreter.test.ts`:
- Around line 924-1009: Move the new authoring-contributions test out of the
oversized interpreter.test.ts into its own focused test file: take the
it('instantiates enum and pgvector descriptors from shared authoring
contributions', ...) block that calls interpretPslDocumentToSqlContractIR with
authoringContributions (the enum and pgvector entries) and place it in a new
test file dedicated to authoring contributions/extension-pack behavior; update
imports so the test still references parsePslDocument and
interpretPslDocumentToSqlContractIR and ensure the expectations against
result.value.storage (types Role and Embedding1536 and table columns
role/embedding) remain unchanged.
In `@packages/2-sql/2-authoring/contract-ts/src/semantic-contract.ts`:
- Around line 229-234: Replace the inline conditional spreads in the next.column
call with ifDefined from `@prisma-next/utils/defined` to follow project
convention: import ifDefined and use it to conditionally add nullable and
default properties based on field.nullable and field.default instead of the
current ...(field.nullable ? ...) and ...(field.default ? ...) patterns so the
options object passed to next.column remains concise and consistent with the
codebase.
In
`@packages/2-sql/2-authoring/contract-ts/test/contract-builder.staged-contract-dsl.test.ts`:
- Around line 1-593: This file is >500 lines; split the monolithic test into
three smaller test files to follow the guideline: 1) "core DSL" tests (move
cases that validate inline ids/uniques, attribute/id/unique overlays, naming
defaults and compound ids — e.g. the it blocks titled "lowers inline ids...",
"keeps field and belongsTo storage overrides...", "rejects field-local id and
unique overlays...", "supports compound ids..." and tests referencing
Membership, BlogPost, storageTables, contract.storage), 2) "relation" tests
(move relation-focused cases: many-to-many, staged .relations, duplicate named
storage objects, duplicate named storage overlays — tests referencing PostTag,
Post, Tag, User, Post relations, constraints.foreignKey, constraints.index), and
3) "type/token" tests (move type/ref/token checks and anonymous model checks —
tests referencing expectTypeOf, User.refs, Anonymous and the last "rejects
mismatched model token keys..." case). For each new file, keep the same imports
(defineContract, model, field, rel, columnDescriptor, postgresTargetPack, and
type helpers), ensure shared fixtures/constants (postgresTargetPack, int4Column,
textColumn, timestamptzColumn) are either duplicated or extracted into a small
shared test-helper module, and update test-file names and imports accordingly so
all it blocks retain identical assertions and setup.
In
`@packages/3-targets/3-targets/postgres/src/core/migrations/planner-reconciliation.ts`:
- Around line 194-200: Add a regression test that exercises the ALTER TYPE
reconciliation path for a parameterized type backed by a typeRef so the planner
branch that returns buildAlterColumnTypeOperation is covered; specifically
create a planner-level test that defines a typeRef-based parameterized column,
triggers a destructive ALTER TYPE reconciliation (the code path in
planner-reconciliation.ts that returns buildAlterColumnTypeOperation), and
asserts the produced SQL by buildColumnTypeSql matches the expected format from
buildExpectedFormatType. Target the functions/builders
buildAlterColumnTypeOperation, buildColumnTypeSql, and buildExpectedFormatType
and ensure the test fails if their outputs drift, thereby protecting the ALTER
TYPE path for typeRef-backed parameterized types.
In `@packages/3-targets/3-targets/postgres/src/core/migrations/planner-sql.ts`:
- Around line 144-164: The current resolveColumnTypeMetadata in planner-sql.ts
silently falls back when column.typeRef points to a missing storage type and
duplicates logic in verify-sql-schema.ts (resolveContractColumnTypeMetadata);
extract this logic into a shared utility (e.g., export function
resolveColumnTypeMetadata in `@prisma-next/sql-contract/types`) that throws a
descriptive Error when a referenced storage type is not found, then replace both
the local resolveColumnTypeMetadata in planner-sql.ts and
resolveContractColumnTypeMetadata in verify-sql-schema.ts with imports of the
shared function and update their callers/imports accordingly so missing typeRefs
fail loudly and duplication is removed.
In
`@packages/3-targets/3-targets/postgres/test/migrations/planner.behavior.test.ts`:
- Around line 263-291: The test case titled "uses codec hook temporary defaults
for parameterized pgvector storage type refs" should be moved out of
planner.behavior.test.ts into planner.storage-types.test.ts (or a new test file)
because the current file exceeds the 500-line guideline; locate the test by the
it(...) block and the use of planAddColumn and pgvectorDescriptor, cut that
entire it-block and paste it alongside other storage-type planner tests,
ensuring imports (planAddColumn, pgvectorDescriptor, qualifiedUserTable) are
available or added to the target file so the test runs unchanged.
- Around line 642-649: The object construction in createTestContract uses an
inline conditional spread for storage types ("...(options?.extraStorageTypes ? {
types: options.extraStorageTypes } : {})"); replace that pattern with the helper
ifDefined from `@prisma-next/utils/defined` so the storage object spreads types
only when defined: import and call ifDefined(options?.extraStorageTypes, v => ({
types: v })) (or equivalent usage) inside the storage object to guard the
conditional spread while referencing createTestContract and the storage.types
field.
In `@test/integration/test/contract-builder.test.ts`:
- Around line 276-345: Extract the "refined object contract works with schema()
and sql()" test into a new test file (e.g., copy the it(...) block) so
contract-builder.test.ts stays under 500 lines; move all dependent setup in that
block (models User and Post definitions, defineContract(postgresPack,
storageHash:'sha256:test-refined'), createStubAdapter, createTestContext, and
usages of schema(...) and sql(...)) into the new file and update imports/exports
as needed so the new file imports the same helpers and runs the test in
isolation.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yml
Review profile: CHILL
Plan: Pro
Run ID: 952ac495-6cbc-49cf-b5d4-e7b674c8a86b
⛔ Files ignored due to path filters (1)
`pnpm-lock.yaml` is excluded by `!**/pnpm-lock.yaml`
📒 Files selected for processing (34)
- examples/prisma-next-demo/src/queries/get-user-posts-no-emit.ts
- examples/prisma-next-demo/src/queries/get-users-with-posts-no-emit.ts
- package.json
- packages/1-framework/3-tooling/cli/src/load-ts-contract.ts
- packages/1-framework/3-tooling/cli/test/commands/migration-apply.test.ts
- packages/2-sql/1-core/contract/test/validators.test.ts
- packages/2-sql/2-authoring/contract-psl/package.json
- packages/2-sql/2-authoring/contract-psl/src/interpreter.ts
- packages/2-sql/2-authoring/contract-psl/test/interpreter.test.ts
- packages/2-sql/2-authoring/contract-psl/test/provider.test.ts
- packages/2-sql/2-authoring/contract-psl/test/ts-psl-parity.test.ts
- packages/2-sql/2-authoring/contract-ts/src/contract-builder.ts
- packages/2-sql/2-authoring/contract-ts/src/semantic-contract.ts
- packages/2-sql/2-authoring/contract-ts/src/staged-contract-dsl.ts
- packages/2-sql/2-authoring/contract-ts/test/contract-builder.semantic-contract.test.ts
- packages/2-sql/2-authoring/contract-ts/test/contract-builder.staged-contract-dsl.parity.test.ts
- packages/2-sql/2-authoring/contract-ts/test/contract-builder.staged-contract-dsl.portability.test.ts
- packages/2-sql/2-authoring/contract-ts/test/contract-builder.staged-contract-dsl.test.ts
- packages/2-sql/3-tooling/family/src/core/schema-verify/verify-sql-schema.ts
- packages/2-sql/3-tooling/family/test/schema-verify.basic.test.ts
- packages/2-sql/3-tooling/family/test/schema-verify.helpers.ts
- packages/2-sql/4-lanes/relational-core/src/ast/sql-codecs.ts
- packages/2-sql/4-lanes/sql-builder-new/package.json
- packages/3-mongo-target/1-mongo-target/package.json
- packages/3-targets/3-targets/postgres/src/core/migrations/planner-identity-values.ts
- packages/3-targets/3-targets/postgres/src/core/migrations/planner-recipes.ts
- packages/3-targets/3-targets/postgres/src/core/migrations/planner-reconciliation.ts
- packages/3-targets/3-targets/postgres/src/core/migrations/planner-sql.ts
- packages/3-targets/3-targets/postgres/src/core/migrations/planner.ts
- packages/3-targets/3-targets/postgres/test/migrations/planner.behavior.test.ts
- packages/3-targets/3-targets/postgres/test/migrations/planner.storage-types.test.ts
- test/integration/test/contract-builder.test.ts
- test/integration/test/contract-builder.types.test-d.ts
- test/utils/src/timeouts.ts
✅ Files skipped from review due to trivial changes (6)
- packages/2-sql/4-lanes/sql-builder-new/package.json
- packages/2-sql/2-authoring/contract-psl/package.json
- test/utils/src/timeouts.ts
- packages/1-framework/3-tooling/cli/test/commands/migration-apply.test.ts
- packages/2-sql/1-core/contract/test/validators.test.ts
- package.json
🚧 Files skipped from review as they are similar to previous changes (4)
- packages/2-sql/4-lanes/relational-core/src/ast/sql-codecs.ts
- packages/2-sql/2-authoring/contract-ts/test/contract-builder.staged-contract-dsl.portability.test.ts
- packages/2-sql/2-authoring/contract-ts/test/contract-builder.staged-contract-dsl.parity.test.ts
- packages/2-sql/2-authoring/contract-ts/src/contract-builder.ts
packages/2-sql/3-tooling/family/src/core/schema-verify/verify-sql-schema.ts
Force-pushed from 03c9746 to 4749486
wmadden left a comment
Code Review — PR #261
PR: feat(contract-ts): [DRAFT] design new contract.ts
Spec: projects/ts-contract-authoring-redesign/spec.md
Branch: feat/contract-ts-revamp → main
Files changed: 105 files, +14,210 / −984
Summary
This PR introduces a staged TypeScript contract authoring DSL (model → fields/relations → .attributes() → .sql()), a shared semantic intermediate representation (SqlSemanticContractDefinition), pack-driven field/type composition, and a lowering pipeline that produces the same canonical contract IR. The implementation is architecturally sound with strong test coverage, but has four blocking issues (no-emit flow regression, demo contract string-ref usage, any type aliases in tests, field presets bypassing pack composition) and several non-blocking concerns around module size, duplicated types, naming, documentation, and potential type performance regression.
What Looks Solid
- Clean intermediate representation: `SqlSemanticContractDefinition` is a minimal, well-typed interface boundary between authoring and the existing builder. It cleanly decouples the staged surface from `SqlContractBuilder` internals and opens the door for alternative authoring surfaces.
- Pack-driven vocabulary: Field presets and type constructors are genuinely derived from pack descriptors rather than hand-maintained. The composition in `composed-authoring-helpers.ts` correctly merges target + extension namespaces with conflict detection and prototype-pollution guards.
- Thorough validation and error messages: The lowering pipeline validates identity conflicts, duplicate table/column mappings, missing FK targets, arity mismatches, and named constraint collisions — all with clear, actionable error messages.
- TS ↔ PSL parity proof: The `ts-psl-parity.test.ts` fixture is a strong design proof — it lowers equivalent TS and PSL contracts and asserts structural equality on the output.
- Fallback warning system: `staged-contract-warnings.ts` emits diagnostics when authors use string-based refs where typed model tokens are available. The batching threshold keeps noise manageable.
- Type-level design: The `BuiltStagedContract<Definition>` type computes storage tables, mappings, and column types from the definition's generic parameter, preserving full type inference for downstream `schema()`/`sql()` usage without manual annotation.
Blocking Issues
F01 — Demo contract uses string-based namedType refs for relations
Location: examples/prisma-next-demo/prisma/contract.ts lines 17, 27
Issue: The demo contract — the primary showcase for how authors should use the new surface — uses field.namedType('user_type') (string ref, line 17) and field.namedType('Embedding1536') (string ref, line 27). Meanwhile, both types are declared in the same contract's types block (lines 61–62). The spec's design principle states that string refs are a fallback for cross-contract references; within the same contract, typed refs should be the default to demonstrate the design intent.
Additionally, the relation between User and Post uses typed model tokens correctly (rel.hasMany(Post, ...), rel.belongsTo(User, ...)), but the namedType calls do not pass the type instance — creating an inconsistency within the same demo file.
Suggestion: Pass the type instance directly:
```ts
const types = {
  Embedding1536: vector(1536),
  user_type: enumType('user_type', ['admin', 'user']),
} as const;

const User = model('User', {
  fields: {
    // ...
    kind: field.namedType(types.user_type),
  },
});

const Post = model('Post', {
  fields: {
    // ...
    embedding: field.namedType(types.Embedding1536).optional(),
  },
});
```

Then reference `types` in `defineContract({ ..., types })`. This demonstrates the recommended typed path and silences the fallback warnings the surface itself emits.
F02 — any type alias in new test files
Location: packages/2-sql/2-authoring/contract-ts/test/contract-builder.staged-contract-dsl.test.ts line 14, contract-builder.staged-contract-dsl.parity.test.ts line 12, contract-builder.staged-contract-dsl.portability.test.ts line 12
Issue: Three new test files define type AnyModel = StagedModelBuilder<any, any, any, any, any> with a biome-ignore suppression. The repo rule states "Never use any type" and these are not negative type tests — they are widened aliases to break circular inference in forward-reference patterns. While the biome-ignore comment documents the intent, any leaks through AnyModel into test code, weakening type assertions. When a test builds a contract from an AnyModel, the resulting contract type is partially any-infected, which means expectTypeOf checks could silently pass even if inference regresses.
Suggestion: Replace with a widened-but-safe type that uses never or unknown bounds:
```ts
type AnyModel = StagedModelBuilder<
  string | undefined,
  Record<string, ScalarFieldBuilder>,
  Record<string, RelationBuilder<RelationState>>,
  ModelAttributesSpec | undefined,
  SqlStageSpec | undefined
>;
```

This matches the `RuntimeStagedModel` type already used in production code (staged-contract-lowering.ts line 34) and avoids `any` leakage while still providing the widening needed for forward references.
F13 — Field presets bypass pack composition and are misplaced in the layer hierarchy
Location: packages/2-sql/2-authoring/contract-ts/src/staged-contract-dsl.ts — field export; packages/2-sql/1-core/contract/src/authoring.ts — preset definitions
Issue: Two problems combine to violate the invariant that all field presets enter the system through framework composition.
Problem A — Presets hardcoded onto field: The field object exported from staged-contract-dsl.ts has portableSqlAuthoringFieldPresets spread directly at module initialization:
```ts
export const field = {
  column: columnField,
  generated: generatedField,
  namedType: namedTypeField,
  ...portableFieldHelpers, // ← hardcoded, bypasses pack composition
};
```

This means `field.text`, `field.uuid`, `field.createdAt`, etc. are available before any `defineContract()` call and outside any pack context. They skip the composition seam entirely. The bare `field` export should only contain structural helpers (`column`, `generated`, `namedType`); all presets should come through the composed helpers the factory callback receives.
Problem B — Presets live at the wrong layer: The concrete preset definitions live in packages/2-sql/1-core/contract/src/authoring.ts — layer 1 (core) of the SQL domain, importable by everything above. This package is doing double duty: providing type machinery (SqlStorage, StorageColumn, SqlContract, validation) and defining runtime configuration data (concrete field preset descriptors with specific codec IDs and native types). Because presets are at the bottom, they're gravitationally pulled into being directly imported by the authoring layer and re-exported by the Postgres target pack.
Suggestion: Extract the SQL family descriptor — including concrete preset definitions — to a high-layer SQL family package (e.g. packages/2-sql/9-family/). Layer 9 can import types from layers 1–8, but layers 1–8 cannot import from layer 9. This structurally prevents hardcoding: the authoring DSL (layer 2) physically cannot import presets. Presets enter the system exclusively through the family descriptor's authoring.field contribution, which framework composition merges with target and extension pack contributions. The core contract package (layer 1) returns to being pure type machinery. The Postgres target pack no longer re-exports family presets — the family descriptor already contributes them.
See wip/system-design-review-findings.md — Finding 2 for the full analysis and flow diagrams.
F16 — No-emit flow has regressed: requires emitted types and bracket access
Location: examples/prisma-next-demo/src/prisma-no-emit/context.ts — lines 6, 11, 21; examples/prisma-next-demo/src/queries/get-user-by-id-no-emit.ts — lines 5–9
Issue: Two symptoms combine to show the no-emit flow has degraded:
1. Imports emitted types: The no-emit context now imports `type { Contract } from '../prisma/contract.d'` and passes it as `validateContract<Contract>(contract)`. The entire point of no-emit is that authors don't need emitted artifacts — types should flow from the TS-authored contract directly. This import means the no-emit demo silently depends on the emit workflow having already run.
2. Bracket access with null guards: Query files changed from `sql.user.select(...)` to `const userTable = sql['user']; if (!userTable) throw ...`. The table names are no longer resolving as known keys on the `sql` builder, forcing bracket access and runtime null checks. This is a significant ergonomic regression.
Both symptoms suggest that defineContract() in the staged path isn't producing types precise enough for downstream schema()/sql() inference without the emitted .d.ts as a crutch.
Suggestion: The no-emit path should work with typeof contract alone. If the staged DSL's return type (BuiltStagedContract<Definition>) already computes precise literal types (which the type tests suggest it does), the no-emit context should use those types directly rather than importing emitted Contract. Investigate why sql requires bracket access — the contract type may not be propagating table name literals through createExecutionContext → sqlBuilder.
Non-Blocking Concerns
F03 — contract-builder.ts is 1,883 lines and growing
Location: packages/2-sql/2-authoring/contract-ts/src/contract-builder.ts — entire file
Issue: This single module contains the old chain builder (SqlContractBuilder, ~650 lines), the new staged type computation (BuiltStagedContract, StagedBuiltStorage, StagedBuiltMappings, ~550 lines of type-level machinery), the defineContract overloads (~100 lines), the SemanticContractBuilder protocol and buildSqlContractFromSemanticDefinition (~150 lines), and numerous supporting types. This makes the file hard to navigate, review, and maintain.
Suggestion: Extract the staged type computation (everything from StagedDefinitionModels through BuiltStagedContract) into a dedicated staged-contract-types.ts module. The SemanticContractBuilder protocol and buildSqlContractFromSemanticDefinition could live in the lowering module alongside buildStagedSemanticContractDefinition. This would leave contract-builder.ts focused on SqlContractBuilder and the defineContract overloads.
F04 — SemanticContractBuilder type erasure loses type-level safety
Location: packages/2-sql/2-authoring/contract-ts/src/contract-builder.ts lines 851–928, 938
Issue: buildSqlContractFromSemanticDefinition defines a SemanticContractBuilder protocol type that erases all generics from SqlContractBuilder, then casts new SqlContractBuilder() as unknown as SemanticContractBuilder (line 938). The function drives the builder imperatively without any of the type-level tracking that SqlContractBuilder normally provides. If SqlContractBuilder's runtime behavior changes (e.g., a method starts relying on generic-dependent runtime logic), this path silently breaks. The comment on lines 848–849 documents the intent, which is good, but the protocol type itself is ~80 lines of duplicated method signatures.
Suggestion: Consider defining the SemanticContractBuilder protocol as a formal interface exported from the builder module, so that changes to the builder's method signatures cause a compile error if the protocol drifts. Alternatively, if the staged path is intended to eventually replace the chain builder, converge the two lowering paths so the protocol becomes unnecessary.
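The drift-protection idea can be sketched with a plain assignability check. `BuilderProtocol` and `ConcreteBuilder` below are illustrative stand-ins, not the real `SemanticContractBuilder` or `SqlContractBuilder` signatures:

```typescript
// Hypothetical protocol interface: if the concrete builder's method signatures
// drift away from it, the cast-free assignment below fails to compile.
interface BuilderProtocol {
  model(name: string, table: string): BuilderProtocol;
  build(): { models: string[] };
}

class ConcreteBuilder {
  private readonly models: string[] = [];
  model(name: string, table: string): this {
    void table;
    this.models.push(name);
    return this;
  }
  build(): { models: string[] } {
    return { models: [...this.models] };
  }
}

// Compile-time drift check: no `as unknown as`, so any signature change in
// ConcreteBuilder that breaks the protocol surfaces as a type error here.
const asProtocol: BuilderProtocol = new ConcreteBuilder();
console.log(asProtocol.model('User', 'user').build().models);
```

Because the assignment uses no cast, the protocol cannot silently diverge from the builder the way an `as unknown as` erasure can.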
F05 — Duplicated FieldBuilderFromPresetDescriptor type
Location: packages/2-sql/2-authoring/contract-ts/src/staged-contract-dsl.ts lines 147–162, packages/2-sql/2-authoring/contract-ts/src/composed-authoring-helpers.ts lines 140–161
Issue: Both files define a FieldBuilderFromPresetDescriptor type with similar but not identical type parameters. The staged-contract-dsl.ts version takes (Descriptor, Name) and statically resolves codec IDs. The composed-authoring-helpers.ts version takes (Descriptor, Args, ConstraintName) and uses ResolveTemplateValue for arg-dependent resolution. The surrounding helper function types (FieldHelperFunctionWithNamedConstraint, FieldHelperFunctionWithoutNamedConstraint, FieldHelperFunction, FieldHelpersFromNamespace) are also duplicated between the two files with slight structural differences.
Suggestion: Extract the shared pattern into authoring-type-utils.ts. The composed version is strictly more general (it handles arg-dependent template resolution), so the staged-contract-dsl version could be defined as a specialization of it.
F06 — Several as unknown as casts lack justification comments
Location: packages/2-sql/2-authoring/contract-ts/src/contract-builder.ts — 12 occurrences
Issue: The repo rule requires as unknown as casts to "always be accompanied by a comment explaining why it is necessary." While some casts in this file have comments (lines 848–849 for the SemanticContractBuilder cast; line 936–937 for the builder erasure), several others do not:
| Line | Cast | Missing justification |
|---|---|---|
| 1275 | `as unknown as BuildStorageTable<...>` | No comment |
| 1309 | `modelState as unknown as { name; table; fields }` | No comment |
| 1329 | `(modelsPartial as unknown as Record<string, ModelDefinition>)` | No comment |
| 1346 | `modelState as unknown as { name; table; fields; relations }` | No comment |
| 1375 | `modelsPartial as unknown as BuildModels<Models>` | No comment |
| 1378 | `models as unknown as Record<string, ModelDefinition>` | No comment |
| 1409 | `as unknown as BuiltContract` | No comment |
| 1413 | `return contract as unknown as ReturnType<...>` | No comment |
| 1603–1606 | `tableBuilder as unknown as SqlTableBuilder<...> as unknown as TableBuilder<...>` | No comment (double cast) |
The staged-contract-dsl.ts file handles this better — lines 209–212 have a block comment explaining why chaining methods use as unknown as.
Suggestion: Add a brief justification comment for each cast in contract-builder.ts. The build() method casts (1275, 1309, 1329, etc.) appear to exist because the builder constructs objects imperatively from generic state but TypeScript can't narrow the resulting literal types through the spread operations — documenting this once as a block comment at the top of build() would cover most of them.
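The suggested pattern, a single justification comment covering a family of imperative-construction casts, can be sketched generically. The mapped type below is illustrative, not the builder's real state types:

```typescript
// Illustrative only: build a mapped-type result imperatively from generic state.
type ColumnMap<State extends Record<string, string>> = {
  [Key in keyof State]: { column: State[Key] };
};

function buildColumns<State extends Record<string, string>>(state: State): ColumnMap<State> {
  const out: Record<string, { column: string }> = {};
  for (const [key, column] of Object.entries(state)) {
    out[key] = { column };
  }
  // Cast rationale (documented once for the whole build step): the object is
  // assembled imperatively, so TypeScript only sees Record<string, ...> and
  // cannot recover the mapped type keyed by State, even though the runtime
  // shape matches it exactly.
  return out as unknown as ColumnMap<State>;
}

console.log(buildColumns({ id: 'user_id' }).id.column); // user_id
```

One block comment of this kind at the top of `build()` would cover most of the casts listed above.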
F07 — applyNaming lacks dedicated unit tests
Location: packages/2-sql/2-authoring/contract-ts/src/staged-contract-dsl.ts lines 1475–1498
Issue: applyNaming is an exported function that converts camelCase/PascalCase identifiers to snake_case. It handles edge cases like consecutive uppercase runs (HTTPRequestLog → http_request_log). The only tests are two inline assertions in staged-contract-dsl.runtime.test.ts lines 64–65 (HTTPRequestLog and UserProfile). There are no tests for:
- Leading uppercase (`URL` → `url`)
- All-uppercase (`HTTP` → `http`)
- Already-lowercase input
- Single-character input
- Empty string
- Mixed number/letter boundaries (`user2Profile`)
Suggestion: Add a focused describe('applyNaming') block in staged-contract-dsl.runtime.test.ts (or a dedicated test file) covering these edge cases. This function is load-bearing for the naming strategy feature (AC #3).
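A hypothetical reference implementation makes those edge cases concrete. `toSnakeCase` below is a sketch of the documented behavior (`HTTPRequestLog` → `http_request_log`), not the actual `applyNaming` source:

```typescript
// Sketch of camelCase/PascalCase → snake_case conversion (assumed behavior).
function toSnakeCase(input: string): string {
  return input
    // Split an uppercase run from a following capitalized word: HTTPRequest → HTTP_Request
    .replace(/([A-Z]+)([A-Z][a-z])/g, '$1_$2')
    // Split lowercase/digit-to-uppercase boundaries: userProfile → user_Profile
    .replace(/([a-z0-9])([A-Z])/g, '$1_$2')
    .toLowerCase();
}

console.log(toSnakeCase('HTTPRequestLog')); // http_request_log
console.log(toSnakeCase('URL')); // url
console.log(toSnakeCase('user2Profile')); // user2_profile
console.log(toSnakeCase('')); // (empty string)
```

Each of the listed edge cases reduces to one assertion against a function like this, which is why a dedicated `describe('applyNaming')` block would be cheap to write.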
F08 — PSL interpreter changes are large and inline
Location: packages/2-sql/2-authoring/contract-psl/src/interpreter.ts
Issue: The PSL interpreter was updated to preserve pack-provided metadata, constraint names, and named type refs to maintain parity with the new staged TS surface. These changes are interleaved with existing interpreter logic in a large file. The parity test proves the output is correct, but the interpreter's growing responsibilities suggest it could benefit from extraction of the semantic-mapping logic into a helper module.
Suggestion: Not blocking, but consider extracting the constraint-name and pack-metadata resolution into a dedicated helper in a follow-up, to keep the interpreter focused on PSL → IR parsing.
F09 — No ADR for the staged DSL design decision
Location: N/A — missing artifact
Issue: The staged contract DSL introduces a new semantic intermediate representation, a new authoring surface, and a composition model that extends the pack architecture from ADR 170. The spec captures the design direction, but the spec is a transient project artifact (will be deleted at close-out). The architectural decision — choosing staged DSL over fluent builder, introducing SqlSemanticContractDefinition as the lowering target, and the separation of semantic vs. SQL concerns — should be captured in a durable ADR.
Suggestion: Author an ADR under docs/adrs/ documenting: (1) the staged DSL design and why it was chosen over alternatives, (2) the SqlSemanticContractDefinition IR and its role, (3) the composition model for pack-driven vocabulary.
F14 — BuiltStagedContract leaks authoring-surface identity
Location: packages/2-sql/2-authoring/contract-ts/src/contract-builder.ts — BuiltStagedContract type; packages/2-sql/2-authoring/contract-ts/src/staged-contract-dsl.ts — isStagedContractInput export
Issue: defineContract() returns SqlContractBuilder<CodecTypes> | BuiltStagedContract<Definition>. The name BuiltStagedContract is a tautology — once built, it's no longer staged. "Staged" describes an internal authoring concern (staged evaluation of builders, lazy tokens, deferred naming); the consumer should not know or care which DSL produced the contract. The CLI (load-ts-contract.ts) confirms this: it treats both paths identically as ContractIR.
Additionally, isStagedContractInput is exported publicly but only used inside defineContract() itself. It's an implementation detail of overload resolution.
Suggestion: Rename BuiltStagedContract to something that describes the output (e.g. SqlContractResult<Definition>) rather than the process. Make isStagedContractInput internal (unexport it from the public entrypoint).
F15 — Contract representations are converging from opposite sides (deferred)
Location: packages/2-sql/2-authoring/contract-ts/src/semantic-contract.ts — SqlSemanticContractDefinition; packages/1-framework/1-core/shared/contract/src/domain-types.ts — DomainModel
Issue: Three parallel type hierarchies describe "a contract organized by models": SqlSemanticContractDefinition (authoring side, pre-IR), ContractBase + DomainModel (runtime side, post-IR), and ContractIR / SqlContract (the serialized middle). The authoring-side types were built working forward from user intent; the runtime-side types were built working backward from query needs. Both arrive at the same shape: models with typed fields, named relations, and family-specific storage detail.
The current branch introduces SqlSemanticContractDefinition as a new type hierarchy. This is acceptable as a stepping stone — it creates a clean seam for later convergence — but the long-term direction is to unify these into one model-first type. PSL ↔ TS parity being fixture-based rather than structural is a symptom of this fragmentation: if both surfaces lowered to the same canonical type, parity would be enforced by the type system.
No action on this branch. Deferred to the contract-domain-extraction project (Milestone 5: ContractIR alignment).
See wip/system-design-review-findings.md — Findings 3 & 4 for the full analysis.
F17 — Model ordering changed in emitted contract.d.ts
Location: examples/prisma-next-demo/src/prisma/contract.d.ts — lines 144+
Issue: The emitted .d.ts now lists User before Post, which is the opposite of the previous ordering. The JSON emitter normalizes model order, but the .d.ts emitter appears to reflect insertion order from the builder. If the staged DSL processes models in a different order than the old chain builder, the .d.ts output will shift — causing spurious diffs for authors who re-emit after migrating to the staged surface.
Suggestion: Investigate whether the .d.ts emitter normalizes model order. If not, it should match the JSON emitter's normalization to ensure deterministic output regardless of authoring order.
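One way to make the output deterministic is to sort model entries before emission, mirroring the normalization the JSON emitter is assumed to perform. `sortModelEntries` is a sketch, not the emitter's actual code:

```typescript
// Sketch: emit models in a stable, name-sorted order regardless of the
// insertion order the authoring surface happened to produce.
function sortModelEntries<Model>(models: Record<string, Model>): Array<[string, Model]> {
  return Object.entries(models).sort(([left], [right]) => left.localeCompare(right, 'en'));
}

const fromStagedDsl = { User: {}, Post: {} };
const fromChainBuilder = { Post: {}, User: {} };
// Both insertion orders emit identically once normalized:
console.log(sortModelEntries(fromStagedDsl).map(([name]) => name)); // [ 'Post', 'User' ]
console.log(sortModelEntries(fromChainBuilder).map(([name]) => name)); // [ 'Post', 'User' ]
```

Pinning the locale argument keeps the sort stable across host environments, which matters for reproducible `.d.ts` diffs.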
F18 — Demo uses N+1 query pattern instead of ORM
Location: examples/prisma-next-demo/src/queries/get-users-with-posts-no-emit.ts — entire file
Issue: This new demo file manually runs two separate SQL queries (one for users, one per-user for posts) and stitches results with Promise.all + .map(). This is a classic N+1 pattern that the ORM client (e.g. includeMany) should handle. As a demo file, this teaches the wrong pattern to authors. If the ORM surface isn't yet wired up in the no-emit path, the file should explain that limitation rather than silently demonstrating the anti-pattern.
Suggestion: Either use the ORM client if available, or add a comment explaining why the manual stitching is necessary in this demo and that includeMany is the intended approach.
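Independent of whether the ORM surface is wired up, the batched shape the demo could teach looks like this. The row types and grouping helper are illustrative, not the demo's actual API:

```typescript
// Sketch: one query per table plus in-memory grouping, instead of one posts
// query per user (N+1). In the real demo, users and posts would each come
// from a single SQL query (e.g. WHERE user_id IN (...)).
type User = { id: string; name: string };
type Post = { id: string; userId: string; title: string };

function groupPostsByUser(users: User[], posts: Post[]): Array<User & { posts: Post[] }> {
  const byUser = new Map<string, Post[]>();
  for (const post of posts) {
    const bucket = byUser.get(post.userId) ?? [];
    bucket.push(post);
    byUser.set(post.userId, bucket);
  }
  return users.map((user) => ({ ...user, posts: byUser.get(user.id) ?? [] }));
}

const users: User[] = [{ id: 'u1', name: 'Ada' }];
const posts: Post[] = [{ id: 'p1', userId: 'u1', title: 'Hello' }];
console.log(groupPostsByUser(users, posts)[0]?.posts.length); // 1
```

Two queries total instead of 1 + N, with the join done in memory; this is also roughly what an `includeMany`-style helper would do internally.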
F19 — Authoring types and functions should be extracted from framework-components.ts
Location: packages/1-framework/1-core/shared/contract/src/framework-components.ts — new types and functions added in this diff
Issue: This PR adds ~350 lines of authoring-specific types and runtime functions (AuthoringArgRef, AuthoringTemplateValue, AuthoringFieldPresetDescriptor, AuthoringTypeConstructorDescriptor, resolveAuthoringTemplateValue, validateAuthoringArgument, instantiateAuthoringFieldPreset, etc.) to framework-components.ts. These are conceptually distinct from the component descriptor and instance types that this module originally housed. The file now serves two purposes: defining the component framework (descriptors, instances, metadata) and defining the authoring contribution system (preset descriptors, template resolution, argument validation).
Suggestion: Extract the authoring types and functions into a dedicated authoring-descriptor-types.ts (or similar) within the same package. The component descriptor types can then import and re-export the AuthoringContributions type for the ComponentMetadata interface.
F20 — Doc comments removed from public framework component interfaces
Location: packages/1-framework/1-core/shared/contract/src/framework-components.ts — deleted JSDoc on ComponentDescriptor, FamilyDescriptor, TargetDescriptor, and others
Issue: This PR deletes multi-paragraph JSDoc from core framework interfaces (ComponentDescriptor, FamilyDescriptor, TargetDescriptor, AdapterDescriptor, DriverDescriptor, ExtensionPackDescriptor) and replaces them with one-line summaries or nothing. The removed documentation included template parameter descriptions, usage examples, and explanations of what each component type represents in the architecture. While the AGENTS.md rule "don't add comments if avoidable" applies to implementation comments, these are public API interfaces in the framework core that downstream consumers and contributors reference. The commit message says "Trim redundant JSDoc" but some removed content was genuinely useful context that the interface signatures alone don't convey.
Suggestion: Restore at minimum the template parameter docs and one-sentence purpose descriptions for each component descriptor interface. Examples and extended explanations can be trimmed if they're truly redundant with the architecture docs, but the interfaces themselves should remain self-documenting.
F21 — Test timeout increases may signal type performance regression
Location: test/utils/src/timeouts.ts — lines 3–6
Issue: typeScriptCompilation timeout increased from 8s to 12s (+50%) and default timeout from 100ms to 500ms (+400%). The spec's NFR explicitly states "authoring-time types should stay shallow enough to avoid significant TS server regressions." A 50% TS compilation timeout increase is a signal that the staged DSL's type-level machinery (particularly BuiltStagedContract<Definition> with its recursive mapped types) may be more expensive than the old builder's type inference. The default bump from 100ms to 500ms affects every test in the suite.
Suggestion: Investigate whether the timeout increases are caused by the new type-level machinery before making them permanent. Run tsc --diagnostics or tsc --generateTrace on the integration test suite to compare TS check times before and after the staged DSL changes. If the regression is confirmed, consider simplifying the deep mapped types in BuiltStagedContract.
F10 — Duplicated buildFieldPreset logic
Location: packages/2-sql/2-authoring/contract-ts/src/staged-contract-dsl.ts lines 428–457 (buildFieldPreset), packages/2-sql/2-authoring/contract-ts/src/composed-authoring-helpers.ts lines 334–356 (inline in createComposedFieldHelpers)
Issue: Both files instantiate a field preset from a descriptor and construct a ScalarFieldBuilder with the same pattern: call instantiateAuthoringFieldPreset, then spread the result into a ScalarFieldBuilder state object with conditional id/unique/default/executionDefault fields. The logic is nearly identical — the only difference is that staged-contract-dsl.ts extracts it as a named function while composed-authoring-helpers.ts inlines it in the createLeafHelper callback.
Suggestion: Extract a shared buildFieldBuilderFromPreset(descriptor, args, namedConstraintOptions) function (returning a ScalarFieldBuilder) and call it from both sites. This eliminates the duplication and ensures both paths stay in sync if the preset instantiation logic changes.
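The extraction the review proposes can be sketched as follows. The descriptor and preset shapes here are illustrative stand-ins, not the real `instantiateAuthoringFieldPreset` signature:

```typescript
// Hypothetical shared helper: both call sites would delegate here instead of
// duplicating the conditional-spread construction of the field state.
type PresetResult = { codecId: string; default?: unknown; executionDefault?: unknown };
type ScalarFieldState = PresetResult & { constraintName?: string };

function buildFieldStateFromPreset(
  preset: PresetResult,
  options?: { constraintName?: string },
): ScalarFieldState {
  return {
    codecId: preset.codecId,
    // Only include optional keys when the preset actually provides them.
    ...(preset.default === undefined ? {} : { default: preset.default }),
    ...(preset.executionDefault === undefined ? {} : { executionDefault: preset.executionDefault }),
    ...(options?.constraintName === undefined ? {} : { constraintName: options.constraintName }),
  };
}

console.log(buildFieldStateFromPreset({ codecId: 'pg/text@1' }));
```

With one helper, a change to the preset instantiation logic cannot drift between `staged-contract-dsl.ts` and `composed-authoring-helpers.ts`.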
Nits
F11 — type Defined<T> = Present<T> is an unnecessary alias
Location: packages/2-sql/2-authoring/contract-ts/src/contract-builder.ts line 334
Issue: Defined<T> is defined as Present<T>, which is itself defined as Exclude<T, undefined> on line 216. Having two names for the same utility type adds confusion — Defined is used on lines 336–351 and Present is used elsewhere.
Suggestion: Pick one name and use it consistently. Defined is slightly clearer than Present.
F22 — Redundant collect() helper in demo — AsyncIterableResult has .toArray()
Location: examples/prisma-next-demo/src/queries/get-users-with-posts-no-emit.ts — lines 4–9
Issue: The file defines a local collect<Row>(rows: AsyncIterable<Row>) helper to buffer an async iterable into an array. AsyncIterableResult already provides .toArray() for this purpose, and is also thenable (awaiting it calls .toArray() implicitly). The demo should showcase idiomatic usage patterns.
Suggestion: Replace collect(runtime.execute(plan)) with await runtime.execute(plan) or runtime.execute(plan).toArray().
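Why the local helper is redundant can be shown with a minimal result type that is both async-iterable and exposes `toArray()`. The shape is assumed from the review's description of `AsyncIterableResult` (its thenability is omitted here):

```typescript
// Minimal sketch of a result type like AsyncIterableResult: consumers can
// either `for await` over it or call .toArray(), so no collect() helper is needed.
class AsyncRows<Row> implements AsyncIterable<Row> {
  constructor(private readonly rows: Row[]) {}

  async *[Symbol.asyncIterator]() {
    yield* this.rows;
  }

  async toArray(): Promise<Row[]> {
    const out: Row[] = [];
    for await (const row of this) out.push(row);
    return out;
  }
}

new AsyncRows(['a', 'b']).toArray().then((rows) => console.log(rows)); // [ 'a', 'b' ]
```

In the demo, `await collect(runtime.execute(plan))` would collapse to `await runtime.execute(plan).toArray()`.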
F23 — Use ifDefined() utility for conditional spread
Location: packages/1-framework/1-core/shared/contract/src/framework-components.ts — instantiateAuthoringFieldPreset function
Issue: The function uses ...(descriptor.output.default === undefined ? {} : { default: ... }) and a similar pattern for executionDefault. The codebase has an ifDefined() utility for this exact conditional-spread pattern.
Suggestion: Replace with ...ifDefined('default', ...) for consistency with the rest of the codebase.
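The conditional-spread pattern and the utility it would collapse into can be sketched as follows; this `ifDefined` signature is an assumption about the codebase's utility, not its actual definition:

```typescript
// Assumed shape of ifDefined(): spreads to {} when the value is undefined,
// so optional keys are only present when a value exists.
function ifDefined<Key extends string, Value>(
  key: Key,
  value: Value | undefined,
): Partial<Record<Key, Value>> {
  return value === undefined ? {} : ({ [key]: value } as Record<Key, Value>);
}

const output = { default: 'now()', executionDefault: undefined as string | undefined };
const preset = {
  codecId: 'pg/text@1',
  ...ifDefined('default', output.default),
  ...ifDefined('executionDefault', output.executionDefault),
};
console.log('default' in preset, 'executionDefault' in preset); // true false
```

This replaces each `...(x === undefined ? {} : { key: x })` ternary with a single readable call.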
F24 — pgvector parity fixture split is unclear
Location: test/integration/test/authoring/parity/pgvector-named-type/contract.ts — lines 4–16
Issue: The diff splits embedding1536 into two objects: embedding1536Type (with codecId, nativeType, typeParams — passed to storageType()) and embedding1536Column (with codecId, nativeType, typeRef — passed to .column()). Both objects share codecId: 'pg/vector@1' and nativeType: 'vector', but serve different purposes. The typeRef: 'Embedding1536' on embedding1536Column tells the storage layer to resolve the column's type metadata from the named storage type rather than from its inline properties. The split is correct for testing the named-type-ref path, but it's not immediately obvious why two objects are needed.
Suggestion: Add a brief comment explaining that embedding1536Column uses typeRef to exercise the named-type-reference resolution path, while embedding1536Type defines the named type itself.
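The suggested comments could look like this; the exact fixture shapes are inferred from the review, not copied from the file:

```typescript
// Defines the named storage type itself (passed to storageType()).
const embedding1536Type = {
  codecId: 'pg/vector@1',
  nativeType: 'vector',
  typeParams: [1536],
} as const;

// References the named type: typeRef tells the storage layer to resolve this
// column's type metadata from 'Embedding1536' rather than from inline
// properties, exercising the named-type-reference path (passed to .column()).
const embedding1536Column = {
  codecId: 'pg/vector@1',
  nativeType: 'vector',
  typeRef: 'Embedding1536',
} as const;

console.log(embedding1536Type.codecId === embedding1536Column.codecId); // true
```

Two short comments of this kind would make the split's intent obvious to the next reader of the fixture.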
F12 — const typecheckOnly is unused in runtime tests
Location: packages/2-sql/2-authoring/contract-ts/test/contract-builder.staged-contract-dsl.test.ts line 18
Issue: const typecheckOnly = process.env['PN_TYPECHECK_ONLY'] === 'true'; is declared but never used in any conditional logic within this test file. If this is intended for future use, it's dead code that should be removed until needed.
Suggestion: Remove the declaration if unused, or add the conditional guards it's meant to support.
Acceptance-Criteria Traceability
| # | Acceptance Criterion | Implementation | Evidence |
|---|---|---|---|
| 1 | Author can define model with fields/relations, attach .sql(), emit valid contract |
model() + field.* + rel.* + .sql() in staged-contract-dsl.ts; defineContract() routes through buildStagedContract in contract-builder.ts lines 1078–1083 |
contract-builder.staged-contract-dsl.test.ts — 'lowers inline ids and uniques while keeping sql focused on table/index/fk concerns' |
| 2 | Common scalar fields no longer require duplicate field-to-column declarations | field.column(textColumn) auto-derives column name from field key via applyNaming in staged-contract-lowering.ts lines 640–651 |
contract-builder.staged-contract-dsl.parity.test.ts — naming strategy tests |
| 3 | Table/column naming from root-level strategy with overrides | naming: { tables: 'snake_case', columns: 'snake_case' } in StagedContractInput; per-field .column('override') in staged-contract-dsl.ts lines 245–272 | contract-builder.staged-contract-dsl.test.ts — inline id/unique test uses column('user_id') override; staged-contract-dsl.runtime.test.ts line 64 — applyNaming assertions |
| 4 | cols in .sql() exposes only column-backed scalar fields | SqlContext<Fields> type restricts cols to FieldRefs<Fields> (scalar field builders only) in staged-contract-dsl.ts lines 862–865 | contract-builder.staged-contract-dsl.test.ts — .sql(({ cols, constraints }) => ...) usage proves cols only contains scalar fields; type-level verification in integration type tests |
| 5 | Named PKs, uniques, indexes, FKs including composite | constraints.id(), constraints.unique(), constraints.index(), constraints.foreignKey() in staged-contract-dsl.ts lines 709–838; composite overloads accept arrays | contract-builder.staged-contract-dsl.test.ts — inline ids with names, compound attribute ids; contract-builder.staged-contract-dsl.parity.test.ts — named indexes and FKs |
| 6 | Literal defaults, SQL defaults, generated defaults, named storage types | field.column(...).default(value), .defaultSql('now()'), field.generated(...), field.namedType(...) in staged-contract-dsl.ts lines 274–426 | staged-contract-dsl.runtime.test.ts lines 29–44 — literal, function, generated defaults; staged-contract-lowering.runtime.test.ts lines 29–67 — named type resolution |
| 7 | Explicit reverse/query-surface relations with singular owning-side FK | rel.belongsTo() owns FK, rel.hasMany()/rel.hasOne() are reverse-side in staged-contract-dsl.ts lines 1304–1427; FK only generated from belongsTo .sql({ fk }) | contract-builder.staged-contract-dsl.test.ts — ownership relation test cases (hasMany/hasOne) |
| 8 | Postgres→SQLite portability within ~10% changes | Target-specific code isolated to import and .sql() blocks | contract-builder.staged-contract-dsl.portability.test.ts — explicit portability test comparing Postgres and SQLite contracts |
| 9 | Downstream schema()/sql() inference works from no-emit | BuiltStagedContract<Definition> computes full contract type from definition generic in contract-builder.ts lines 662–687 | Integration type tests (contract-builder.types.test-d.ts); expectTypeOf assertions in staged DSL tests. ⚠ Regression (F16): demo no-emit flow requires emitted contract.d.ts import and bracket access with null guards, contradicting the AC |
| 10 | Lowering pipeline can derive model/client helper types | SqlSemanticContractDefinition captures all model/field/relation data needed for type derivation in semantic-contract.ts | Structural coverage via contract-builder.semantic-contract.test.ts; type inference proven in staged DSL type tests |
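The naming behavior referenced in AC 3 (a root-level snake_case strategy with per-field overrides) can be sketched as a minimal derivation. This is an illustrative assumption, not applyNaming's real signature:

```typescript
// Minimal sketch of a snake_case naming strategy with per-field overrides,
// as described by AC 3. toSnakeCase and resolveColumnName are hypothetical
// helper names, not the DSL's actual API.
function toSnakeCase(name: string): string {
  return name
    .replace(/([a-z0-9])([A-Z])/g, '$1_$2') // split camelCase boundaries
    .toLowerCase();
}

function resolveColumnName(fieldName: string, override?: string): string {
  // An explicit .column('override') always wins over the strategy.
  return override ?? toSnakeCase(fieldName);
}

console.log(toSnakeCase('createdAt')); // "created_at"
console.log(resolveColumnName('userId', 'user_id')); // "user_id" (explicit override)
console.log(resolveColumnName('emailAddress')); // "email_address"
```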
Summary of Findings
| ID | Severity | Title |
|---|---|---|
| F16 | Blocking | No-emit flow regressed: requires emitted types and bracket access |
| F01 | Blocking | Demo contract uses string-based namedType refs |
| F02 | Blocking | any type aliases in new test files |
| F13 | Blocking | Field presets bypass pack composition and are misplaced in the layer hierarchy |
| F03 | Non-blocking | contract-builder.ts is 1,883 lines |
| F04 | Non-blocking | SemanticContractBuilder type erasure pattern |
| F05 | Non-blocking | Duplicated FieldBuilderFromPresetDescriptor types |
| F06 | Non-blocking | as unknown as casts lack justification comments |
| F07 | Non-blocking | applyNaming lacks dedicated unit tests |
| F08 | Non-blocking | PSL interpreter changes are large and inline |
| F09 | Non-blocking | No ADR for the staged DSL design decision |
| F10 | Non-blocking | Duplicated buildFieldPreset logic |
| F14 | Non-blocking | BuiltStagedContract leaks authoring-surface identity |
| F15 | Non-blocking | Contract representations converging (deferred to contract-domain-extraction M5) |
| F17 | Non-blocking | Model ordering changed in emitted contract.d.ts |
| F18 | Non-blocking | Demo uses N+1 query pattern instead of ORM |
| F19 | Non-blocking | Authoring types should be extracted from framework-components.ts |
| F20 | Non-blocking | Doc comments removed from public framework component interfaces |
| F21 | Non-blocking | Test timeout increases may signal type performance regression |
| F11 | Nit | Defined<T> = Present<T> unnecessary alias |
| F12 | Nit | typecheckOnly variable is unused |
| F22 | Nit | Redundant collect() helper — AsyncIterableResult has .toArray() |
| F23 | Nit | Use ifDefined() utility for conditional spread |
| F24 | Nit | pgvector parity fixture type split is unclear |
Resolved review comments (outdated):
- examples/prisma-next-demo/src/queries/get-user-by-id-no-emit.ts
- examples/prisma-next-demo/src/queries/get-users-with-posts-no-emit.ts
- packages/1-framework/1-core/shared/contract/src/framework-components.ts
- packages/1-framework/1-core/shared/contract/src/framework-components.ts
wmadden left a comment:
System Design Review — Findings
Observations from reviewing PR #261 (TS contract authoring redesign) against the system design review.
Finding 1: BuiltStagedContract leaks authoring-surface identity
The problem
defineContract() has overloads that return either SqlContractBuilder<CodecTypes> (old chain-builder path) or BuiltStagedContract<Definition> (new staged path). The implementation union is SqlContractBuilder<CodecTypes> | BuiltStagedContract<StagedContractInput>.
The name BuiltStagedContract is a tautology — once built, it's no longer staged. "Staged" describes an internal authoring concern (staged evaluation of builders, lazy tokens, deferred naming). The consumer should not know or care which authoring DSL produced the contract.
What really happens
The CLI (load-ts-contract.ts) does not branch on this type at all. It bundles the contract file, imports the default export, calls validatePurity(), and treats the result as ContractIR. The distinction is invisible to every downstream consumer.
The isStagedContractInput runtime guard is only used inside defineContract() itself. It should not be a public export.
Why it exists
BuiltStagedContract<Definition> computes literal types for table names, column names, codec IDs, etc. from the Definition type parameter, giving the user precise type inference inside their contract file. The old path returns SqlContractBuilder which .build()s into a similarly precise type. Both ultimately produce values assignable to SqlContract / ContractIR.
Recommendation
- Rename to something like `SqlContractResult<Definition>` or simply unify the return type. The name should describe the output (a contract), not the process that created it.
- Make `isStagedContractInput` internal. It's an implementation detail of `defineContract()`'s overload resolution.
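A minimal runtime sketch of the recommended shape, assuming heavily simplified types: one unified result type, with the staged-vs-chain distinction handled by an internal guard that is never exported. Every shape here is a hypothetical simplification of the types named in the finding:

```typescript
// Unified output type — consumers see "a contract", not the authoring path.
type ContractResult = { models: Record<string, { table: string }> };

type StagedInput = { staged: true; models: Record<string, { table: string }> };
type ChainBuilder = { build(): ContractResult };

// Internal guard — deliberately not exported, mirroring the recommendation
// that isStagedContractInput stay an implementation detail.
function isStagedInput(value: StagedInput | ChainBuilder): value is StagedInput {
  return 'staged' in value && value.staged === true;
}

function defineContract(input: StagedInput | ChainBuilder): ContractResult {
  if (isStagedInput(input)) {
    return { models: input.models };
  }
  return input.build();
}

const staged = defineContract({ staged: true, models: { User: { table: 'users' } } });
const chained = defineContract({ build: () => ({ models: { Post: { table: 'posts' } } }) });
console.log(staged.models.User.table); // "users"
console.log(chained.models.Post.table); // "posts"
```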
Finding 2: Field presets bypass pack composition and are misplaced in the layer hierarchy
The invariant
All field presets should be contributed by framework components (family descriptors, target packs, extension packs) and flow through the composition machinery. None should be hardcoded into the authoring DSL, and none should be importable by lower-layer packages.
Two problems
Problem A: Presets are hardcoded onto the field export
The field object exported from staged-contract-dsl.ts has the portableSqlAuthoringFieldPresets spread directly onto it at module initialization time:
```ts
export const field = {
  column: columnField,
  generated: generatedField,
  namedType: namedTypeField,
  ...portableFieldHelpers, // ← hardcoded, bypasses pack composition
};
```

This means field.text, field.uuid, field.createdAt, etc. are available before any defineContract() call and outside any pack context. They don't flow through composeFieldNamespace() — they skip the composition seam entirely.
The bare field export should only contain structural helpers (column, generated, namedType) — things that are part of the DSL mechanics, not preset vocabulary. All presets should only be available through the composed helpers that the factory callback receives.
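A sketch of the intended split, under assumed shapes (the descriptor and callback types are hypothetical simplifications): the bare `field` object carries only DSL mechanics, and presets reach the author solely through the composed helpers handed to the factory callback.

```typescript
// Hypothetical simplified descriptor — the real one is richer.
type FieldDescriptor = { kind: 'column' | 'generated' | 'namedType'; codecId: string };

// Structural helpers only: part of DSL mechanics, no preset vocabulary.
const field = {
  column: (codecId: string): FieldDescriptor => ({ kind: 'column', codecId }),
  generated: (codecId: string): FieldDescriptor => ({ kind: 'generated', codecId }),
  namedType: (codecId: string): FieldDescriptor => ({ kind: 'namedType', codecId }),
};

type ComposedHelpers = { field: Record<string, () => FieldDescriptor> };

// The factory callback receives composed helpers; presets never live on `field`.
function defineModels(factory: (helpers: ComposedHelpers) => unknown, helpers: ComposedHelpers) {
  return factory(helpers);
}

const composedHelpers: ComposedHelpers = {
  // Contributed by pack composition, e.g. the family descriptor's text preset.
  field: { text: () => field.column('sql/text@1') },
};

const models = defineModels(
  ({ field: f }) => ({ User: { email: f.text() } }),
  composedHelpers,
);
console.log(JSON.stringify(models));
// {"User":{"email":{"kind":"column","codecId":"sql/text@1"}}}
```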
Problem B: Presets live at the wrong layer
The concrete preset definitions live in packages/2-sql/1-core/contract/src/authoring.ts — layer 1 (core) of the SQL domain. This is the lowest layer, importable by everything above it. The SQL core contract package is doing double duty:
- Providing type machinery for SQL targets (SqlStorage, StorageColumn, SqlContract, validation, factories) — its intended purpose.
- Defining runtime configuration data (concrete field preset descriptors with specific codec IDs and native types) — which doesn't belong here.
Because the presets are at the bottom layer, they're importable by the authoring layer (which hardcodes them onto field), the Postgres target pack (which re-exports them), and anything else in the SQL domain. This creates a gravitational pull: any SQL-wide configuration data that multiple packages need tends to land in @prisma-next/sql-contract because it's the one place everything can reach.
The structural fix
The SQL family descriptor — including its concrete preset definitions — should live at a high layer (e.g. packages/2-sql/9-family/):
packages/2-sql/9-family/
├── field-presets/ ← concrete preset descriptors (text, uuid, createdAt, etc.)
└── descriptor.ts ← SQL family descriptor
authoring.field: sqlFieldPresets
Layer 9 can import types from layers 1–8 (it needs AuthoringFieldPresetDescriptor, codec ID conventions, etc.), but nothing in layers 1–8 can import from layer 9. This means:
- The authoring DSL (layer 2) can't import presets. The hardcoding path structurally disappears — there's no temptation to spread presets onto the `field` object because the import is forbidden.
- The core contract package (layer 1) goes back to being pure type machinery. SqlStorage, StorageColumn, SqlContract, validation, factories — no runtime configuration data.
- Presets enter the system only through framework composition. The family descriptor contributes them via its `authoring.field` namespace. Composition merges family, target, and extension pack contributions and produces the composed helpers.
- Target packs don't re-export family presets. The Postgres pack currently re-exports `portableSqlAuthoringFieldPresets` as its own `authoring.field`. This is unnecessary — the family descriptor already contributes them. The target pack only needs to contribute target-specific additions (like `enum`).
- Also: "Portable" in `portableSqlAuthoringFieldPresets` is an implementation detail; it doesn't belong in the name.
Current flow (wrong)
packages/2-sql/1-core/contract/src/authoring.ts ← defines presets (layer 1, importable by all)
↓ imported by
staged-contract-dsl.ts ← hardcodes onto field (layer 2)
↓ also imported by
packages/3-targets/postgres/src/core/authoring.ts ← re-exports as postgresAuthoringFieldPresets
↓ carried on
postgresTargetDescriptorMeta.authoring.field ← pack descriptor
↓ read by
composeFieldNamespace(target, extensionPacks) ← composition (redundant path)
Target flow
packages/2-sql/9-family/
├── field-presets/ ← concrete preset descriptors
└── descriptor.ts ← SQL family descriptor, carries presets on authoring.field
↓ read by
framework composition ← the only path presets enter the system
Lower layers (1–8) cannot import from layer 9. The authoring DSL receives presets exclusively through the composed helpers provided by the factory callback.
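This target flow can be sketched with hypothetical descriptor shapes (the real AuthoringFieldPresetDescriptor carries much more than a codec ID; the descriptor names below are assumptions): the family descriptor is the only contributor of portable presets, and the target pack adds only target-specific entries.

```typescript
// Hypothetical simplified shapes for illustration only.
type Preset = { codecId: string };
type FieldContribution = Record<string, Preset>;
type PackDescriptor = { name: string; authoring: { field: FieldContribution } };

// SQL family descriptor (layer 9): the single source of portable presets.
const sqlFamilyDescriptor: PackDescriptor = {
  name: 'family-sql',
  authoring: { field: { text: { codecId: 'sql/text@1' }, uuid: { codecId: 'sql/uuid@1' } } },
};

// Target pack: target-specific additions only — no re-export of family presets.
const postgresTargetDescriptor: PackDescriptor = {
  name: 'target-postgres',
  authoring: { field: { enum: { codecId: 'pg/enum@1' } } },
};

// Framework composition merges family first, then target (then extensions);
// later contributions win on key collisions.
function composeField(family: PackDescriptor, target: PackDescriptor): FieldContribution {
  return { ...family.authoring.field, ...target.authoring.field };
}

const composed = composeField(sqlFamilyDescriptor, postgresTargetDescriptor);
console.log(Object.keys(composed).sort().join(',')); // "enum,text,uuid"
```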
Descriptor types are fine
The descriptor types (AuthoringFieldPresetDescriptor, AuthoringFieldNamespace, AuthoringFieldPresetOutput) already live in the framework layer and are already family-agnostic. The concrete preset values (e.g. text → sql/text@1) are fine being SQL-specific — they're contributed by the SQL family descriptor. We are not trying to create a family-agnostic preset vocabulary; the framework provides the descriptor shape, families provide the concrete presets.
Findings 3 & 4: Contract representations are converging from opposite sides
The observation
Three parallel type hierarchies are emerging that all try to describe "a contract organized by models":
Authoring side — SqlSemanticContractDefinition (pre-ContractIR):
SqlSemanticContractDefinition
├── target, extensionPacks, capabilities
└── models: SqlSemanticModelNode[]
├── modelName, tableName
├── fields → { fieldName, columnName, descriptor, nullable, default? }
├── relations → { toModel, toTable, cardinality, on }
└── id?, uniques?, indexes?, foreignKeys?
Runtime side — ContractBase + DomainModel (post-ContractIR):
ContractBase
├── target, targetFamily, capabilities, extensionPacks, roots
└── models: Record<string, DomainModel>
├── fields → { nullable, codecId }
├── relations → { to, cardinality, on? }
└── storage: <family-specific extension>
Canonical artifact — ContractIR / SqlContract (the serialized middle):
ContractIR / SqlContract
├── storage: tables → columns, PKs, FKs, indexes
├── models: → { storage: { table }, fields: { column } }
├── relations: (separate top-level section)
└── mappings: modelToTable, fieldToColumn, etc.
The authoring-side types were built working forward: "describe what the user authored before materializing it to ContractIR." The runtime-side types were built working backward: "describe the model-level structure so runtime consumers don't have to reverse-engineer it from storage-first ContractIR." Both arrive at the same shape — models with typed fields and named relations, plus family-specific storage detail.
The deeper problem
Both runtime consumers and the emitter read the same two things: the domain layer (models, fields, relations) and the storage layer (tables/collections, columns/documents, constraints/indexes). The difference isn't what they read — it's what they do with it. One builds queries, the other serializes an artifact. There's no consumer that reads only one layer.
This means the three representations should converge into one: a model-first representation with a family-agnostic domain layer and a family-specific storage layer. The ContractIR's storage-first structure — with its separate storage, models, relations, and mappings sections that redundantly describe the same relationships from different angles — is what you'd get if you pre-computed derived views and baked them into the artifact. If the canonical representation already contains both layers in a coherent structure, those derived views are unnecessary.
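A sketch of what such a unified, model-first type could look like, following the review's Contract<Storage, ModelStorage> shorthand. Every field shape below is an assumption for illustration, not the planned API:

```typescript
type Cardinality = 'one' | 'many';

// Family-agnostic domain layer per model, plus a family-specific storage slot.
interface ContractModel<ModelStorage> {
  fields: Record<string, { nullable: boolean; codecId: string }>;
  relations: Record<string, { to: string; cardinality: Cardinality }>;
  storage: ModelStorage; // family-specific (e.g. { table, columns } for SQL)
}

interface Contract<Storage, ModelStorage> {
  target: string;
  storage: Storage; // family-specific contract-wide storage detail
  models: Record<string, ContractModel<ModelStorage>>;
}

// SQL instantiation: tables/columns live on the storage layer, so no separate
// mappings section is needed — the mapping is the structure itself.
type SqlModelStorage = { table: string; columns: Record<string, string> };

const contract: Contract<{ dialect: string }, SqlModelStorage> = {
  target: 'postgres',
  storage: { dialect: 'postgres' },
  models: {
    User: {
      fields: { email: { nullable: false, codecId: 'sql/text@1' } },
      relations: { posts: { to: 'Post', cardinality: 'many' } },
      storage: { table: 'users', columns: { email: 'email' } },
    },
  },
};
console.log(contract.models.User.storage.table); // "users"
```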
PSL ↔ TS parity is a symptom
The two independent lowering paths (TS staged → SqlSemanticContractDefinition → ContractIR vs PSL → ContractIR directly) are a consequence of the fragmented representation. If there were a single canonical contract type, both authoring surfaces would lower to it, and parity would be structural rather than fixture-tested.
Relationship to contract-domain-extraction
This converges directly with the contract-domain-extraction project (Milestone 5: ContractIR alignment). That milestone plans to restructure ContractIR to carry model-level domain information. The insight from this review is stronger: rather than adding domain information alongside the existing storage-first structure, the representation should be unified — one model-first type that both authoring surfaces produce and all consumers read.
Scope for this branch
The current branch introduces SqlSemanticContractDefinition as a new type hierarchy. This is acceptable as a stepping stone:
- It's a well-defined intermediate form that creates a clean seam for the later convergence work
- PSL does not need to be refactored to target it in this branch
- The old chain-builder path can be deleted later (it's no longer the primary authoring surface)
The convergence of contract representations is deferred to the contract-domain-extraction project (M5), where the scope is already planned.
No action on this branch for Findings 3 & 4.
Summary
| # | Finding | Action (this branch) | Deferred to |
|---|---|---|---|
| 1 | BuiltStagedContract leaks authoring identity | Rename; make isStagedContractInput internal | — |
| 2 | Field presets bypass composition and are misplaced in the layer hierarchy | Extract SQL family package (layer 9); remove hardcoded presets from field; all presets flow through framework composition only | — |
| 3 | PSL ↔ TS parity is fixture-based, not structural | No action | contract-domain-extraction M5 |
| 4 | Contract representations are converging from opposite sides | No action (semantic IR is an acceptable stepping stone) | contract-domain-extraction M5 |
Mark the SQL family package extraction prerequisite in task 3.9 as done (field presets moved to @prisma-next/family-sql at layer 3). Rewrite Milestone 5 from "Contract IR alignment" to "Unified contract representation" per ADR 182, covering the full replacement of ContractIR, SqlSemanticContractDefinition, and ContractBase with a single Contract<Storage, ModelStorage> type.
Record the plan to move @prisma-next/family-sql from layer 3 (tooling) to a dedicated top-level layer (layer 9) so that no other SQL package depends on the family descriptor.
Actionable comments posted: 7
🧹 Nitpick comments (4)
docs/architecture docs/adrs/ADR 178 - Staged contract DSL for SQL TS authoring.md (2)
153-180: Keep ADR scope at decision/constraints; move lowering mechanics out.
This section reads like implementation design (phase-by-phase lowering and diagnostic categories). Consider keeping only architectural constraints here and moving operational lowering details to subsystem docs.
Based on learnings: “ADRs in prisma/prisma-next should document architectural design decisions and the key constraints/assumptions only. Do not include implementation algorithms, emitter derivation logic, or step-by-step derivation/migration rules.”
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/architecture` docs/adrs/ADR 178 - Staged contract DSL for SQL TS authoring.md around lines 153 - 180, The ADR includes implementation-level lowering mechanics that belong in subsystem docs; remove the phase-by-phase lowering sequence and diagnostic list from this ADR and instead summarize only the architectural decision and constraints (e.g., that a staged DSL lowers via an intermediate representation called SqlSemanticContractDefinition and that there must be a clear seam to SqlContractBuilder), then move the detailed steps and validation/diagnostic categories (the lowering pipeline referencing buildStagedSemanticContractDefinition and buildSqlContractFromSemanticDefinition, and the enumerated errors) into an implementation/design doc for the SQL contract builder/subsystem.
172-173: Avoid milestone/project-plan references in durable ADR text.
Referencing "contract-domain-extraction project, Milestone 5" can quickly become stale. Prefer stable ADR/package docs links or a durable issue-tracker reference with clear permanence.
Based on learnings: “Do not reference transient project artifacts … from durable system documentation … link to stable architecture docs or package-level documentation that ages well.”
Also applies to: 206-207
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/architecture` docs/adrs/ADR 178 - Staged contract DSL for SQL TS authoring.md around lines 172 - 173, The ADR text references a transient project milestone ("contract-domain-extraction project, Milestone 5"), which is fragile; update the sentence mentioning PSL lowering to SqlSemanticContractDefinition and ContractIR to remove the milestone reference and instead link or point to a stable, durable reference (e.g., an ADR, package-level docs, or a permanent issue/architecture doc) — replace "contract-domain-extraction project, Milestone 5" with a stable identifier or hyperlink to the authoritative ADR/package docs that describe the shared lowering target and roadmap for PSL → SqlSemanticContractDefinition → ContractIR; apply the same change to the other occurrences noted (lines 206-207).
docs/architecture docs/adrs/ADR 179 - Unified contract representation.md (1)
154-162: Trim implementation-specific validation flow from ADR body.
This section names concrete functions and source paths (validateContractDomain() and packages/.../validate-domain.ts), which makes the ADR read like implementation documentation rather than an architectural decision record.
Based on learnings: "ADRs in prisma/prisma-next should document architectural design decisions and the key constraints/assumptions only. Do not include implementation algorithms … or step-by-step derivation/migration rules."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/architecture` docs/adrs/ADR 179 - Unified contract representation.md around lines 154 - 162, The ADR currently contains implementation-specific names (validateContractDomain(), validateMongoContract(), validateMongoStorage(), validateContract) and file paths; remove those concrete function and path references and rephrase the section to describe the two-pass validation at an architectural level: state that the framework performs a domain/structural validation pass and then invokes a family-provided storage-validator pass (e.g., a Mongo-specific or SQL-specific validator) without naming exact functions or source files, and replace the example sequence with a brief, generic description of the flow and intent only.
examples/prisma-next-demo/src/orm-client/collections.ts (1)
13-15: Consider escaping LIKE wildcards in user-provided patterns.
The domain parameter is interpolated directly into the LIKE pattern. If a caller passes a value containing % or _, it would act as a wildcard rather than a literal character, potentially causing unexpected matches.
For production code, consider escaping LIKE special characters. For this demo, adding a brief comment noting the limitation would be educational.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@examples/prisma-next-demo/src/orm-client/collections.ts` around lines 13 - 15, emailDomain currently interpolates the caller-supplied domain into an ILIKE pattern which allows user-provided % or _ to act as wildcards; update emailDomain to either escape SQL LIKE special chars (%) and (_) in the domain before constructing the pattern (and use the ORM/DB escape mechanism or an explicit ESCAPE clause if required), or—if you want to keep this as a demo—add a short comment inside the emailDomain method explaining that domain values are not escaped and may treat % and _ as wildcards; refer to the emailDomain method and the where((user) => user.email.ilike(...)) call when making the change.
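As a hedged illustration of the suggested escaping (escapeLike and emailDomainPattern are hypothetical helpers, not the demo's actual API; depending on the database and ORM, an explicit ESCAPE clause may also be required):

```typescript
// Escape LIKE metacharacters (%, _) and the escape character itself so a
// user-supplied value matches literally inside a LIKE/ILIKE pattern.
function escapeLike(value: string, escapeChar = '\\'): string {
  return value.replace(/[\\%_]/g, (ch) => escapeChar + ch);
}

// Build the "%@domain" pattern from a caller-supplied domain.
function emailDomainPattern(domain: string): string {
  return `%@${escapeLike(domain)}`;
}

console.log(emailDomainPattern('example.com')); // "%@example.com"
console.log(emailDomainPattern('50%_off.com')); // "%@50\%\_off.com"
```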
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@docs/architecture` docs/adrs/ADR 178 - Staged contract DSL for SQL TS
authoring.md:
- Around line 157-165: The fenced pipeline block starting at the diagram
(containing model(), field.*, rel.*, defineContract,
buildStagedSemanticContractDefinition, buildSqlContractFromSemanticDefinition)
lacks a language tag which triggers MD040; add a language indicator to the
opening fence (for example ```text) so the block becomes a tagged fenced code
block and tooling/linters will recognize it—ensure the opening fence immediately
before the line "model() + field.* + rel.* ..." includes the chosen tag and
leave the rest of the block unchanged.
In `@docs/architecture` docs/adrs/ADR 179 - Unified contract representation.md:
- Around line 49-78: Replace non-ASCII characters and verbose labels in the
Mermaid diagrams: change em-dashes (—) to ASCII hyphens (-), replace angle quote
glyphs (‹ ›) with plain text or remove them, and shorten node labels to concise
ASCII-friendly names (e.g., edit "TS_STAGED(contract.ts — staged)",
"TS_CHAIN(contract.ts — chain)", "Contract(\"Contract‹Storage,
ModelStorage›\")", "JSON(contract.json + contract.d.ts)" and similar nodes like
PSL, AST, SCI, SSCD, ContractIR, SC, DM, CONSUMERS) so labels are short and
renderer-safe; keep detailed descriptions in surrounding Markdown rather than
inside node labels.
In `@examples/prisma-next-demo/prisma/contract.ts`:
- Around line 58-66: The contract declares generated UUID primary keys via
field.generated(uuidv4()).id() in the User and Post models but the capabilities
block only includes 'defaults.now'; add the matching capability
'defaults.uuidv4': true to the capabilities.postgres map so function defaults
are declared. Update the capabilities object (the postgres entry in the
capabilities constant) to include 'defaults.uuidv4': true to match the use of
uuidv4() in the generated id fields.
In `@packages/1-framework/1-core/shared/contract/src/framework-authoring.ts`:
- Around line 291-306: The literal branch currently returns { kind: 'literal',
value: resolveAuthoringTemplateValue(template.value, args) } even when that
value is undefined; update the template.kind === 'literal' handling to capture
the resolved value via resolveAuthoringTemplateValue(template.value, args), and
if that resolved value === undefined throw a descriptive Error (e.g., "Literal
authoring default resolved to undefined") so literal defaults cannot produce {
value: undefined }; otherwise return the { kind: 'literal', value } object as
before.
- Around line 230-245: The current arity check uses requiredCount which
understates the minimum when an optional descriptor precedes a later required
one; update validateAuthoringHelperArguments so the minimum positional arity is
computed as the index of the last non-optional descriptor + 1 (e.g., find last
index where descriptor.optional is false, let minArgs = lastIndex === -1 ? 0 :
lastIndex + 1) and then validate args.length against minArgs and expected.length
(args.length < minArgs || args.length > expected.length). Keep the rest of the
logic (expected array, validateAuthoringArgument calls, and the error message)
but use minArgs in the error text to reflect the corrected minimum.
In `@packages/1-framework/1-core/shared/contract/src/framework-components.ts`:
- Around line 3-5: Remove the unintended public re-export "export * from
'./framework-authoring';" from this module and keep only the local type import
"import type { AuthoringContributions } from './framework-authoring';"; surface
the public API for framework-authoring from the package's exports/ layer instead
(do not re-export from this file), so update this file to only import the type
and ensure any required re-exports are added under the package exports/ entry
point.
In `@packages/1-framework/3-tooling/cli/src/load-ts-contract.ts`:
- Around line 3-4: Replace the import of join from 'node:path' with pathe to
ensure cross-runtime compatibility: update the import statement that currently
reads import { join } from 'node:path' to import { join } from 'pathe', leaving
the rest of the file (including any uses of join and the existing import of
pathToFileURL from 'node:url') unchanged so codepaths like join(...) in
load-ts-contract.ts continue to work across Node, Deno, Bun and edge runtimes.
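The arity correction described in the framework-authoring comment above (minimum positional arity = index of the last required slot + 1, not the count of required slots) can be sketched as follows. The descriptor shape and function names are simplifications, not the real validateAuthoringHelperArguments signature:

```typescript
type ArgDescriptor = { name: string; optional?: boolean };

function minPositionalArity(expected: ArgDescriptor[]): number {
  // Everything up to and including the last required slot is mandatory,
  // even if earlier slots are marked optional.
  let lastRequired = -1;
  for (let i = 0; i < expected.length; i++) {
    if (!expected[i].optional) lastRequired = i;
  }
  return lastRequired + 1;
}

function validateArity(expected: ArgDescriptor[], args: unknown[]): void {
  const minArgs = minPositionalArity(expected);
  if (args.length < minArgs || args.length > expected.length) {
    throw new Error(
      `expected between ${minArgs} and ${expected.length} arguments, got ${args.length}`,
    );
  }
}

// [optional, required]: counting required slots (1) would wrongly allow 1 arg.
const shape: ArgDescriptor[] = [{ name: 'a', optional: true }, { name: 'b' }];
console.log(minPositionalArity(shape)); // 2
validateArity(shape, ['x', 'y']); // ok
try {
  validateArity(shape, ['x']); // rejected under the corrected rule
} catch (e) {
  console.log((e as Error).message);
}
```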
---
Nitpick comments:
In `@docs/architecture` docs/adrs/ADR 178 - Staged contract DSL for SQL TS
authoring.md:
- Around line 153-180: The ADR includes implementation-level lowering mechanics
that belong in subsystem docs; remove the phase-by-phase lowering sequence and
diagnostic list from this ADR and instead summarize only the architectural
decision and constraints (e.g., that a staged DSL lowers via an intermediate
representation called SqlSemanticContractDefinition and that there must be a
clear seam to SqlContractBuilder), then move the detailed steps and
validation/diagnostic categories (the lowering pipeline referencing
buildStagedSemanticContractDefinition and
buildSqlContractFromSemanticDefinition, and the enumerated errors) into an
implementation/design doc for the SQL contract builder/subsystem.
- Around line 172-173: The ADR text references a transient project milestone
("contract-domain-extraction project, Milestone 5"), which is fragile; update
the sentence mentioning PSL lowering to SqlSemanticContractDefinition and
ContractIR to remove the milestone reference and instead link or point to a
stable, durable reference (e.g., an ADR, package-level docs, or a permanent
issue/architecture doc) — replace "contract-domain-extraction project, Milestone
5" with a stable identifier or hyperlink to the authoritative ADR/package docs
that describe the shared lowering target and roadmap for PSL →
SqlSemanticContractDefinition → ContractIR; apply the same change to the other
occurrences noted (lines 206-207).
In `@docs/architecture` docs/adrs/ADR 179 - Unified contract representation.md:
- Around line 154-162: The ADR currently contains implementation-specific names
(validateContractDomain(), validateMongoContract(), validateMongoStorage(),
validateContract) and file paths; remove those concrete function and path
references and rephrase the section to describe the two-pass validation at an
architectural level: state that the framework performs a domain/structural
validation pass and then invokes a family-provided storage-validator pass (e.g.,
a Mongo-specific or SQL-specific validator) without naming exact functions or
source files, and replace the example sequence with a brief, generic description
of the flow and intent only.
In `@examples/prisma-next-demo/src/orm-client/collections.ts`:
- Around line 13-15: emailDomain currently interpolates the caller-supplied
domain into an ILIKE pattern which allows user-provided % or _ to act as
wildcards; update emailDomain to either escape SQL LIKE special chars (%) and
(_) in the domain before constructing the pattern (and use the ORM/DB escape
mechanism or an explicit ESCAPE clause if required), or—if you want to keep this
as a demo—add a short comment inside the emailDomain method explaining that
domain values are not escaped and may treat % and _ as wildcards; refer to the
emailDomain method and the where((user) => user.email.ilike(...)) call when
making the change.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yml
Review profile: CHILL
Plan: Pro
Run ID: ffd92061-da47-4b93-ab58-fc203b8185b7
📒 Files selected for processing (23)
- docs/Architecture Overview.md
- docs/architecture docs/adrs/ADR 178 - Staged contract DSL for SQL TS authoring.md
- docs/architecture docs/adrs/ADR 179 - Unified contract representation.md
- examples/prisma-next-demo/prisma-next.config.ts
- examples/prisma-next-demo/prisma/contract.ts
- examples/prisma-next-demo/src/orm-client/client.ts
- examples/prisma-next-demo/src/orm-client/collections.ts
- examples/prisma-next-demo/src/prisma-no-emit/context.ts
- examples/prisma-next-demo/src/queries/get-user-by-id-no-emit.ts
- examples/prisma-next-demo/src/queries/get-user-posts-no-emit.ts
- examples/prisma-next-demo/src/queries/get-users-no-emit.ts
- examples/prisma-next-demo/src/queries/get-users-with-posts-no-emit.ts
- package.json
- packages/1-framework/1-core/shared/contract/package.json
- packages/1-framework/1-core/shared/contract/src/exports/framework-components.ts
- packages/1-framework/1-core/shared/contract/src/framework-authoring.ts
- packages/1-framework/1-core/shared/contract/src/framework-components.ts
- packages/1-framework/1-core/shared/contract/test/framework-components.authoring.test.ts
- packages/1-framework/3-tooling/cli/src/control-api/contract-enrichment.ts
- packages/1-framework/3-tooling/cli/src/load-ts-contract.ts
- packages/1-framework/3-tooling/cli/test/commands/migration-apply.test.ts
- packages/1-framework/3-tooling/cli/test/control-api/contract-enrichment.test.ts
- packages/1-framework/3-tooling/cli/vitest.config.ts
✅ Files skipped from review due to trivial changes (6)
- packages/1-framework/3-tooling/cli/vitest.config.ts
- examples/prisma-next-demo/src/queries/get-users-no-emit.ts
- package.json
- packages/1-framework/3-tooling/cli/test/commands/migration-apply.test.ts
- docs/Architecture Overview.md
- packages/1-framework/1-core/shared/contract/package.json
🚧 Files skipped from review as they are similar to previous changes (5)
- examples/prisma-next-demo/prisma-next.config.ts
- examples/prisma-next-demo/src/queries/get-user-posts-no-emit.ts
- examples/prisma-next-demo/src/queries/get-users-with-posts-no-emit.ts
- packages/1-framework/1-core/shared/contract/test/framework-components.authoring.test.ts
- packages/1-framework/1-core/shared/contract/src/exports/framework-components.ts
Resolved review comments:
- docs/architecture docs/adrs/ADR 178 - Staged contract DSL for SQL TS authoring.md (outdated)
- docs/architecture docs/adrs/ADR 182 - Unified contract representation.md
- packages/1-framework/1-core/shared/contract/src/framework-components.ts (outdated)
…ents Move authoring symbol re-exports to the exports layer so framework-components.ts only imports what it needs locally. Update the test file to import authoring symbols directly.
…al defaults Fix validateAuthoringHelperArguments to compute minimum arity from the position of the last required slot, not the count of required slots. This prevents [optional, required] from accepting 1 arg. Add a guard in resolveAuthoringDefaultTemplate to throw when a literal default resolves to undefined instead of silently emitting an invalid contract shape.
Add missing defaults.uuidv4 capability to demo contract. Replace node:path with pathe in load-ts-contract.ts for cross-runtime consistency. Clarify the emitted-type dependency comment in the no-emit flow context.
Actionable comments posted: 4
🧹 Nitpick comments (4)
docs/architecture docs/adrs/ADR 181 - Staged contract DSL for SQL TS authoring.md (1)
51-53: Prefer additive framing in walkthrough prose.
Consider rewriting “No table or column layer” into additive wording (e.g., “The staged DSL adds a model-first layer where...”) to align with docs style guidance.
As per coding guidelines: in walkthrough documentation, phrase additive changes as additions (avoid “no X → Y” phrasing).
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/architecture` docs/adrs/ADR 181 - Staged contract DSL for SQL TS authoring.md around lines 51 - 53, Rewrite the negative phrasing in the first walkthrough bullet into an additive, model-first framing: replace “No table or column layer. The author writes `field.text()`, not `t.column('email', textColumn)`.” with a positive sentence that introduces the staged DSL as adding a model-first layer (e.g., “The staged DSL adds a model-first layer where authors use `field.text()` and column names are derived from field keys via a naming strategy; table/column storage names are only touched when overriding.”) Keep references to `field.text()`, `t.column('email', textColumn)`, `User.ref('id')`, `.sql()` block, and `rel.belongsTo(User, ...)` intact so the walkthrough still mentions typed references, `.sql()` for table mapping/indexes, and relation syntax.
examples/prisma-next-demo/prisma/contract.ts (1)
13-15: Consider parity coverage for the enum type and named FK. In the provided fixture `test/integration/test/authoring/parity/pgvector-named-type/contract.ts:1-34`, only the named vector `typeRef` path is exercised. Since this demo also depends on `user_type` and `post_userId_fkey`, a small TS/PSL parity case for those two details would make regressions here easier to catch.
Also applies to: 45-50
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@examples/prisma-next-demo/prisma/contract.ts` around lines 13 - 15, Add parity coverage for the enum and named foreign-key by exercising the user_type enum and post_userId_fkey in the same fixture/contract; update the types object (symbol: types) and related test setup to include and use user_type (symbol: user_type) and ensure a PSL/TS case references the post_userId_fkey (symbol: post_userId_fkey) path so both the named FK and enum mapping are validated alongside the existing Embedding1536/vector(1536) case.
packages/1-framework/1-core/shared/contract/src/framework-authoring.ts (2)
275-279: Use `ifDefined()` for `typeParams` here too. This is the one conditional spread in the file that still bypasses the repo helper.
As per coding guidelines, "Use `ifDefined()` from `@prisma-next/utils/defined` for conditional object spreads instead of inline conditional spread patterns."
♻️ Suggested cleanup

```diff
 return {
   codecId: template.codecId,
   nativeType,
-  ...(typeParams === undefined ? {} : { typeParams }),
+  ...ifDefined('typeParams', typeParams),
 };
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/1-framework/1-core/shared/contract/src/framework-authoring.ts` around lines 275 - 279, The return object currently uses an inline conditional spread for typeParams ("...(typeParams === undefined ? {} : { typeParams })"); replace this with the repo helper by importing ifDefined from "@prisma-next/utils/defined" and use ifDefined({ typeParams }) in the returned object (keep codecId: template.codecId and nativeType as-is) so the conditional spread follows the coding guideline; ensure you add the import for ifDefined at the top of the file.
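For readers unfamiliar with the helper the review keeps pointing at: the `(key, value)` signature below is inferred from the `ifDefined('typeParams', typeParams)` call in the suggested diff, so this is a hypothetical stand-in sketch, not the actual `@prisma-next/utils/defined` implementation.

```typescript
// Hypothetical stand-in for the repo's `ifDefined` helper; the (key, value)
// shape is inferred from the suggested diff, not copied from the real package.
function ifDefined<K extends string, V>(
  key: K,
  value: V | undefined,
): Partial<Record<K, V>> {
  // Yields {} when the value is undefined so a spread adds no key;
  // otherwise yields { [key]: value }.
  return value === undefined ? {} : ({ [key]: value } as Partial<Record<K, V>>);
}

// Behaves like the inline pattern `...(typeParams === undefined ? {} : { typeParams })`.
const withParams = { codecId: 'c1', ...ifDefined('typeParams', [1536]) };
const withoutParams = { codecId: 'c1', ...ifDefined('typeParams', undefined) };

console.log('typeParams' in withParams); // true
console.log('typeParams' in withoutParams); // false
```

The benefit over the inline ternary is that the `undefined` branch cannot accidentally emit `{ typeParams: undefined }`, which would survive `'typeParams' in obj` checks downstream.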
1-369: Move this shared-plane authoring code under `src/core/` and split it by concern. Since this is a brand-new shared-plane surface, introducing it as a root-level `src/framework-authoring.ts` monolith makes the package layout harder to follow. Please place it under `src/core/` and separate the exported types, validators, and instantiation helpers before this API settles.
As per coding guidelines, "`packages/**/src/**`: Structure multi-plane packages with split sources by plane: `src/core/**` for shared" and "`packages/**/src/**/*.ts`: Organize shared plane package source code with separate files: types.ts, validators.ts (optional), and factories.ts (optional)."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/1-framework/1-core/shared/contract/src/framework-authoring.ts` around lines 1 - 369, This file is a monolithic shared-plane implementation and should be moved into src/core/ and split by concern: extract all type-only exports (AuthoringArgRef, AuthoringTemplateValue, AuthoringArgumentDescriptor, AuthoringStorageTypeTemplate, AuthoringTypeConstructorDescriptor, AuthoringFieldPresetDescriptor, AuthoringContributions, Authoring* types and namespaces) into src/core/types.ts; move validation/predicate functions (isAuthoringArgRef, isAuthoringTypeConstructorDescriptor, isAuthoringFieldPresetDescriptor, isAuthoringTemplateRecord, validateAuthoringArgument, validateAuthoringHelperArguments) into src/core/validators.ts; and put template resolution and factory/instantiation helpers (resolveAuthoringTemplateValue, resolveAuthoringStorageTypeTemplate, resolveAuthoringColumnDefaultTemplate, instantiateAuthoringTypeConstructor, instantiateAuthoringFieldPreset) into src/core/factories.ts; update imports/exports so other modules import from package root barrel (re-export from src/core/index.ts) and ensure utility import ifDefined remains correct — use the same function names (e.g., resolveAuthoringTemplateValue, instantiateAuthoringFieldPreset) to locate and split the code.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@docs/architecture` docs/adrs/ADR 181 - Staged contract DSL for SQL TS
authoring.md:
- Around line 172-173: Remove the transient milestone reference in the sentence
mentioning lowering to SqlSemanticContractDefinition and ContractIR; replace
"contract-domain-extraction project, Milestone 5" with either a link to a stable
architecture/package doc or a neutral "future work" note (e.g., "deferred to
future contract-domain-extraction work" or "see contract-domain-extraction
architecture docs"), update the sentence to read without milestone labels while
preserving the intent that PSL may lower to SqlSemanticContractDefinition
instead of ContractIR, and apply the same edit to ADRs 206–207.
- Around line 153-186: The ADR currently contains implementation-level lowering
and derivation details (the lowering pipeline, SqlSemanticContractDefinition,
buildStagedSemanticContractDefinition, buildSqlContractFromSemanticDefinition,
SqlContractResult, and ContractIR mechanics); extract those algorithmic and
transformation sections into a new subsystem doc (or existing subsystem docs)
titled e.g. "SQL contract lowering and semantic IR", move the detailed
walkthrough, validation rules, and type-derivation mechanics there, replace the
ADR content with a concise summary of the decision (that we use a staged
lowering via a SqlSemanticContractDefinition as a seam) plus links to the new
subsystem doc for operational details, and apply the same treatment to ADRs
204–207 as noted.
In `@packages/1-framework/1-core/shared/contract/src/framework-authoring.ts`:
- Around line 94-124: The current type guards are too permissive; update
isAuthoringArgRef to ensure (value as { index?: unknown }).index is a finite,
non-negative integer and (when present) that (value as { path?: unknown }).path
is an array of strings, and update isAuthoringTypeConstructorDescriptor and
isAuthoringFieldPresetDescriptor to validate that the required descriptor.output
property exists and matches the expected Arktype shape instead of just checking
kind; use Arktype validators (e.g., is/validate helpers) to perform these checks
and return the refined types so downstream code that accesses descriptor.output
and iterates template.path is safe.
- Around line 183-186: The current validation for descriptor.kind ===
'stringArray' uses value.some(...), which skips holes in sparse arrays (so
[,'id'] passes); update the check to explicitly iterate the array (e.g., with
for...of over the value) and throw the same Error when any entry is not a string
(this will surface holes as undefined and be rejected). Keep the existing
Array.isArray check and error message that references path, and modify the block
handling descriptor.kind === 'stringArray' to use the explicit iteration over
value instead of value.some.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yml
Review profile: CHILL
Plan: Pro
Run ID: b5a122f9-9703-4cd0-a9bf-4c442efcd517
⛔ Files ignored due to path filters (1)
`projects/contract-domain-extraction/plan.md` is excluded by `!projects/**`
📒 Files selected for processing (10)
- docs/architecture docs/adrs/ADR 181 - Staged contract DSL for SQL TS authoring.md
- docs/architecture docs/adrs/ADR 182 - Unified contract representation.md
- examples/prisma-next-demo/prisma/contract.ts
- examples/prisma-next-demo/src/prisma-no-emit/context.ts
- packages/1-framework/1-core/shared/contract/src/exports/framework-components.ts
- packages/1-framework/1-core/shared/contract/src/framework-authoring.ts
- packages/1-framework/1-core/shared/contract/src/framework-components.ts
- packages/1-framework/1-core/shared/contract/test/framework-components.authoring.test.ts
- packages/1-framework/3-tooling/cli/src/load-ts-contract.ts
- packages/2-sql/2-authoring/contract-ts/test/contract-builder.staged-contract-dsl.helpers.test.ts
✅ Files skipped from review due to trivial changes (3)
- packages/1-framework/1-core/shared/contract/test/framework-components.authoring.test.ts
- docs/architecture docs/adrs/ADR 182 - Unified contract representation.md
- packages/2-sql/2-authoring/contract-ts/test/contract-builder.staged-contract-dsl.helpers.test.ts
🚧 Files skipped from review as they are similar to previous changes (1)
- packages/1-framework/1-core/shared/contract/src/framework-components.ts
docs/architecture docs/adrs/ADR 181 - Staged contract DSL for SQL TS authoring.md
Verify that literal table, column, and model name types survive the staged contract DSL pipeline end-to-end (defineContract -> validateContract -> sql()/schema()). Single-model assertions pass; multi-model assertions fail, documenting the acceptance criterion gap for the no-emit path. Failing assertions are marked with @ts-expect-error so CI stays green while serving as executable acceptance criteria — when the type pipeline is fixed (ADR 182), removing these directives will be required.
isAuthoringArgRef now validates index is a non-negative integer and path is a string array. Descriptor guards now require output property. stringArray validation uses for...of to catch sparse array holes that Array.some() skips.
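The sparse-array point in that commit is easy to reproduce: `Array.prototype.some` skips holes entirely, while `for...of` yields them as `undefined`. A minimal sketch of the two checks (not the package's actual validator):

```typescript
// `[, 'id']` has length 2 with a hole at index 0.
const sparse: (string | undefined)[] = [, 'id'];

// `some` never visits the hole, so no non-string entry is ever observed.
const someRejects = sparse.some((entry) => typeof entry !== 'string');

// `for...of` uses the array iterator, which yields holes as `undefined`,
// so the hole is surfaced and rejected.
let forOfRejects = false;
for (const entry of sparse) {
  if (typeof entry !== 'string') {
    forOfRejects = true;
  }
}

console.log(someRejects); // false: the hole slips through a some()-based check
console.log(forOfRejects); // true: the explicit iteration catches it
```

This is why validation written as `value.some((e) => typeof e !== 'string')` accepts `[, 'id']`, while the explicit `for...of` loop throws on it.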
…e task Rewrite the ADR to address six issues identified in review: remove "staged" from title and body, restructure the middle section from a flat reference into problem-motivated narrative, remove transient project references, reframe known gaps as architectural limitations, explain jargon on first use, and add motivation before mechanics in each subsection. Add a rename task to the project plan (Milestone 3) to remove "Staged" from all public types, file names, and internal helpers.
These were implementation status items (preset bypass, no-emit type propagation, IR convergence), not consequences of the architectural decision. The costs section already captures the real trade-offs.
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@docs/architecture` docs/adrs/ADR 181 - Staged contract DSL for SQL TS
authoring.md:
- Around line 146-154: The lowering pipeline diagram uses Unicode arrows; update
the diagram text to use ASCII arrows instead (e.g., replace "→" with "->" or
"-->") so labels comply with the guidelines; specifically change the arrows on
the lines containing "model() + field.* + rel.*",
"buildSemanticContractDefinition() → SqlSemanticContractDefinition", and
"buildSqlContractFromSemanticDefinition() → ContractIR" to use ASCII arrows
while leaving the rest of the diagram unchanged.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yml
Review profile: CHILL
Plan: Pro
Run ID: 09b7071f-c7bb-4b57-bf4b-b3af3b490ec8
⛔ Files ignored due to path filters (1)
`projects/ts-contract-authoring-redesign/plan.md` is excluded by `!projects/**`
📒 Files selected for processing (5)
- docs/architecture docs/adrs/ADR 181 - Staged contract DSL for SQL TS authoring.md
- examples/prisma-next-demo/src/prisma-no-emit/context.ts
- packages/1-framework/1-core/shared/contract/src/framework-authoring.ts
- packages/1-framework/1-core/shared/contract/test/framework-components.authoring.test.ts
- test/integration/test/staged-dsl-type-inference.test-d.ts
✅ Files skipped from review due to trivial changes (1)
- packages/1-framework/1-core/shared/contract/src/framework-authoring.ts
🚧 Files skipped from review as they are similar to previous changes (2)
- packages/1-framework/1-core/shared/contract/test/framework-components.authoring.test.ts
- examples/prisma-next-demo/src/prisma-no-emit/context.ts
docs/architecture docs/adrs/ADR 181 - Staged contract DSL for SQL TS authoring.md
…contracts ValidateSqlStageSpec in the .sql() parameter type forced TypeScript to resolve the validation during callback return type inference. For models with foreign key constraints, this added enough type depth to widen table name literals (e.g. "post" -> string). Moving validation to the return type position lets inference complete first with full literal precision, then checks validity lazily. This fixes the no-emit flow acceptance criterion: downstream schema() / sql() inference now works from inferred contract types without needing the emitted Contract type from contract.d.ts.
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
packages/3-targets/3-targets/postgres/src/core/migrations/planner-ddl-builders.ts (1)
93-125: ⚠️ Potential issue | 🟠 Major
Only allow the named-type fallback for `typeRef` columns. Lines 123-125 currently treat any codec with `planTypeOperations` as safe to skip `expandNativeType`. For inline `typeParams` columns, that drops the declared parameters and emits bare `nativeType` instead of failing fast.
Suggested fix

```diff
-  const expanded = expandParameterizedTypeSql(resolved, codecHooks);
+  const expanded = expandParameterizedTypeSql(
+    resolved,
+    column.typeRef !== undefined,
+    codecHooks,
+  );
   if (expanded !== null) {
     return expanded;
   }
 @@
 function expandParameterizedTypeSql(
   column: Pick<StorageColumn, 'nativeType' | 'codecId' | 'typeParams'>,
+  allowNamedTypeFallback: boolean,
   codecHooks: Map<string, CodecControlHooks>,
 ): string | null {
 @@
   const hooks = codecHooks.get(column.codecId);
   if (!hooks?.expandNativeType) {
-    if (hooks?.planTypeOperations) {
+    if (allowNamedTypeFallback && hooks?.planTypeOperations) {
       return null;
     }
     throw new Error(
       `Column declares typeParams for nativeType "${column.nativeType}" ` +
```

That same silent fallback will also make the mirrored `buildExpectedFormatType()` path under-report the declared type.
As per coding guidelines, "Do not treat missing type parameters as a successful match when the contract declares them."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/3-targets/3-targets/postgres/src/core/migrations/planner-ddl-builders.ts` around lines 93 - 125, expandParameterizedTypeSql currently allows a silent fallback when a codec has planTypeOperations, which drops declared typeParams for inline columns; update expandParameterizedTypeSql to accept the column's typeRef (e.g., change its Pick to include 'typeRef') and only return null for the planTypeOperations fallback when column.typeRef is truthy (i.e., it's a named/typeRef column); otherwise, if typeParams exist and hooks.expandNativeType is missing, throw an error to avoid emitting bare nativeType. Apply the same guarding change to the mirrored buildExpectedFormatType path so both paths enforce that missing expandNativeType is only tolerated for typeRef columns.
🧹 Nitpick comments (3)
packages/2-sql/2-authoring/contract-ts/src/staged-contract-dsl.ts (3)
317-322: Use `ifDefined()` for conditional object spreads. The conditional spread patterns here could use the already-imported `ifDefined()` helper for consistency with the rest of the codebase.
♻️ Suggested refactor using ifDefined

```diff
 return new ScalarFieldBuilder({
   ...this.state,
-  ...(spec.column ? { columnName: spec.column } : {}),
-  ...(idSpec ? { id: { name: idSpec.name } } : {}),
-  ...(uniqueSpec ? { unique: { name: uniqueSpec.name } } : {}),
+  ...ifDefined('columnName', spec.column),
+  ...(idSpec ? { id: { name: idSpec.name } } : {}),
+  ...(uniqueSpec ? { unique: { name: uniqueSpec.name } } : {}),
 } as unknown as ApplyFieldSqlSpec<State, Spec>);
```

As per coding guidelines: "Use `ifDefined()` from `@prisma-next/utils/defined` for conditional object spreads instead of inline conditional spread patterns".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/2-sql/2-authoring/contract-ts/src/staged-contract-dsl.ts` around lines 317 - 322, Replace the inline conditional spreads in the ScalarFieldBuilder construction with the ifDefined() helper: instead of using ...(spec.column ? { columnName: spec.column } : {}) and similar for idSpec and uniqueSpec, wrap each optional object in ifDefined(...) so the final object merges this.state and ifDefined({ columnName: spec.column }), ifDefined(idSpec ? { id: { name: idSpec.name } } : undefined) and ifDefined(uniqueSpec ? { unique: { name: uniqueSpec.name } } : undefined) and cast to ApplyFieldSqlSpec; update the ScalarFieldBuilder({...}) call to use these ifDefined-wrapped values.
1100-1109: Type guard doesn't verify the `source` property. The `isLazyRelationModelName` type guard asserts `value is LazyRelationModelName<string>`, which includes `source: 'lazyToken'`, but the function only checks `kind` and `resolve`. While unlikely to cause issues in practice (since the `kind` value is very specific), adding a `source` check would make the guard more complete.
♻️ Suggested addition

```diff
 function isLazyRelationModelName(value: unknown): value is LazyRelationModelName<string> {
   return (
     typeof value === 'object' &&
     value !== null &&
     'kind' in value &&
     (value as { kind?: unknown }).kind === 'lazyRelationModelName' &&
+    'source' in value &&
+    (value as { source?: unknown }).source === 'lazyToken' &&
     'resolve' in value &&
     typeof (value as { resolve?: unknown }).resolve === 'function'
   );
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/2-sql/2-authoring/contract-ts/src/staged-contract-dsl.ts` around lines 1100 - 1109, The type guard isMissing the `source` check for LazyRelationModelName: update the isLazyRelationModelName function to also verify that value has a `source` property equal to the expected literal (e.g., 'lazyToken'); specifically, in the function that currently checks `kind === 'lazyRelationModelName'` and `resolve` (isLazyRelationModelName), add a check like `'source' in value && (value as { source?: unknown }).source === 'lazyToken'` so the guard fully narrows to LazyRelationModelName<string>.
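The gap the reviewer describes can be shown with a counterfeit token. The interface below is a sketch reconstructed from the properties named in the review (`kind`, `source`, `resolve`), not the package's actual declaration:

```typescript
// Sketch of the token shape from the review; the real LazyRelationModelName
// type in staged-contract-dsl.ts may carry more members.
interface LazyRelationModelName<T extends string> {
  kind: 'lazyRelationModelName';
  source: 'lazyToken';
  resolve: () => T;
}

// The loose guard from the review: checks only `kind` and `resolve`.
function looseGuard(value: unknown): value is LazyRelationModelName<string> {
  return (
    typeof value === 'object' &&
    value !== null &&
    (value as { kind?: unknown }).kind === 'lazyRelationModelName' &&
    typeof (value as { resolve?: unknown }).resolve === 'function'
  );
}

// The strict guard with the suggested `source` check added.
function strictGuard(value: unknown): value is LazyRelationModelName<string> {
  return looseGuard(value) && (value as { source?: unknown }).source === 'lazyToken';
}

// An object missing `source` slips past the loose guard but not the strict one.
const counterfeit = { kind: 'lazyRelationModelName', resolve: () => 'user' };
console.log(looseGuard(counterfeit)); // true
console.log(strictGuard(counterfeit)); // false
```

Both guards narrow to the same type, so downstream code reading `token.source` is only actually safe behind the strict variant.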
345-351: Use `ifDefined()` for the conditional `typeParams` spread.
♻️ Suggested refactor

```diff
 return new ScalarFieldBuilder({
   kind: 'scalar',
   descriptor: {
     ...spec.type,
-    ...(spec.typeParams ? { typeParams: spec.typeParams } : {}),
+    ...ifDefined('typeParams', spec.typeParams),
   },
   nullable: false,
   executionDefault: spec.generated,
 });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/2-sql/2-authoring/contract-ts/src/staged-contract-dsl.ts` around lines 345 - 351, Replace the inline conditional spread for type parameters in the descriptor with the shared ifDefined() helper: inside the object passed to whatever constructs the descriptor (the block that currently uses "...(spec.typeParams ? { typeParams: spec.typeParams } : {})"), call ifDefined(spec.typeParams, tp => ({ typeParams: tp })) so the descriptor becomes "{ ...spec.type, ...ifDefined(spec.typeParams, tp => ({ typeParams: tp })) }"; update any imports to include ifDefined if not already present and ensure this change is applied where descriptor is assembled in staged-contract-dsl.ts.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In
`@packages/3-targets/3-targets/postgres/src/core/migrations/planner-sql-checks.ts`:
- Around line 107-125: The code directly indexes storageTypes[column.typeRef]
(e.g., in resolveColumnTypeMetadata) without guarding against prototype
properties; fix by checking Object.hasOwn(storageTypes, column.typeRef) before
accessing storageTypes[column.typeRef] and treat non-own keys as "not found"
(return the original column or existing fallback), and apply the same pattern in
the corresponding function in planner-ddl-builders.ts and the ternary in
planner-identity-values.ts so the resolution only uses own properties.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yml
Review profile: CHILL
Plan: Pro
Run ID: 3d1504a8-25b6-40c5-92ae-2387088dd39b
📒 Files selected for processing (8)
- examples/prisma-next-demo/src/prisma-no-emit/context.ts
- packages/2-sql/2-authoring/contract-ts/src/staged-contract-dsl.ts
- packages/3-targets/3-targets/postgres/src/core/migrations/planner-ddl-builders.ts
- packages/3-targets/3-targets/postgres/src/core/migrations/planner-recipes.ts
- packages/3-targets/3-targets/postgres/src/core/migrations/planner-reconciliation.ts
- packages/3-targets/3-targets/postgres/src/core/migrations/planner-sql-checks.ts
- packages/3-targets/3-targets/postgres/src/core/migrations/planner.ts
- test/integration/test/staged-dsl-type-inference.test-d.ts
🚧 Files skipped from review as they are similar to previous changes (2)
- packages/3-targets/3-targets/postgres/src/core/migrations/planner-recipes.ts
- packages/3-targets/3-targets/postgres/src/core/migrations/planner-reconciliation.ts
packages/3-targets/3-targets/postgres/src/core/migrations/planner-sql-checks.ts
Re: `Object.hasOwn()` guard — Done. Applied
c0a3fb2 to ddaa4f0
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In
`@packages/3-targets/3-targets/postgres/src/core/migrations/planner-ddl-builders.ts`:
- Around line 73-103: The function buildColumnTypeSql currently treats named
typeRefs as pseudo-types for autoincrement; add a boolean parameter (e.g.,
allowPseudoTypes = false) and only apply the autoincrement ->
SERIAL/BIGSERIAL/SMALLSERIAL mapping when allowPseudoTypes is true (or when
column.typeRef is absent), leaving the existing behavior for canonical rendering
otherwise; update callers that render CREATE/ADD column definitions to pass true
and callers used for ALTER/verification (the planner-reconciliation and
planner-sql-checks call sites) to pass false so verification uses canonical
integer/bigint/smallint.
- Around line 121-125: The current check in planner-ddl-builders.ts that returns
null when hooks?.expandNativeType is missing causes declared typeParams to be
dropped; instead, when codecHooks.get(column.codecId) exists and has
planTypeOperations but no expandNativeType and the column has typeParams, do not
return null — fail fast: throw a descriptive Error (or otherwise propagate a
failure) including the codecId and the typeParams so buildColumnTypeSql does not
silently fall back to bare nativeType; update the logic around codecHooks,
expandNativeType, planTypeOperations, and column.typeParams to
preserve/propagate the missing-inline-renderer error rather than returning null.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yml
Review profile: CHILL
Plan: Pro
Run ID: f09ff712-862b-4d7f-b0c1-f44d2fdcdea8
📒 Files selected for processing (3)
- packages/3-targets/3-targets/postgres/src/core/migrations/planner-ddl-builders.ts
- packages/3-targets/3-targets/postgres/src/core/migrations/planner-identity-values.ts
- packages/3-targets/3-targets/postgres/src/core/migrations/planner-sql-checks.ts
🚧 Files skipped from review as they are similar to previous changes (2)
- packages/3-targets/3-targets/postgres/src/core/migrations/planner-identity-values.ts
- packages/3-targets/3-targets/postgres/src/core/migrations/planner-sql-checks.ts
packages/3-targets/3-targets/postgres/src/core/migrations/planner-ddl-builders.ts
Re: SERIAL pseudo-type thread — Not addressing in this PR. The SERIAL pseudo-type in
Re: planTypeOperations null return thread — Not addressing in this PR. The
buildColumnTypeSql now takes an allowPseudoTypes parameter (default true). The reconciliation module passes false when building ALTER COLUMN TYPE statements, since SERIAL/BIGSERIAL/SMALLSERIAL are only valid in CREATE TABLE and ADD COLUMN contexts. Addresses CodeRabbit review feedback about pseudo-type rendering drift between planning and verification.
Actionable comments posted: 1
♻️ Duplicate comments (2)
packages/3-targets/3-targets/postgres/src/core/migrations/planner-ddl-builders.ts (2)
121-129: ⚠️ Potential issue | 🟠 Major

Missing `expandNativeType()` still needs to fail hard when `typeParams` exist.

Returning `null` here makes `buildColumnTypeSql()` fall back to the bare type name and silently drop declared parameters for codecs that only expose planning hooks. That emits incorrect DDL instead of surfacing the missing inline renderer.

🛠️ Minimal fix

```diff
   const hooks = codecHooks.get(column.codecId);
   if (!hooks?.expandNativeType) {
-    if (hooks?.planTypeOperations) {
-      return null;
-    }
     throw new Error(
       `Column declares typeParams for nativeType "${column.nativeType}" ` +
         `but no expandNativeType hook is registered for codecId "${column.codecId}". ` +
         'Ensure the extension providing this codec is included in extensionPacks.',
```

As per coding guidelines, "Do not treat missing type parameters as a successful match when the contract declares them."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/3-targets/3-targets/postgres/src/core/migrations/planner-ddl-builders.ts` around lines 121 - 129, The current branch in the codec hook lookup silently returns null when expandNativeType is missing but planTypeOperations exists, causing buildColumnTypeSql to drop declared typeParams; change the logic in the codecHooks handling (around codecHooks.get(column.codecId) / expandNativeType / planTypeOperations) to throw an Error when column.typeParams (or any declared type parameters on column) are present and no expandNativeType is registered for that codecId/nativeType, rather than returning null—leave the existing null return only for cases where no typeParams are declared; ensure the thrown error message references column.nativeType and column.codecId so callers can act on the missing inline renderer.
24-24: ⚠️ Potential issue | 🟠 Major

Keep `SERIAL`/`BIGSERIAL` rendering out of the canonical type path.

After `resolveColumnTypeMetadata()`, named `int2`/`int4`/`int8` refs are always rewritten to pseudo-types here. `buildExpectedFormatType()` in `planner-sql-checks.ts` still formats the resolved type, never `SERIAL`, so reconciliation can drift for autoincrement named types. Gate this branch behind an `allowPseudoTypes` flag and enable it only from the CREATE/ADD-column callers.

🛠️ Minimal fix

```diff
 export function buildColumnTypeSql(
   column: StorageColumn,
   codecHooks: Map<string, CodecControlHooks>,
   storageTypes: Record<string, StorageTypeInstance> = {},
+  allowPseudoTypes = false,
 ): string {
   const resolved = resolveColumnTypeMetadata(column, storageTypes);
   const columnDefault = column.default;
-  if (columnDefault?.kind === 'function' && columnDefault.expression === 'autoincrement()') {
+  if (
+    allowPseudoTypes &&
+    columnDefault?.kind === 'function' &&
+    columnDefault.expression === 'autoincrement()'
+  ) {
```

Then pass `true` only from `buildCreateTableSql()` and `buildAddColumnSql()`.

Also applies to: 81-90, 227-227
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/3-targets/3-targets/postgres/src/core/migrations/planner-ddl-builders.ts` at line 24, The code currently rewrites named int types (int2/int4/int8) to pseudo-types like SERIAL/BIGSERIAL during resolveColumnTypeMetadata() unconditionally, causing buildExpectedFormatType() to see different canonical types and drift for autoincrement columns; add an allowPseudoTypes boolean parameter to the path in planner-ddl-builders.ts that gates this pseudo-type rewrite and thread it through resolveColumnTypeMetadata() so callers can opt-in, then call with allowPseudoTypes=true only from buildCreateTableSql() and buildAddColumnSql(), leaving other callers (and buildExpectedFormatType()) using the canonical types.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In
`@packages/3-targets/3-targets/postgres/src/core/migrations/planner-ddl-builders.ts`:
- Around line 98-103: The code is incorrectly quoting resolved native SQL types:
when column.typeRef is truthy the function returns
quoteIdentifier(resolved.nativeType) which turns type syntax like "uuid[]" or
"double precision" into identifiers; instead, return the raw resolved.nativeType
after asserting it's safe. Update the branch that currently calls
quoteIdentifier to call assertSafeNativeType(resolved.nativeType) (or reuse the
existing assertSafeNativeType) and return resolved.nativeType unquoted; keep
quoteIdentifier only for actual identifiers, not for values coming from
resolved.nativeType (references: column.typeRef, resolved.nativeType,
quoteIdentifier, assertSafeNativeType).
---
Duplicate comments:
In
`@packages/3-targets/3-targets/postgres/src/core/migrations/planner-ddl-builders.ts`:
- Around line 121-129: The current branch in the codec hook lookup silently
returns null when expandNativeType is missing but planTypeOperations exists,
causing buildColumnTypeSql to drop declared typeParams; change the logic in the
codecHooks handling (around codecHooks.get(column.codecId) / expandNativeType /
planTypeOperations) to throw an Error when column.typeParams (or any declared
type parameters on column) are present and no expandNativeType is registered for
that codecId/nativeType, rather than returning null—leave the existing null
return only for cases where no typeParams are declared; ensure the thrown error
message references column.nativeType and column.codecId so callers can act on
the missing inline renderer.
- Line 24: The code currently rewrites named int types (int2/int4/int8) to
pseudo-types like SERIAL/BIGSERIAL during resolveColumnTypeMetadata()
unconditionally, causing buildExpectedFormatType() to see different canonical
types and drift for autoincrement columns; add an allowPseudoTypes boolean
parameter to the path in planner-ddl-builders.ts that gates this pseudo-type
rewrite and thread it through resolveColumnTypeMetadata() so callers can opt-in,
then call with allowPseudoTypes=true only from buildCreateTableSql() and
buildAddColumnSql(), leaving other callers (and buildExpectedFormatType()) using
the canonical types.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yml
Review profile: CHILL
Plan: Pro
Run ID: 2ae0050a-f9c4-45de-8154-3c158ce10287
📒 Files selected for processing (3)
- packages/3-targets/3-targets/postgres/src/core/migrations/planner-ddl-builders.ts
- packages/3-targets/3-targets/postgres/src/core/migrations/planner-identity-values.ts
- packages/3-targets/3-targets/postgres/src/core/migrations/planner-sql-checks.ts
🚧 Files skipped from review as they are similar to previous changes (1)
- packages/3-targets/3-targets/postgres/src/core/migrations/planner-sql-checks.ts
```diff
   if (column.typeRef) {
-    return quoteIdentifier(column.nativeType);
+    return quoteIdentifier(resolved.nativeType);
   }

-  assertSafeNativeType(column.nativeType);
-  return renderParameterizedTypeSql(column, codecHooks) ?? column.nativeType;
+  assertSafeNativeType(resolved.nativeType);
+  return resolved.nativeType;
```
Don't quote resolved native-type syntax.
Once typeRef has been resolved, resolved.nativeType can already be SQL type syntax like uuid[] or double precision. quoteIdentifier() turns those into "uuid[]" / "double precision", which PostgreSQL treats as identifiers, not the intended type expression. This can emit invalid DDL for otherwise valid named storage types.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@packages/3-targets/3-targets/postgres/src/core/migrations/planner-ddl-builders.ts`
around lines 98 - 103, The code is incorrectly quoting resolved native SQL
types: when column.typeRef is truthy the function returns
quoteIdentifier(resolved.nativeType) which turns type syntax like "uuid[]" or
"double precision" into identifiers; instead, return the raw resolved.nativeType
after asserting it's safe. Update the branch that currently calls
quoteIdentifier to call assertSafeNativeType(resolved.nativeType) (or reuse the
existing assertSafeNativeType) and return resolved.nativeType unquoted; keep
quoteIdentifier only for actual identifiers, not for values coming from
resolved.nativeType (references: column.typeRef, resolved.nativeType,
quoteIdentifier, assertSafeNativeType).
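The failure mode the comment describes can be made concrete with a small sketch. Both helpers below are simplified assumptions standing in for the package's `quoteIdentifier` and `assertSafeNativeType`:

```typescript
// Illustrative sketch: why quoteIdentifier() must not be applied to resolved
// native-type syntax. Quoting "uuid[]" yields the identifier "uuid[]", which
// PostgreSQL treats as a (nonexistent) type name, not the array type uuid[].
function quoteIdentifier(name: string): string {
  return `"${name.replace(/"/g, '""')}"`;
}

// Tiny allowlist check standing in for assertSafeNativeType():
// word characters, optional length/precision parens, optional array suffix.
function assertSafeNativeType(nativeType: string): void {
  if (!/^[a-z_][a-z0-9_ ]*(\(\d+(,\s*\d+)?\))?(\[\])?$/i.test(nativeType)) {
    throw new Error(`Unsafe native type: ${nativeType}`);
  }
}

function renderColumnType(nativeType: string): string {
  assertSafeNativeType(nativeType);
  // Returned unquoted so multi-word and array type syntax stays valid DDL.
  return nativeType;
}
```

The design point: `quoteIdentifier` is for identifiers (table and column names); type expressions must pass through an assertion and stay unquoted.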
ShouldInsertSnakeUnderscore now treats PrevKind "other" (digits, underscore) the same as "lower" when deciding to insert an underscore before an uppercase letter. This matches the runtime applyNaming, which treats digits as lowercase characters, preventing naming drift for names like "user1ID" (runtime: user1_id; the type-level transform previously produced user1id).
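The digit-handling fix can be sketched as follows. The function name and shape are assumptions for illustration, not the package's actual `ShouldInsertSnakeUnderscore` API:

```typescript
// Illustrative sketch of the fix: digits count as lowercase context when
// deciding whether to insert an underscore before an uppercase letter.
function toSnakeCase(name: string): string {
  let out = '';
  let prev = '';
  for (const ch of name) {
    const isUpper = ch >= 'A' && ch <= 'Z';
    // Previously only a lowercase prev triggered the underscore; a digit did
    // not, so "user1ID" lowered to "user1id" while the runtime emitted
    // "user1_id". Treating digits like lowercase removes the drift.
    const prevIsLowerish =
      (prev >= 'a' && prev <= 'z') || (prev >= '0' && prev <= '9');
    if (isUpper && prevIsLowerish) out += '_';
    out += ch.toLowerCase();
    prev = ch;
  }
  return out;
}
```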
Consolidates the duplicated resolveColumnTypeMetadata function from planner-ddl-builders.ts and planner-sql-checks.ts into a shared planner-type-resolution.ts module. The duplication was originally introduced to avoid a circular import, but extracting into a third module eliminates both the duplication and the circular dependency risk.
Replace the Object.hasOwn guard with direct index access and an undefined check, which TypeScript can narrow correctly under noUncheckedIndexedAccess.
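The narrowing difference can be shown with a small sketch; `lookupType` and its record shape are invented for illustration:

```typescript
// Illustrative sketch: under noUncheckedIndexedAccess, an indexed read is
// typed T | undefined, so a direct read plus an undefined check narrows
// cleanly. An Object.hasOwn(storageTypes, ref) guard does not refine the
// type of the subsequent storageTypes[ref] access the same way.
function lookupType(
  storageTypes: Record<string, { nativeType: string }>,
  ref: string,
): string | null {
  const entry = storageTypes[ref]; // { nativeType: string } | undefined
  if (entry === undefined) return null;
  return entry.nativeType; // narrowed: entry is defined here
}
```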
Summary
This PR redesigns the SQL TypeScript contract authoring surface around a staged DSL with a shared semantic core and a minimal SQL overlay.
I moved TS authoring from separate fluent table-and-model chains — with repeated storage choreography, stringly cross-model refs, and hard-coded helper vocabulary — to:
- a `defineContract(...)` shell
- `model('User', ...)` tokens with staged `.attributes(...)` chains
- a `.sql(...)` overlay for table names, FK/index names, and target-specific detail

I did not set out to invent a new IR, change runtime validation semantics, or change PSL syntax. I set out to make `contract.ts` read more like the model graph authors mean — while still lowering to the same explicit contract model, still emitting the same `contract.json` and `contract.d.ts`.
contract.tsread more like the model graph authors mean — while still lowering to the same explicit contract model, still emitting the samecontract.jsonandcontract.d.ts.Design Principles and Where They Show Up
Most of the branch only makes sense in light of five design principles. I merged the motivation with the implementation trace to keep things compact.
1. Semantic-first authoring
Authors should describe the model graph first; SQL detail is a fallback. The public surface reflects this staging:
`fields` → `relations` → `.attributes(...)` → `.sql(...)`. I pushed compound identity and uniqueness into the semantic layer and kept `.sql(...)` intentionally small.

2. Shared composition owns the vocabulary

TS should not hard-code a second copy of the target and extension authoring surface. I moved built-in field presets into shared portable SQL authoring descriptors and derived the staged `field.*` surface from pack-provided descriptors, while PSL consumes the same family and extension composition. The DSL changes because composition changes, not because TS maintains a parallel vocabulary.

3. TS and PSL must meet at the same semantic contract

The TS authoring surface can change freely, but equivalent TS and PSL contracts must normalize to the same IR. I treated this as a proof obligation: the branch adds a real parity fixture in `packages/2-sql/2-authoring/contract-psl/test/ts-psl-parity.test.ts`, and I updated PSL interpretation so both surfaces preserve the same semantic information when lowering.

4. No-emit stays first-class and typed

The TypeScript surface has to work well when the contract itself is the source of truth — no generated runtime code. I added typed model tokens, typed cross-model refs (`User.refs.id`, `User.ref('id')`), and staged relation wiring that avoids forcing whole-contract graph generics.

5. Fallbacks exist but are visibly second-best
String-based refs still work. I kept fallback APIs, added diagnostics around them, and made typed tokens the primary path.
The Four Connected Changes
This PR looks like one API redesign, but it is really four connected pieces — plus hardening.
1. **New staged TS authoring surface.** The main user-facing change lives in `packages/2-sql/2-authoring/contract-ts/src/staged-contract-dsl.ts` and the public entrypoint `packages/2-sql/2-authoring/contract-ts/src/contract-builder.ts`. Authors define model tokens once, then progressively attach scalar fields, relations, semantic attributes, and SQL-only overlays. This is the most visible change, but it is only credible because the branch also does the next three.

2. **Shared preset and helper composition.** The old TS surface duplicated too much framework-local helper vocabulary. I moved built-in field presets into `packages/2-sql/1-core/contract/src/authoring.ts`, reused them from the Postgres pack in `packages/3-targets/3-targets/postgres/src/core/authoring.ts`, and routed staged helper composition through `packages/2-sql/2-authoring/contract-ts/src/authoring-helper-runtime.ts` and `packages/2-sql/2-authoring/contract-ts/src/composed-authoring-helpers.ts`. The staged surface is now composition-driven rather than hand-maintained.

3. **Extracted lowering pipeline.** The staged lowering path is no longer buried in one large builder file. I factored it into an explicit pipeline in `packages/2-sql/2-authoring/contract-ts/src/staged-contract-lowering.ts`, with separate phases for spec collection, fallback diagnostics, and semantic lowering. This is an internal refactor, but it is what makes the new surface reviewable instead of magical.

4. **PSL alignment and proof.** I did not change PSL syntax. I updated PSL interpretation in `packages/2-sql/2-authoring/contract-psl/src/interpreter.ts` so PSL preserves the same semantic information the staged TS surface lowers, including composed pack metadata and named constraints. The proof lives in `packages/2-sql/2-authoring/contract-psl/test/ts-psl-parity.test.ts`.

5. **Hardening.** Once the new surface existed, I closed gaps that showed up around it: duplicate table/column naming after naming transforms now fails fast; helper namespace merging rejects prototype-polluting keys; authoring template resolution is stricter; pgvector named-type parity is normalized around named type refs; repetitive staged DSL tests were compacted with shared builders and `it.each(...)`.
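One of the hardening items — rejecting prototype-polluting keys during helper namespace merging — can be sketched as follows. The function name, forbidden-key list, and error shape are assumptions, not the package's API:

```typescript
// Illustrative sketch: refuse keys that would write into the prototype chain
// when merging pack-provided helper namespaces into the composed surface.
const FORBIDDEN_KEYS = new Set(['__proto__', 'prototype', 'constructor']);

function mergeHelperNamespace(
  target: Record<string, unknown>,
  addition: Record<string, unknown>,
): Record<string, unknown> {
  for (const key of Object.keys(addition)) {
    if (FORBIDDEN_KEYS.has(key)) {
      throw new Error(
        `Refusing to merge prototype-polluting helper key "${key}"`,
      );
    }
    target[key] = addition[key];
  }
  return target;
}
```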
it.each(...).Before / After
The easiest way to review this PR is to look at the API shift from a few angles.
Angle 1: whole-contract authoring shape
Before — TS authoring split storage and model concerns across separate fluent chains:
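The original "before" code block did not survive this rendering. A hedged reconstruction of the old shape — every name below is invented for illustration, with minimal stubs only to make the two parallel chains concrete:

```typescript
// Hypothetical illustration of the pre-redesign shape: storage and model
// concerns authored in separate fluent chains, with stringly-typed refs.
interface Col { name: string; nativeType: string; primaryKey?: boolean }

interface TableChain {
  name: string;
  columns: Col[];
  column(col: string, nativeType: string, opts?: { primaryKey?: boolean }): TableChain;
}

function table(name: string): TableChain {
  const columns: Col[] = [];
  const chain: TableChain = {
    name,
    columns,
    column(col, nativeType, opts = {}) {
      columns.push({ name: col, nativeType, ...opts });
      return chain;
    },
  };
  return chain;
}

interface ModelChain {
  fields: string[];
  refs: Array<[string, string]>;
  field(f: string): ModelChain;
  foreignKey(from: string, to: string): ModelChain;
}

function modelFor(_model: string, _storage: TableChain): ModelChain {
  const fields: string[] = [];
  const refs: Array<[string, string]> = [];
  const chain: ModelChain = {
    fields,
    refs,
    field(f) { fields.push(f); return chain; },
    foreignKey(from, to) { refs.push([from, to]); return chain; },
  };
  return chain;
}

// Storage chain and model chain authored separately:
const usersTable = table('users')
  .column('id', 'uuid', { primaryKey: true })
  .column('email', 'text');

const userModel = modelFor('User', usersTable)
  .field('id')
  .field('email')
  .foreignKey('posts.author_id', 'users.id'); // stringly-typed cross-model ref
```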
After — the contract shell is object-literal based, model tokens are defined once, and relations/attributes/SQL details layer on top:
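The "after" shape can be sketched as follows. `defineContract`, `model('User', ...)`, `.attributes(...)`, and `.sql(...)` are surface names from this PR, but the stub implementations and option shapes below are invented for illustration:

```typescript
// Hedged sketch of the staged shape: a model token is defined once, then
// semantic attributes and a target-specific SQL overlay layer on top.
interface FieldRef { model: string; field: string }

interface ModelToken {
  name: string;
  attrs: Record<string, unknown>;
  sqlOverlay: Record<string, unknown>;
  refs: Record<string, FieldRef>;
  attributes(attrs: Record<string, unknown>): ModelToken;
  sql(overlay: Record<string, unknown>): ModelToken;
}

function model(name: string, fields: Record<string, { kind: string }>): ModelToken {
  const token: ModelToken = {
    name,
    attrs: {},
    sqlOverlay: {},
    // Typed cross-model refs derived from the token's own fields.
    refs: Object.fromEntries(
      Object.keys(fields).map((f) => [f, { model: name, field: f }] as const),
    ),
    attributes(attrs) { token.attrs = attrs; return token; },
    sql(overlay) { token.sqlOverlay = overlay; return token; },
  };
  return token;
}

function defineContract(shape: { models: ModelToken[] }) {
  return shape;
}

const User = model('User', { id: { kind: 'uuid' }, email: { kind: 'text' } })
  .attributes({ unique: [['email']] }) // semantic intent
  .sql({ table: 'users' });            // target-specific overlay

const contract = defineContract({ models: [User] });
```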
I wanted the contract shell to read like a declarative model graph, not like a pair of parallel builder programs.
Angle 2: cross-model refs
Before — cross-model foreign-key authoring was stringly typed:
After — refs come from the typed model token that owns them:
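The contrast can be sketched directly. `User.refs.id` and `User.ref('id')` are the surface names the PR describes; the token implementation below is invented:

```typescript
// Hedged sketch contrasting stringly-typed refs with typed model tokens.
interface FieldRef { model: string; field: string }

function makeModelToken<F extends string>(name: string, fieldNames: readonly F[]) {
  const refs = {} as Record<F, FieldRef>;
  for (const f of fieldNames) refs[f] = { model: name, field: f };
  return {
    name,
    refs, // typed: User.refs.id is a compile-time-checked property
    ref(field: F): FieldRef { return refs[field]; }, // fallback accessor
  };
}

const User = makeModelToken('User', ['id', 'email'] as const);

// Before: a string the compiler cannot check against any model.
const stringlyRef = 'users.id';

// After: the token that owns the field produces the reference.
const typedRef = User.refs.id;
```

A typo like `User.refs.idd` fails at compile time; the string `'users.idd'` would only fail at lowering, if at all.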
A small API change, but a big design shift: refs come from typed model tokens, not from a global string registry.
Angle 3: semantic constraints vs. SQL-only detail
Before — compound semantics and SQL-only detail lived in the same low-level chain:
After — compound identity and uniqueness live in `.attributes(...)`, target-specific index detail stays in `.sql(...)`:

This layering rule is one of the central ideas: semantic constraints are model intent; target-specific search and index metadata is not.
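The layering rule can be sketched with toy option shapes — `SemanticAttrs` and `SqlOverlay` below are invented stand-ins for the PR's actual option types:

```typescript
// Hedged sketch: compound identity/uniqueness are semantic intent; index
// names and access methods are target-specific SQL detail.
interface SemanticAttrs {
  id?: string[];       // compound primary identity
  unique?: string[][]; // compound uniqueness constraints
}

interface SqlOverlay {
  table?: string;
  indexes?: Array<{ name: string; columns: string[]; using?: string }>;
}

function attributesThenSql(attrs: SemanticAttrs, sql: SqlOverlay) {
  return { attrs, sql };
}

const Membership = attributesThenSql(
  // .attributes(...): portable model intent.
  { id: ['userId', 'orgId'], unique: [['userId', 'email']] },
  // .sql(...): Postgres-specific naming and index detail.
  {
    table: 'memberships',
    indexes: [{ name: 'memberships_org_idx', columns: ['orgId'], using: 'btree' }],
  },
);
```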
Angle 4: TS↔PSL symmetry
I treated TS↔PSL symmetry as a proof obligation, not a slogan. The branch adds a real parity fixture — one representative contract authorable in both surfaces, lowering identically, staying within a terseness threshold.
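The parity idea reduces to "lower both surfaces, compare canonical IR." The real fixture uses the actual lowering pipeline; the canonicalizer and toy lowered outputs below are invented for illustration:

```typescript
// Hedged sketch: a stable stringifier stands in for the shared semantic IR
// comparison the parity test performs.
function stableStringify(value: unknown): string {
  if (Array.isArray(value)) return `[${value.map(stableStringify).join(',')}]`;
  if (value !== null && typeof value === 'object') {
    const entries = Object.entries(value as Record<string, unknown>)
      .sort(([a], [b]) => (a < b ? -1 : 1))
      .map(([k, v]) => `${JSON.stringify(k)}:${stableStringify(v)}`);
    return `{${entries.join(',')}}`;
  }
  return JSON.stringify(value);
}

// Toy "lowered" outputs from the two surfaces; key order differs, but the
// canonical form must not.
const loweredFromTs = { models: [{ name: 'User', unique: [['email']] }] };
const loweredFromPsl = { models: [{ unique: [['email']], name: 'User' }] };
```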
Suggested Review Order
This PR is easier to review in semantic order than in file order.
1. Public authoring shape
- projects/ts-contract-authoring-redesign/contract.before.ts
- projects/ts-contract-authoring-redesign/contract.after.ts
- packages/2-sql/2-authoring/contract-ts/src/staged-contract-dsl.ts

Answers: what is the new authoring model?
2. Lowering path
- packages/2-sql/2-authoring/contract-ts/src/staged-contract-lowering.ts
- packages/2-sql/2-authoring/contract-ts/src/semantic-contract.ts

Answers: how does the staged surface become the canonical contract?
3. Shared composition
- packages/2-sql/1-core/contract/src/authoring.ts
- packages/2-sql/2-authoring/contract-ts/src/authoring-helper-runtime.ts
- packages/2-sql/2-authoring/contract-ts/src/composed-authoring-helpers.ts
- packages/3-targets/3-targets/postgres/src/core/authoring.ts

Answers: is the vocabulary actually pack-driven now, or just renamed?
4. PSL parity and proof
- packages/2-sql/2-authoring/contract-psl/src/interpreter.ts
- packages/2-sql/2-authoring/contract-psl/test/ts-psl-parity.test.ts
- packages/2-sql/2-authoring/contract-ts/test/contract-builder.staged-contract-dsl.parity.test.ts

Answers: does the new TS surface meet the symmetry and terseness goals without changing PSL syntax?
Compatibility / Non-goals
What Reviewers Should Focus On
If those hold, most of the rest is extraction, proof, and hardening in support of that design.
Summary by CodeRabbit
New Features
Improvements