UN-2022 [FEAT] Add co-owner management for Adapters, API Deployments, Connectors, Pipelines, Workflows, Prompt Studio #1797
Walkthrough: Adds co-ownership across the app.

Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Actionable comments posted: 15
Note
Due to the large number of review comments, Critical and Major severity comments were prioritized as inline comments.
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (4)
backend/adapter_processor_v2/views.py (1)
189-248: ⚠️ Potential issue | 🟡 Minor

`co_owners.add()` is inside the `IntegrityError` catch block — potential misattribution.

Lines 205–206 sit inside the `try` block that catches `IntegrityError` and re-raises it as `DuplicateAdapterNameError`. While Django's M2M `.add()` typically handles duplicate entries gracefully, any unexpected `IntegrityError` from the `co_owners` insert would surface as "duplicate adapter name" rather than the true cause, making debugging harder.

Move the `co_owners.add()` call after the `except` block so only `serializer.save()` and the default-adapter logic are covered by that handler:

🛠️ Proposed fix

```diff
 try:
     ...
     instance = serializer.save()
-    if instance.created_by:
-        instance.co_owners.add(instance.created_by)
     organization_member = OrganizationMemberService.get_user_by_id(...)
     ...
     user_default_adapter.save()
 except IntegrityError:
     raise DuplicateAdapterNameError(
         name=serializer.validated_data.get(AdapterKeys.ADAPTER_NAME)
     )
+if instance.created_by:
+    instance.co_owners.add(instance.created_by)
 headers = self.get_success_headers(serializer.data)
 return Response(serializer.data, status=status.HTTP_201_CREATED, headers=headers)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/adapter_processor_v2/views.py` around lines 189 - 248, The try block currently covers serializer.save(), default-adapter logic and the instance.co_owners.add(...) call so any IntegrityError from the M2M add will be misreported as DuplicateAdapterNameError; refactor by keeping serializer.save(), UserDefaultAdapter.get_or_create and the default-assignment logic inside the try/except that catches IntegrityError and raises DuplicateAdapterNameError, but move the instance.co_owners.add(instance.created_by) call (and its created_by check) to execute after the except block so M2M errors are not swallowed by the duplicate-name handler; ensure you still fetch organization_member via OrganizationMemberService.get_user_by_id(request.user.id) before adding co-owner and before saving user_default_adapter as needed.

backend/api_v2/api_deployment_views.py (1)
265-268: ⚠️ Potential issue | 🟠 Major

Co-owners are excluded from `by_prompt_studio_tool` results.

The queryset filter `created_by=request.user` only returns deployments where the requesting user is the original creator. A user who is a co-owner but not the `created_by` will receive an empty list even though they have full access to those deployments. This breaks the equal-privilege guarantee stated in the PR.

🐛 Proposed fix

```diff
-        deployments = APIDeployment.objects.filter(
-            workflow_id__in=workflow_ids, created_by=request.user
-        )
+        deployments = APIDeployment.objects.for_user(request.user).filter(
+            workflow_id__in=workflow_ids
+        )
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/api_v2/api_deployment_views.py` around lines 265 - 268, The current filter on APIDeployment only checks created_by=request.user and thus excludes co-owners; update the deployments queryset (the APIDeployment.objects.filter(...) that assigns deployments) to include deployments where the user is either the creator or a co-owner/collaborator by using a Q expression (e.g., filter(workflow_id__in=workflow_ids).filter(Q(created_by=request.user) | Q(<co_owner_relation>=request.user))). Import Q from django.db.models and replace <co_owner_relation> with the actual relation name on APIDeployment (e.g., owners, collaborators, co_owners) so co-owners are included in the results.

backend/prompt_studio/prompt_studio_core_v2/serializers.py (2)
123-126: ⚠️ Potential issue | 🟠 Major

Co-owner fields are skipped when a tool has zero prompts.

The early return path exits before `add_co_owner_fields`, so ownership metadata is missing for empty tools.

🔧 Proposed fix

```diff
     def to_representation(self, instance):  # type: ignore
         data = super().to_representation(instance)
+        request = self.context.get("request")
+        self.add_co_owner_fields(instance, data, request)
         default_profile = None
@@
         if not prompt_instances.exists():
             data[TSKeys.PROMPTS] = []
             return data
@@
-        # Co-owner information
-        request = self.context.get("request")
-        self.add_co_owner_fields(instance, data, request)
-
         return data
```

Also applies to: 159-161
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/prompt_studio/prompt_studio_core_v2/serializers.py` around lines 123 - 126, The early-return when prompt_instances.exists() is False skips calling add_co_owner_fields, so co-owner metadata is missing for tools with zero prompts; update the serializer to invoke add_co_owner_fields (the function that injects ownership metadata) before returning the empty prompts path or explicitly call add_co_owner_fields inside that branch, ensuring data[TSKeys.PROMPTS] is still set to [] and then add_co_owner_fields(data, ...) is executed for the same prompt_instances/context; apply the same change at the second occurrence around the other branch (the spots corresponding to the existing checks at the two locations).
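To see the ordering issue above in isolation, here is a plain-Python sketch of the fixed flow. It is not the real DRF serializer; the dict shapes and key names are hypothetical stand-ins chosen only to show that attaching ownership metadata before the early return keeps it on every return path.

```python
# Hypothetical stand-in for the serializer's to_representation ordering fix:
# co-owner fields are attached BEFORE the zero-prompts early return, so the
# early-return path cannot skip them. All names here are illustrative.
def to_representation(instance, prompts):
    data = {"tool_id": instance["tool_id"]}
    # Attach ownership metadata first, so every return path includes it.
    data["co_owners"] = list(instance.get("co_owners", []))
    data["created_by"] = instance.get("created_by")
    if not prompts:  # early-return path for tools with zero prompts
        data["prompts"] = []
        return data
    data["prompts"] = prompts
    return data

empty_tool = {
    "tool_id": "t1",
    "co_owners": ["a@example.com"],
    "created_by": "a@example.com",
}
print(to_representation(empty_tool, []))
```

With the original ordering, the `if not prompts` branch returned before the metadata was attached; here the empty-tool result still carries `co_owners` and `created_by`.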
180-190: ⚠️ Potential issue | 🟠 Major

Add email to `UserSerializer` or use a separate field method for co-owner/created-by responses.

The `UserSerializer` (backend/account_v2/serializer.py) only exposes `id` and `username`, but the frontend's co-owner management UI explicitly expects and uses the user's email. The proposed fix uses a `SerializerMethodField` to include it; alternatively, add `"email"` to the `UserSerializer.Meta.fields` tuple if that aligns with security/exposure policies for other endpoints using this serializer.
Verify each finding against the current code and only fix it if needed. In `@backend/prompt_studio/prompt_studio_core_v2/serializers.py` around lines 180 - 190, The UserSerializer used by CustomTool serializer does not include users' email, breaking the frontend co-owner UI; update backend/prompt_studio/prompt_studio_core_v2/serializers.py by exposing email for co_owners/created_by/shared_users: either add "email" to the existing UserSerializer.Meta.fields in backend/account_v2/serializer.py or, if you prefer not to change the global serializer, add a SerializerMethodField or nested field on CustomToolSerializer (referencing co_owners, shared_users, created_by) that returns each user's email; ensure the field name matches what the frontend expects ("email") and return None or empty string when email is missing.
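As a minimal sketch of the payload shape the frontend expects, assuming the keys are `id`, `username`, and `email` (plain Python, not the actual DRF `SerializerMethodField` wiring; the helper name is hypothetical):

```python
# Hypothetical helper mirroring the suggested SerializerMethodField behaviour:
# expose "email" alongside id/username, returning None when it is missing.
def serialize_co_owner(user):
    return {
        "id": user.get("id"),
        "username": user.get("username"),
        "email": user.get("email") or None,  # frontend reads this key
    }

print(serialize_co_owner({"id": 1, "username": "alice"}))
```

Whichever mechanism is chosen (method field or `Meta.fields`), the contract to preserve is simply that every co-owner/created-by object in the response carries an `email` key, possibly null.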
🟡 Minor comments (6)
frontend/src/components/deployments/create-api-deployment-modal/CreateApiDeploymentModal.jsx-178-188 (1)
178-188: ⚠️ Potential issue | 🟡 Minor

Preserve the 404-specific alert message during list refresh.

At Line 181, `updateTableData()` can later fail and overwrite the 404 alert set at Line 182 with the generic "Error fetching API deployments" message from `updateTableData`'s catch. That hides the real cause from users.

Suggested fix

```diff
-  const updateTableData = () => {
+  const updateTableData = ({ suppressErrorAlert = false } = {}) => {
     apiDeploymentsApiService
       .getApiDeploymentsList()
       .then((res) => {
         setTableData(res?.data);
       })
       .catch((err) => {
-        setAlertDetails({
-          type: "error",
-          content: "Error fetching API deployments",
-        });
+        if (!suppressErrorAlert) {
+          setAlertDetails({
+            type: "error",
+            content: "Error fetching API deployments",
+          });
+        }
       });
   };
```

```diff
-        updateTableData();
+        updateTableData({ suppressErrorAlert: true });
         setAlertDetails({
           type: "error",
           content:
             "This resource is no longer accessible. It may have been removed or your access has been revoked.",
         });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/components/deployments/create-api-deployment-modal/CreateApiDeploymentModal.jsx` around lines 178 - 188, The 404-specific alert set via setAlertDetails is being overwritten by updateTableData's error handling; to fix, await updateTableData() (or call it and handle its promise) before setting the 404 alert so the final alert remains the 404 message. Specifically, in the error branch where you call setOpen(false), clearFormDetails(), and updateTableData(), change the order to await updateTableData() inside a try/catch (or call updateTableData().catch(()=>{})) and then call setAlertDetails(...) with the 404 message so updateTableData cannot overwrite it; alternatively add a suppressErrorAlert flag to updateTableData and pass it from this branch to prevent updateTableData from setting its own alert. Ensure you reference setAlertDetails and updateTableData when making the change.

backend/pipeline_v2/serializers/crud.py-203-207 (1)
203-207: ⚠️ Potential issue | 🟡 Minor

Make primary co-owner selection deterministic.

Using `.first()` on an unordered M2M relation can produce unstable `created_by_email` values between requests.

💡 Proposed fix

```diff
 def get_created_by_email(self, obj):
     """Get the email of the primary owner (first co-owner)."""
-    first_co_owner = obj.co_owners.first()
+    first_co_owner = obj.co_owners.order_by("id").first()
     if first_co_owner:
         return first_co_owner.email
     return obj.created_by.email if obj.created_by else None
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/pipeline_v2/serializers/crud.py` around lines 203 - 207, The current use of obj.co_owners.first() is nondeterministic because M2M relations are unordered; make selection deterministic by explicitly ordering the relation before taking the first item (e.g., replace obj.co_owners.first() with obj.co_owners.order_by('pk').first() or order_by('id') / order_by('created_at') if your CoOwner model exposes a creation timestamp) so the primary owner email returned by the method is stable across requests; update the method that computes the primary owner email accordingly (keep the existing fallback to obj.created_by.email).

backend/permissions/co_owner_serializers.py-21-27 (1)
21-27: ⚠️ Potential issue | 🟡 Minor

`created_by_email` can be unstable because it depends on unordered `co_owners.first()`.

Using the first row from an unordered M2M can change who is shown as owner between requests. Make this deterministic.

💡 Suggested fix

```diff
-first_co_owner = instance.co_owners.first()
+first_co_owner = instance.co_owners.order_by("id").first()
 representation["created_by_email"] = (
     first_co_owner.email
     if first_co_owner
     else (instance.created_by.email if instance.created_by else None)
 )
```
Verify each finding against the current code and only fix it if needed. In `@backend/permissions/co_owner_serializers.py` around lines 21 - 27, The current use of instance.co_owners.first() is nondeterministic; change it to select a deterministic co-owner (e.g., earliest by primary key or creation timestamp) such as instance.co_owners.order_by('id').first() or order_by('created_at').first() so created_by_email is stable across requests, then keep the fallback to instance.created_by.email if no co-owner exists and leave co_owners_count as instance.co_owners.count().

frontend/src/components/widgets/share-permission/SharePermission.jsx-40-49 (1)
40-49: ⚠️ Potential issue | 🟡 Minor

Creator filtering uses mixed ID types, so owner exclusion can fail.

`user?.id` is compared to string-converted creator IDs in some branches. Normalize both sides before comparing.

💡 Suggested fix

```diff
   const users = allUsers.filter((user) => {
+    const userId = user?.id?.toString();
+    const creatorId =
+      sharedItem?.created_by?.id?.toString() ??
+      sharedItem?.created_by?.toString();
+
     if (sharedItem?.created_by?.id !== undefined) {
       return isSharableToOrg
-        ? !selectedUsers.includes(user?.id?.toString())
-        : user?.id !== sharedItem?.created_by?.id?.toString() &&
-            !selectedUsers.includes(user?.id?.toString());
+        ? !selectedUsers.includes(userId)
+        : userId !== creatorId && !selectedUsers.includes(userId);
     } else {
       return isSharableToOrg
-        ? !selectedUsers.includes(user?.id?.toString())
-        : user?.id !== sharedItem?.created_by?.toString() &&
-            !selectedUsers.includes(user?.id?.toString());
+        ? !selectedUsers.includes(userId)
+        : userId !== creatorId && !selectedUsers.includes(userId);
     }
   });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/components/widgets/share-permission/SharePermission.jsx` around lines 40 - 49, The creator-owner comparison in the SharePermission render branches mixes number and string types (user?.id vs sharedItem?.created_by?.id / created_by) causing incorrect owner exclusion; normalize IDs to the same type before comparing and when checking selectedUsers. Update the two conditional branches around sharedItem?.created_by to coerce both sides using a single strategy (e.g., String(user?.id) === String(sharedItem?.created_by?.id) and selectedUsers.includes(String(user?.id))) so all comparisons (user?.id vs created_by and selectedUsers.includes) consistently use the same string coercion; apply this to both the created_by.id and created_by branches in the code.

frontend/src/components/widgets/co-owner-management/CoOwnerManagement.jsx-66-75 (1)
66-75: ⚠️ Potential issue | 🟡 Minor

Guard/disable Apply when there are no pending additions.

Lines 66–75 and 101–103 allow submitting an empty add payload.

🛠️ Suggested fix

```diff
   const handleApply = async () => {
     const usersToAdd = [...pendingAdds];
+    if (!usersToAdd.length) return;
     setApplying(true);
     try {
       const userIds = usersToAdd.map((user) => user.id);
       await onAddCoOwner(resourceId, userIds);
     } finally {
       setPendingAdds([]);
       setApplying(false);
     }
   };
   ...
   <Modal
     ...
     onOk={handleApply}
     okText={"Apply"}
     confirmLoading={applying}
+    okButtonProps={{ disabled: loading || applying || pendingAdds.length === 0 }}
     ...
   >
```

Also applies to: 101-103
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/components/widgets/co-owner-management/CoOwnerManagement.jsx` around lines 66 - 75, handleApply currently proceeds when pendingAdds is empty and calls onAddCoOwner with an empty payload; add a guard at the start of handleApply that returns early when pendingAdds.length === 0 (or when usersToAdd.length === 0) so it never sets applying or calls onAddCoOwner for an empty list, and mirror the same guard/early-return in the other add handler that also uses pendingAdds and onAddCoOwner; additionally ensure the Apply button is disabled in the render when pendingAdds.length === 0 to prevent user submission.

frontend/src/pages/ConnectorsPage.jsx-146-152 (1)
146-152: ⚠️ Potential issue | 🟡 Minor

Keep share user list filtering consistent (exclude current user).

Lines 146–152 reintroduce the session user into `userList`, unlike `fetchUsers`.

🛠️ Suggested fix

```diff
-      const users =
-        usersResponse?.data?.members?.map((member) => ({
-          id: member.id,
-          email: member.email,
-        })) || [];
+      const users =
+        (usersResponse?.data?.members || [])
+          .filter((member) => member.id !== sessionDetails?.id)
+          .map((member) => ({
+            id: member.id,
+            email: member.email,
+          })) || [];
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/pages/ConnectorsPage.jsx` around lines 146 - 152, The users mapping is reintroducing the current session user into userList; update the code that builds users (the usersResponse → users mapping used before setUserList) to filter out the current session user ID (compare against session?.user?.id or the same currentUserId used in fetchUsers) so the resulting array excludes the session user before calling setUserList; you can apply .filter(member => member.id !== session?.user?.id) either before mapping or after mapping to ensure consistency with fetchUsers.
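The filter-before-map idea behind the fix above, shown as a tiny language-agnostic sketch in Python (the function and key names are hypothetical, not from the PR):

```python
# Hypothetical mirror of the ConnectorsPage fix: drop the current session
# user before building the share list, so every code path that populates
# userList applies the same exclusion.
def build_user_list(members, session_user_id):
    return [
        {"id": m["id"], "email": m["email"]}
        for m in members
        if m["id"] != session_user_id  # exclude the requesting user
    ]

members = [{"id": 1, "email": "a@x.io"}, {"id": 2, "email": "b@x.io"}]
print(build_user_list(members, 1))
```

Centralizing the exclusion in one helper like this is one way to keep the two fetch paths from drifting apart again.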
🧹 Nitpick comments (11)
frontend/src/components/custom-tools/add-custom-tool-form-modal/AddCustomToolFormModal.jsx (1)
80-88: Consider tailoring the 404 error message to the `isEdit` context.

The current message — "This resource is no longer accessible…" — is accurate for the edit path (the tool being edited was deleted or access was revoked). For the create path, a 404 more likely means a parent resource (e.g., the project/workspace) has been removed, so "this resource" is ambiguous from the user's perspective.
A small guard would make the UX copy more precise:
✨ Proposed improvement
```diff
-      if (err?.response?.status === 404) {
-        setOpen(false);
-        setAlertDetails({
-          type: "error",
-          content:
-            "This resource is no longer accessible. It may have been removed or your access has been revoked.",
-        });
-        return;
-      }
+      if (err?.response?.status === 404) {
+        setOpen(false);
+        setAlertDetails({
+          type: "error",
+          content: isEdit
+            ? "This resource is no longer accessible. It may have been removed or your access has been revoked."
+            : "Unable to create the project. The target resource may have been removed or your access has been revoked.",
+        });
+        return;
+      }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/components/custom-tools/add-custom-tool-form-modal/AddCustomToolFormModal.jsx` around lines 80 - 88, Update the 404 error handling in AddCustomToolFormModal.jsx to conditionally set a more precise alert based on the isEdit flag: when err?.response?.status === 404 keep setOpen(false) but call setAlertDetails with the existing "This resource is no longer accessible..." message for the edit path (isEdit === true), and for the create path (isEdit === false) replace the copy with a message that indicates a parent resource (project/workspace) may have been removed or access revoked; preserve the same alert shape and severity and reuse the same setAlertDetails/setOpen flow so behavior is unchanged aside from the copy.

frontend/src/components/workflows/workflow/workflow-service.js (1)
148-169: Consider using a local `const` per method instead of the shared module-level `options`.

The module-level `let options = {}` (line 4) is mutated (fully reassigned) by every method, including the two new ones. While JavaScript's single-threaded event loop prevents a true data race today, this pattern is fragile: if any future change introduces `await` between assignment and the `axiosPrivate` call, or if the scheduler queues two service calls in the same tick, the shared reference would silently reflect the last assignment. Using a local `const` per call site is the idiomatic, safer approach.

♻️ Proposed refactor (shown for the new methods; the same pattern applies to all methods in the file)

```diff
-  addCoOwner: (id, userId) => {
-    options = {
-      url: `${path}/workflow/${id}/owners/`,
-      method: "POST",
-      headers: {
-        "X-CSRFToken": csrfToken,
-        "Content-Type": "application/json",
-      },
-      data: { user_id: userId },
-    };
-    return axiosPrivate(options);
-  },
-  removeCoOwner: (id, userId) => {
-    options = {
-      url: `${path}/workflow/${id}/owners/${userId}/`,
-      method: "DELETE",
-      headers: {
-        "X-CSRFToken": csrfToken,
-      },
-    };
-    return axiosPrivate(options);
-  },
+  addCoOwner: (id, userId) => {
+    const options = {
+      url: `${path}/workflow/${id}/owners/`,
+      method: "POST",
+      headers: {
+        "X-CSRFToken": csrfToken,
+        "Content-Type": "application/json",
+      },
+      data: { user_id: userId },
+    };
+    return axiosPrivate(options);
+  },
+  removeCoOwner: (id, userId) => {
+    const options = {
+      url: `${path}/workflow/${id}/owners/${userId}/`,
+      method: "DELETE",
+      headers: {
+        "X-CSRFToken": csrfToken,
+      },
+    };
+    return axiosPrivate(options);
+  },
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/components/workflows/workflow/workflow-service.js` around lines 148 - 169, The module-level mutable options object is being reassigned by each method (including addCoOwner and removeCoOwner), which is fragile; change each method to create a local const options (e.g., inside addCoOwner and inside removeCoOwner) instead of using the shared module-level options variable so each call builds its own options object before calling axiosPrivate; update all other methods in this file the same way (replace usages of the module-level options with per-method local const options) to avoid accidental state sharing.

backend/prompt_studio/prompt_studio_core_v2/views.py (1)
122-124: Avoid the extra fetch (and duplicate owner add) in `create()`.

Lines 122–124 re-query the just-created object via `serializer.data`. `serializer.instance` is already available, and the creator→co_owner assignment is now centralized in `AuditSerializer.create`.

Proposed patch

```diff
-        instance = CustomTool.objects.get(pk=serializer.data["tool_id"])
-        if instance.created_by:
-            instance.co_owners.add(instance.created_by)
+        instance = serializer.instance
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/prompt_studio/prompt_studio_core_v2/views.py` around lines 122 - 124, In the create() path, stop re-fetching the object with CustomTool.objects.get(pk=serializer.data["tool_id"]) and instead use serializer.instance (the newly created CustomTool) to avoid the extra DB round-trip; also remove the redundant instance.co_owners.add(instance.created_by) since creator→co_owner handling is already centralized in AuditSerializer.create (so leave any co_owner logic to AuditSerializer.create and do not add the creator again in this view).

backend/adapter_processor_v2/migrations/0005_backfill_creator_to_co_owners.py (1)
4-8: Use `schema_editor.connection.alias` in this RunPython migration.

Line 4 accepts `schema_editor` but the query on Lines 6–8 does not use it; this can target the wrong DB in routed/multi-database environments.

Proposed patch

```diff
 def backfill_creator_to_co_owners(apps, schema_editor):
+    db_alias = schema_editor.connection.alias
     AdapterInstance = apps.get_model("adapter_processor_v2", "AdapterInstance")
-    for adapter in AdapterInstance.objects.filter(created_by__isnull=False):
+    for adapter in (
+        AdapterInstance.objects.using(db_alias)
+        .filter(created_by__isnull=False)
+        .iterator()
+    ):
         if not adapter.co_owners.filter(id=adapter.created_by_id).exists():
             adapter.co_owners.add(adapter.created_by)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/adapter_processor_v2/migrations/0005_backfill_creator_to_co_owners.py` around lines 4 - 8, The migration backfill_creator_to_co_owners should target the DB alias from schema_editor: capture alias = schema_editor.connection.alias and use it on all queryset/manager operations so the migration runs against the routed DB; e.g., call AdapterInstance.objects.using(alias).filter(created_by__isnull=False) to iterate adapters and use adapter.co_owners.using(alias).filter(...).exists() and adapter.co_owners.using(alias).add(...) when adding the creator as a co_owner; keep the function name backfill_creator_to_co_owners and the AdapterInstance lookup via apps.get_model as-is.

backend/pipeline_v2/migrations/0005_backfill_creator_to_co_owners.py (1)
4-8: Use the migration DB alias in the backfill query.

On Line 4, `schema_editor` is available but unused; this RunPython will always use the default DB connection. In multi-DB/routed setups, that can backfill the wrong database.

Proposed patch

```diff
 def backfill_creator_to_co_owners(apps, schema_editor):
+    db_alias = schema_editor.connection.alias
     Pipeline = apps.get_model("pipeline_v2", "Pipeline")
-    for pipeline in Pipeline.objects.filter(created_by__isnull=False):
+    for pipeline in (
+        Pipeline.objects.using(db_alias)
+        .filter(created_by__isnull=False)
+        .iterator()
+    ):
         if not pipeline.co_owners.filter(id=pipeline.created_by_id).exists():
             pipeline.co_owners.add(pipeline.created_by)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/pipeline_v2/migrations/0005_backfill_creator_to_co_owners.py` around lines 4 - 8, The migration currently ignores schema_editor and always uses the default DB; fix backfill_creator_to_co_owners by getting the migration DB alias via alias = schema_editor.connection.alias, run the query with Pipeline.objects.using(alias).filter(created_by__isnull=False) and when adding the M2M use the same DB: pipeline.co_owners.using(alias).add(pipeline.created_by) so both the filter and the M2M add operate on the migration connection.

frontend/src/components/workflows/workflow/Workflows.jsx (1)
375-384: Use null-safe defaults when passing `coOwnerData` into the modal.

This avoids potential render-time crashes if the modal opens before async co-owner data is fully populated.
💡 Proposed hardening
```diff
 <CoOwnerManagement
   open={coOwnerOpen}
   setOpen={setCoOwnerOpen}
   resourceId={coOwnerResourceId}
   resourceType="Workflow"
   allUsers={coOwnerAllUsers}
-  coOwners={coOwnerData.coOwners}
-  createdBy={coOwnerData.createdBy}
+  coOwners={coOwnerData?.coOwners ?? []}
+  createdBy={coOwnerData?.createdBy ?? null}
   loading={coOwnerLoading}
   onAddCoOwner={onAddCoOwner}
   onRemoveCoOwner={onRemoveCoOwner}
 />
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/components/workflows/workflow/Workflows.jsx` around lines 375 - 384, coOwnerData may be undefined when the modal renders, causing crashes; update the props passed to CoOwnerManagement (the coOwners and createdBy props derived from coOwnerData) to use null-safe defaults such as using optional chaining and fallback values (e.g., coOwnerData?.coOwners ?? [] and coOwnerData?.createdBy ?? null) so the modal always receives stable values even while async data is loading.

backend/workflow_manager/workflow_v2/migrations/0021_backfill_creator_to_co_owners.py (1)
4-8: Consider batching for large datasets.

The per-row `filter().exists()` + `add()` pattern issues 2 queries per workflow. For large tables this migration could be slow. You could batch using `prefetch_related('co_owners')` or a raw SQL `INSERT INTO ... SELECT` on the M2M through table. This is a one-time migration though, so acceptable if the table is small.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/workflow_manager/workflow_v2/migrations/0021_backfill_creator_to_co_owners.py` around lines 4 - 8, The migration backfill_creator_to_co_owners currently iterates workflows and does a per-row exists() + add() causing 2 queries per workflow; change it to batch the operation: fetch the through model via Workflow.co_owners.through, build a queryset or raw SQL that inserts missing (workflow_id, user_id) pairs in bulk (e.g., INSERT INTO through_table (workflow_id, user_id) SELECT id, created_by_id FROM workflow_v2_workflow WHERE created_by_id IS NOT NULL AND NOT EXISTS (...) ), or use Django to prefetch_related('co_owners') and accumulate missing relations then bulk_create ThroughModel instances; update the function backfill_creator_to_co_owners to perform the bulk insert (via schema_editor.connection.cursor().execute or bulk_create on ThroughModel) to avoid per-row queries and ensure deduplication by excluding existing relations before insert.

backend/permissions/permission.py (1)
17-25: Co-owner ownership checks are duplicated across three permission classes.

The `hasattr(obj, "co_owners") and obj.co_owners.filter(pk=request.user.pk).exists()` pattern is repeated identically in `IsOwner`, `IsOwnerOrSharedUser`, and `IsOwnerOrSharedUserOrSharedToOrg`. Consider extracting a small helper (similar to `_is_adapter_owner`) to reduce duplication:

♻️ Suggested helper extraction

```diff
+def _is_owner_or_co_owner(request: Request, obj: Any) -> bool:
+    """Check if user is creator or co-owner of a generic resource."""
+    if obj.created_by == request.user:
+        return True
+    return (
+        hasattr(obj, "co_owners")
+        and obj.co_owners.filter(pk=request.user.pk).exists()
+    )
+
+
 class IsOwner(permissions.BasePermission):
     ...
     def has_object_permission(self, request, view, obj):
-        if obj.created_by == request.user:
-            return True
-        if (
-            hasattr(obj, "co_owners")
-            and obj.co_owners.filter(pk=request.user.pk).exists()
-        ):
-            return True
-        return False
+        return _is_owner_or_co_owner(request, obj)
```

(Apply similarly to the other two classes.)
Also applies to: 39-49, 57-69
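A minimal plain-Python sketch of the extracted check's truth table, using hypothetical stand-in objects (`co_owners` is a plain set here, not a Django M2M manager, so this only illustrates the logic shape):

```python
# Hypothetical stand-ins for the permission refactor above; not Django code.
class Resource:
    def __init__(self, created_by, co_owners=None):
        self.created_by = created_by
        if co_owners is not None:
            # Attribute is only set when present, so hasattr() below matters,
            # mirroring models that lack a co_owners relation entirely.
            self.co_owners = set(co_owners)

def is_owner_or_co_owner(user, obj):
    # Creator always passes; otherwise fall back to the co-owner check,
    # guarded with hasattr for resources without a co_owners relation.
    if obj.created_by == user:
        return True
    return hasattr(obj, "co_owners") and user in obj.co_owners

print(is_owner_or_co_owner("bob", Resource("alice", {"bob"})))
```

Centralizing the check this way means a future change (e.g., excluding suspended users) happens in one place instead of three.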
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/permissions/permission.py` around lines 17 - 25, Extract the repeated co-owner check into a small helper function (e.g., _is_co_owner) and use it inside the has_object_permission implementations to remove duplication: implement _is_co_owner(obj, user) to return hasattr(obj, "co_owners") and obj.co_owners.filter(pk=user.pk).exists(), then replace the inline checks in IsOwner.has_object_permission, IsOwnerOrSharedUser.has_object_permission, and IsOwnerOrSharedUserOrSharedToOrg.has_object_permission with calls to _is_co_owner(obj, request.user). Ensure the helper is accessible to those classes (module-level or as a private function) and keep existing ownership logic intact.

frontend/src/components/deployments/api-deployment/ApiDeployment.jsx (1)
71-84: Use hook-owned `coOwnerResourceId` for modal targeting consistency.

The hook already tracks the active co-owner resource; binding the modal's `resourceId` to `selectedRow?.id` risks stale/misaligned targeting in edge flows.

💡 Suggested fix

```diff
   const {
     coOwnerOpen,
     setCoOwnerOpen,
     coOwnerData,
     coOwnerLoading,
     coOwnerAllUsers,
+    coOwnerResourceId,
     handleCoOwner: handleCoOwnerAction,
     onAddCoOwner,
     onRemoveCoOwner,
   } = useCoOwnerManagement({
@@
   <CoOwnerManagement
     open={coOwnerOpen}
     setOpen={setCoOwnerOpen}
-    resourceId={selectedRow?.id}
+    resourceId={coOwnerResourceId}
     resourceType="API Deployment"
```

Also applies to: 600-604
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/components/deployments/api-deployment/ApiDeployment.jsx` around lines 71 - 84, The modal resource is currently tied to selectedRow?.id which can get out of sync; instead use the hook-owned coOwnerResourceId provided by useCoOwnerManagement—update the destructuring to include coOwnerResourceId (if not already) and replace every modal's resourceId prop that references selectedRow?.id with coOwnerResourceId (also update the Add/Remove co-owner modal usages where handleCoOwnerAction/onAddCoOwner/onRemoveCoOwner are passed) to ensure modal targeting is driven by the hook.

backend/api_v2/serializers.py (1)
426-430: Avoid duplicated ownership logic in `APIDeploymentListSerializer`.

This class inherits `CoOwnerRepresentationMixin` but re-implements the same ownership fields via serializer methods. Pick one source of truth to avoid drift.

♻️ Minimal cleanup option

```diff
-from permissions.co_owner_serializers import CoOwnerRepresentationMixin
@@
-class APIDeploymentListSerializer(CoOwnerRepresentationMixin, ModelSerializer):
+class APIDeploymentListSerializer(ModelSerializer):
```

Also applies to: 449-466
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/api_v2/serializers.py` around lines 426 - 430, APIDeploymentListSerializer duplicates ownership logic already provided by CoOwnerRepresentationMixin: remove the SerializerMethodField declarations (created_by_email, co_owners_count, is_owner) and their corresponding get_... methods from APIDeploymentListSerializer and rely on CoOwnerRepresentationMixin as the single source of truth; if the mixin does not yet expose those exact field names, move/rename the logic into CoOwnerRepresentationMixin (and ensure it declares those SerializerFields) so both this serializer and the other affected serializer block (the similar duplicate at lines ~449-466) reference the mixin instead of re-implementing ownership methods.

frontend/src/pages/ConnectorsPage.jsx (1)
34-55: Stabilize `connectorCoOwnerService` with `useMemo` to avoid callback churn.

The service object is recreated on each render, causing all callbacks in `useCoOwnerManagement` that depend on `service` to also be recreated, leading to unnecessary re-renders of child components.

♻️ Suggested refactor

```diff
-import { useState, useEffect, useCallback } from "react";
+import { useState, useEffect, useCallback, useMemo } from "react";
 ...
-  const connectorCoOwnerService = {
-    getAllUsers: () => axiosPrivate.get(getUrl("users/")),
-    getSharedUsers: (id) =>
-      axiosPrivate.get(getUrl(`connector/users/${id}/`), {
-        headers: { "X-CSRFToken": sessionDetails?.csrfToken },
-      }),
-    addCoOwner: (id, userId) =>
-      axiosPrivate.post(
-        getUrl(`connector/${id}/owners/`),
-        { user_id: userId },
-        {
-          headers: {
-            "X-CSRFToken": sessionDetails?.csrfToken,
-            "Content-Type": "application/json",
-          },
-        }
-      ),
-    removeCoOwner: (id, userId) =>
-      axiosPrivate.delete(getUrl(`connector/${id}/owners/${userId}/`), {
-        headers: { "X-CSRFToken": sessionDetails?.csrfToken },
-      }),
-  };
+  const connectorCoOwnerService = useMemo(
+    () => ({
+      getAllUsers: () => axiosPrivate.get(getUrl("users/")),
+      getSharedUsers: (id) =>
+        axiosPrivate.get(getUrl(`connector/users/${id}/`), {
+          headers: { "X-CSRFToken": sessionDetails?.csrfToken },
+        }),
+      addCoOwner: (id, userId) =>
+        axiosPrivate.post(
+          getUrl(`connector/${id}/owners/`),
+          { user_id: userId },
+          {
+            headers: {
+              "X-CSRFToken": sessionDetails?.csrfToken,
+              "Content-Type": "application/json",
+            },
+          }
+        ),
+      removeCoOwner: (id, userId) =>
+        axiosPrivate.delete(getUrl(`connector/${id}/owners/${userId}/`), {
+          headers: { "X-CSRFToken": sessionDetails?.csrfToken },
+        }),
+    }),
+    [axiosPrivate, getUrl, sessionDetails?.csrfToken]
+  );
```
Verify each finding against the current code and only fix it if needed. In `@frontend/src/pages/ConnectorsPage.jsx` around lines 34 - 55, The connectorCoOwnerService object is recreated on every render causing dependent callbacks (e.g., in useCoOwnerManagement) to churn; wrap the creation of connectorCoOwnerService in React.useMemo so the object reference is stable unless its dependencies (axiosPrivate, getUrl, sessionDetails?.csrfToken) change — update the module to export or use the memoized connectorCoOwnerService and ensure consumers like useCoOwnerManagement receive this stable instance.
ℹ️ Review info
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Cache: Disabled due to Reviews > Disable Cache setting
Knowledge base: Disabled due to Reviews -> Disable Knowledge Base setting
📒 Files selected for processing (66)
- `backend/adapter_processor_v2/migrations/0004_adapterinstance_co_owners.py`
- `backend/adapter_processor_v2/migrations/0005_backfill_creator_to_co_owners.py`
- `backend/adapter_processor_v2/models.py`
- `backend/adapter_processor_v2/serializers.py`
- `backend/adapter_processor_v2/urls.py`
- `backend/adapter_processor_v2/views.py`
- `backend/api_v2/api_deployment_views.py`
- `backend/api_v2/migrations/0004_apideployment_co_owners.py`
- `backend/api_v2/migrations/0005_backfill_creator_to_co_owners.py`
- `backend/api_v2/models.py`
- `backend/api_v2/serializers.py`
- `backend/api_v2/urls.py`
- `backend/backend/serializers.py`
- `backend/connector_v2/migrations/0006_connectorinstance_co_owners.py`
- `backend/connector_v2/migrations/0007_backfill_creator_to_co_owners.py`
- `backend/connector_v2/models.py`
- `backend/connector_v2/serializers.py`
- `backend/connector_v2/urls.py`
- `backend/connector_v2/views.py`
- `backend/permissions/co_owner_serializers.py`
- `backend/permissions/co_owner_views.py`
- `backend/permissions/permission.py`
- `backend/permissions/tests/__init__.py`
- `backend/permissions/tests/conftest.py`
- `backend/pipeline_v2/migrations/0004_pipeline_co_owners.py`
- `backend/pipeline_v2/migrations/0005_backfill_creator_to_co_owners.py`
- `backend/pipeline_v2/models.py`
- `backend/pipeline_v2/serializers/crud.py`
- `backend/pipeline_v2/serializers/sharing.py`
- `backend/pipeline_v2/urls.py`
- `backend/pipeline_v2/views.py`
- `backend/prompt_studio/prompt_studio_core_v2/migrations/0007_customtool_co_owners.py`
- `backend/prompt_studio/prompt_studio_core_v2/migrations/0008_backfill_creator_to_co_owners.py`
- `backend/prompt_studio/prompt_studio_core_v2/models.py`
- `backend/prompt_studio/prompt_studio_core_v2/serializers.py`
- `backend/prompt_studio/prompt_studio_core_v2/urls.py`
- `backend/prompt_studio/prompt_studio_core_v2/views.py`
- `backend/workflow_manager/workflow_v2/migrations/0020_workflow_co_owners.py`
- `backend/workflow_manager/workflow_v2/migrations/0021_backfill_creator_to_co_owners.py`
- `backend/workflow_manager/workflow_v2/models/workflow.py`
- `backend/workflow_manager/workflow_v2/permissions.py`
- `backend/workflow_manager/workflow_v2/serializers.py`
- `backend/workflow_manager/workflow_v2/urls/workflow.py`
- `backend/workflow_manager/workflow_v2/views.py`
- `frontend/src/components/agency/configure-connector-modal/ConfigureConnectorModal.jsx`
- `frontend/src/components/agency/ds-settings-card/DsSettingsCard.jsx`
- `frontend/src/components/custom-tools/add-custom-tool-form-modal/AddCustomToolFormModal.jsx`
- `frontend/src/components/custom-tools/list-of-tools/ListOfTools.jsx`
- `frontend/src/components/custom-tools/tool-ide/ToolIde.jsx`
- `frontend/src/components/custom-tools/view-tools/ViewTools.jsx`
- `frontend/src/components/deployments/api-deployment/ApiDeployment.jsx`
- `frontend/src/components/deployments/api-deployment/api-deployments-service.js`
- `frontend/src/components/deployments/create-api-deployment-modal/CreateApiDeploymentModal.jsx`
- `frontend/src/components/pipelines-or-deployments/pipeline-service.js`
- `frontend/src/components/pipelines-or-deployments/pipelines/Pipelines.jsx`
- `frontend/src/components/tool-settings/tool-settings/ToolSettings.jsx`
- `frontend/src/components/widgets/co-owner-management/CoOwnerManagement.css`
- `frontend/src/components/widgets/co-owner-management/CoOwnerManagement.jsx`
- `frontend/src/components/widgets/list-view/ListView.css`
- `frontend/src/components/widgets/list-view/ListView.jsx`
- `frontend/src/components/widgets/share-permission/SharePermission.jsx`
- `frontend/src/components/workflows/workflow/Workflows.jsx`
- `frontend/src/components/workflows/workflow/workflow-service.js`
- `frontend/src/hooks/useCoOwnerManagement.jsx`
- `frontend/src/hooks/useExceptionHandler.jsx`
- `frontend/src/pages/ConnectorsPage.jsx`
```python
    def validate(self, attrs: dict[str, Any]) -> dict[str, Any]:
        """Validate removal won't leave resource without owners."""
        resource: models.Model = self.context["resource"]
        user_to_remove: User = self.context["user_to_remove"]

        # co_owners is the single source of truth (creator is always in it)
        if not resource.co_owners.filter(id=user_to_remove.id).exists():
            raise serializers.ValidationError("User is not an owner of this resource.")

        if resource.co_owners.count() <= 1:
            raise serializers.ValidationError(
                "Cannot remove the last owner. "
                "Add another owner before removing this one."
            )

        return attrs

    def save(self, **kwargs: Any) -> models.Model:
        """Remove user as owner. created_by is audit-only and never changes."""
        resource: models.Model = self.context["resource"]
        user_to_remove: User = self.context["user_to_remove"]
        resource.co_owners.remove(user_to_remove)
        return resource
```
Owner-removal safety check is non-atomic and can be bypassed by concurrent requests.
The count check happens in validate() and removal happens later in save(). Two concurrent removals can both pass validation and leave the resource with zero owners.
🔒 Suggested fix (atomic re-check at write time)
```diff
-from django.db import models
+from django.db import models, transaction
@@
     def save(self, **kwargs: Any) -> models.Model:
         """Remove user as owner. created_by is audit-only and never changes."""
         resource: models.Model = self.context["resource"]
         user_to_remove: User = self.context["user_to_remove"]
-        resource.co_owners.remove(user_to_remove)
-        return resource
+        with transaction.atomic():
+            locked = type(resource).objects.select_for_update().get(pk=resource.pk)
+            if locked.co_owners.count() <= 1:
+                raise serializers.ValidationError(
+                    "Cannot remove the last owner. "
+                    "Add another owner before removing this one."
+                )
+            locked.co_owners.remove(user_to_remove)
+            return locked
```
+ return locked🧰 Tools
🪛 Ruff (0.15.2)
[warning] 79-79: Avoid specifying long messages outside the exception class
(TRY003)
[warning] 82-85: Avoid specifying long messages outside the exception class
(TRY003)
[warning] 89-89: Unused method argument: kwargs
(ARG002)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/permissions/co_owner_serializers.py` around lines 72 - 94, The
current two-step check in validate() and the actual removal in save() is
race-prone; change save() to perform an atomic re-check-and-remove: open a
transaction (transaction.atomic), re-fetch the resource with a DB lock
(select_for_update) or otherwise lock the owning relation, verify the user is
still in resource.co_owners and that resource.co_owners.count() > 1, then call
resource.co_owners.remove(user_to_remove); if the re-check fails raise
serializers.ValidationError. Keep validate() as a fast pre-check but make save()
the authoritative, locked check-and-remove to prevent concurrent removals.
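The locked check-and-remove described above can be illustrated outside Django. The sketch below is a minimal in-memory analogue, not the PR's code: `OwnerRegistry` is a hypothetical stand-in for the `co_owners` relation, and its `threading.Lock` plays the role of the `select_for_update()` row lock, making the count check and the removal one critical section.

```python
import threading


class OwnerRegistry:
    """Illustrative stand-in for a DB-backed co-owners relation.

    Holding the lock across the check and the mutation means two
    concurrent removals cannot both observe "2 owners left" and then
    strip the resource down to zero owners.
    """

    def __init__(self, owners):
        self._owners = set(owners)
        self._lock = threading.Lock()

    def remove_owner(self, user_id):
        with self._lock:  # analogous to select_for_update() + transaction
            if user_id not in self._owners:
                raise ValueError("User is not an owner of this resource.")
            if len(self._owners) <= 1:
                raise ValueError("Cannot remove the last owner.")
            self._owners.remove(user_id)

    def owners(self):
        with self._lock:
            return set(self._owners)


registry = OwnerRegistry({"alice", "bob"})
registry.remove_owner("bob")  # fine: alice remains
try:
    registry.remove_owner("alice")  # rejected: would leave zero owners
except ValueError as exc:
    print(exc)
```

The same invariant holds in the Django version only if the re-check happens after the row lock is acquired; a check done before locking can go stale.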
```python
        logger.info(
            "Co-owner %s added to %s %s by %s",
            user.email,
            resource.__class__.__name__,
            resource.pk,
            request.user.email,
        )
```
Avoid logging raw email addresses in ownership audit logs.
These logs add persistent PII. Prefer stable identifiers (e.g., user IDs) and keep emails out of info logs.
🔧 Proposed fix
```diff
 logger.info(
-    "Co-owner %s added to %s %s by %s",
-    user.email,
+    "Co-owner user_id=%s added to %s %s by user_id=%s",
+    user.id,
     resource.__class__.__name__,
     resource.pk,
-    request.user.email,
+    request.user.id,
 )
@@
 logger.info(
-    "Owner %s removed from %s %s by %s",
-    user_to_remove.email,
+    "Owner user_id=%s removed from %s %s by user_id=%s",
+    user_to_remove.id,
     resource.__class__.__name__,
     resource.pk,
-    request.user.email,
+    request.user.id,
 )
```

Also applies to: 76-82
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/permissions/co_owner_views.py` around lines 39 - 45, Replace PII in
the ownership audit logger calls in co_owner_views.py: change the logger.info
invocations that currently interpolate user.email and request.user.email to use
stable non-PII identifiers (e.g., user.id and request.user.id or user.pk and
request.user.pk) while keeping resource.__class__.__name__ and resource.pk;
update both the log message template and the interpolated variables in the
logger.info calls (the occurrences around the shown block and the similar block
at lines 76-82) so emails are not written to persistent logs.
```python
            url_path="owners/(?P<user_id>[^/.]+)",
        )
```
🧩 Analysis chain
🏁 Script executed:

```shell
cat -n backend/permissions/co_owner_views.py | head -100
```

Repository: Zipstack/unstract
Length of output: 3335

🏁 Script executed:

```shell
# Verify the pattern behavior and check if User.id is indeed numeric
rg -n "User.*id.*auto|AutoField|BigAutoField" --type py backend/
```

Repository: Zipstack/unstract
Length of output: 2154

🏁 Script executed:

```shell
# Check the RemoveCoOwnerSerializer to see if authorization is handled there
rg -n "class RemoveCoOwnerSerializer" -A 30 backend/permissions/co_owner_serializers.py
```

Repository: Zipstack/unstract
Length of output: 1326
Constrain user_id route parameter to numeric values and scope user lookup to resource co-owners.
The route pattern (?P<user_id>[^/.]+) accepts non-numeric values (e.g., "abc"), and the lookup get_object_or_404(User, id=user_id) performs a global User query before the serializer validates co-owner status. This causes two security issues:
- User existence leakage: The global lookup exposes whether any user exists in the system, bypassing authorization checks
- Type mismatch errors: Non-numeric inputs cause database errors or 500s instead of 404s
Scope the lookup to the resource's co-owners to enforce authorization and constrain the route to numeric IDs.
🔧 Proposed fix
```diff
     @action(
         detail=True,
         methods=["delete"],
-        url_path="owners/(?P<user_id>[^/.]+)",
+        url_path=r"owners/(?P<user_id>\d+)",
     )
     def remove_co_owner(
         self, request: Request, pk: Any = None, user_id: Any = None
     ) -> Response:
         """Remove a co-owner from the resource."""
         resource = self.get_object()  # type: ignore[attr-defined]
-        user_to_remove = get_object_or_404(User, id=user_id)
+        user_to_remove = get_object_or_404(resource.co_owners, id=user_id)
```

Also applies to: 63-64
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/permissions/co_owner_views.py` around lines 56 - 57, Constrain the
route parameter to digits and scope user lookups to the resource's co-owners:
update the url_path regex from (?P<user_id>[^/.]+) to (?P<user_id>\d+) (and the
duplicate occurrence around lines 63-64), and replace any global
get_object_or_404(User, id=user_id) calls with a scoped lookup against the
resource instance (e.g., use self.get_object() to fetch the resource and then
query its co-owners/owners relation for id=user_id, returning 404 if not found)
so non-numeric IDs return 404 and only co-owners can be resolved.
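The difference between the two route patterns is easy to demonstrate with Python's `re` module. The two regexes below are taken from the review; the surrounding harness is illustrative only:

```python
import re

# Pattern from the current route: any segment without "/" or "."
loose = re.compile(r"^owners/(?P<user_id>[^/.]+)$")
# Pattern from the proposed fix: digits only
strict = re.compile(r"^owners/(?P<user_id>\d+)$")

# The loose pattern lets "abc" through to the view, where the integer
# lookup then fails at the database layer instead of returning a 404.
print(bool(loose.match("owners/abc")))   # True
print(bool(strict.match("owners/abc")))  # False

# Both patterns accept a plain numeric id.
print(strict.match("owners/42").group("user_id"))  # 42
```

With the strict pattern, non-numeric IDs never reach the view at all, so the router answers with a 404 before any database work happens.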
```python
    def get_co_owners(self, obj):
        """Get list of co-owners with their details."""
        return UserSerializer(obj.co_owners.all(), many=True).data
```
Use an email-based co-owner payload for contract consistency.
get_co_owners currently uses UserSerializer (id, username), while other co-owner endpoints in this PR expose id + email. Keeping this endpoint different can break/fragment the shared co-owner UI behavior.
💡 Proposed fix
```diff
     def get_co_owners(self, obj):
         """Get list of co-owners with their details."""
-        return UserSerializer(obj.co_owners.all(), many=True).data
+        return [{"id": user.id, "email": user.email} for user in obj.co_owners.all()]
```
+ return [{"id": user.id, "email": user.email} for user in obj.co_owners.all()]🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/pipeline_v2/serializers/sharing.py` around lines 33 - 35,
get_co_owners currently returns full UserSerializer(obj.co_owners.all(),
many=True).data which emits id/username; change it to return an email-based
payload for contract consistency by using the serializer that exposes id and
email (e.g., UserEmailSerializer or CoOwnerEmailSerializer) or create a small
serializer that returns {"id", "email"} and call it with obj.co_owners.all(),
many=True so get_co_owners returns id+email for each co-owner.
```jsx
isPending ? (
  <Typography.Text>
    <DeleteOutlined
      className="action-icon-buttons"
      onClick={() => handleRemovePending(item?.id)}
    />
  </Typography.Text>
```
Use semantic buttons for remove actions (keyboard/a11y blocker).
Lines 137-143 and 161-169 use bare icons as controls; these are not accessible interactive elements.
♿ Suggested fix
```diff
-import { Avatar, List, Modal, Popconfirm, Select, Typography } from "antd";
+import { Avatar, Button, List, Modal, Popconfirm, Select, Typography } from "antd";
 ...
-<Typography.Text>
-  <DeleteOutlined
-    className="action-icon-buttons"
-    onClick={() => handleRemovePending(item?.id)}
-  />
-</Typography.Text>
+<Button
+  type="text"
+  icon={<DeleteOutlined />}
+  className="action-icon-buttons"
+  aria-label={`Remove pending co-owner ${item?.username || item?.email}`}
+  onClick={() => handleRemovePending(item?.id)}
+/>
 ...
-<Typography.Text>
-  <DeleteOutlined
-    className="action-icon-buttons"
-    style={{ opacity: removingUserId === item?.id ? 0.4 : 1 }}
-  />
-</Typography.Text>
+<Button
+  type="text"
+  icon={<DeleteOutlined />}
+  className="action-icon-buttons"
+  aria-label={`Remove co-owner ${item?.username || item?.email}`}
+  style={{ opacity: removingUserId === item?.id ? 0.4 : 1 }}
+/>
```

Also applies to: 161-169
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@frontend/src/components/widgets/co-owner-management/CoOwnerManagement.jsx`
around lines 137 - 143, Replace the bare DeleteOutlined icons used as controls
with semantic, keyboard-focusable buttons (e.g., an HTML <button> or Ant Design
<Button type="text">) so the remove actions are accessible; attach the existing
handlers (handleRemovePending for the pending item and the corresponding handler
used in the other block) to the button's onClick, add an appropriate aria-label
(e.g., "Remove pending co-owner" / "Remove co-owner"), and ensure the element
supports keyboard activation (Enter/Space) and visible focus styling; update
both occurrences around handleRemovePending and the matching remove-existing
handler to follow this pattern.
```jsx
      try {
        const [usersResponse, sharedUsersResponse] = await Promise.all([
          service.getAllUsers(),
          service.getSharedUsers(resourceId),
        ]);

        const userList =
          usersResponse?.data?.members?.map((member) => ({
            id: member.id,
            email: member.email,
          })) || [];

        setCoOwnerAllUsers(userList);
        setCoOwnerData({
          coOwners: sharedUsersResponse.data?.co_owners || [],
          createdBy: sharedUsersResponse.data?.created_by || null,
        });
      } catch (err) {
```
🧩 Analysis chain
🏁 Script executed:

```shell
find . -name "useCoOwnerManagement.jsx" -type f
```

Repository: Zipstack/unstract
Length of output: 106

🏁 Script executed:

```shell
cat -n ./frontend/src/hooks/useCoOwnerManagement.jsx
```

Repository: Zipstack/unstract
Length of output: 5208
Prevent stale async responses from overwriting current resource state.
Lines 50–67 have a race condition when handleCoOwner is triggered rapidly for different resources. Since setCoOwnerResourceId is updated immediately but the async calls resolve unpredictably, an older response from one resource can overwrite coOwnerData for a newly selected resource, leaving the UI in an inconsistent state where the displayed data doesn't match the selected resource ID.
Use a request sequence number to discard stale responses before updating state, ensuring only the most recent request's data is displayed.
Proposed patch
```diff
-import { useState, useCallback } from "react";
+import { useState, useCallback, useRef } from "react";
 @@
 function useCoOwnerManagement({ service, setAlertDetails, onListRefresh }) {
   const handleException = useExceptionHandler();
+  const coOwnerRequestSeq = useRef(0);
 @@
   const handleCoOwner = useCallback(
     async (resourceId) => {
+      const requestSeq = ++coOwnerRequestSeq.current;
       setCoOwnerResourceId(resourceId);
       setCoOwnerLoading(true);
       setCoOwnerOpen(true);
 @@
         const userList =
           usersResponse?.data?.members?.map((member) => ({
             id: member.id,
             email: member.email,
           })) || [];
+        if (requestSeq !== coOwnerRequestSeq.current) return;
         setCoOwnerAllUsers(userList);
         setCoOwnerData({
           coOwners: sharedUsersResponse.data?.co_owners || [],
           createdBy: sharedUsersResponse.data?.created_by || null,
         });
       } catch (err) {
+        if (requestSeq !== coOwnerRequestSeq.current) return;
         setAlertDetails(
           handleException(err, "Unable to fetch co-owner information")
         );
         setCoOwnerOpen(false);
       } finally {
-        setCoOwnerLoading(false);
+        if (requestSeq === coOwnerRequestSeq.current) {
+          setCoOwnerLoading(false);
+        }
       }
     },
```
Verify each finding against the current code and only fix it if needed.
In `@frontend/src/hooks/useCoOwnerManagement.jsx` around lines 50 - 67, The async
fetch in handleCoOwner can race and older responses may overwrite state; add a
local request sequence counter (e.g., coOwnerRequestSeq) that you increment
before calling service.getAllUsers/service.getSharedUsers, capture the current
seq in the async context, and before calling setCoOwnerAllUsers or
setCoOwnerData verify the captured seq matches the latest coOwnerRequestSeq;
only update state when they match so stale responses are discarded and
setCoOwnerResourceId/setCoOwnerData reflect the most recent resource selection.
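The sequence-counter pattern described above is language-agnostic. A minimal Python sketch (the `CoOwnerPanel` class and its method names are hypothetical, not the hook's real API) shows why only the holder of the latest ticket may commit its result:

```python
class CoOwnerPanel:
    """Illustrative model of the hook's state, not the React code itself.

    Each fetch takes a ticket from a monotonically increasing counter
    and may only commit its result if its ticket is still the latest,
    so a slow response for an old resource cannot overwrite a newer one.
    """

    def __init__(self):
        self._seq = 0
        self.resource_id = None
        self.data = None

    def begin_fetch(self, resource_id):
        self._seq += 1
        self.resource_id = resource_id
        return self._seq  # ticket captured by the in-flight request

    def commit(self, ticket, data):
        if ticket != self._seq:
            return False  # stale response: discard silently
        self.data = data
        return True


panel = CoOwnerPanel()
t1 = panel.begin_fetch("res-1")
t2 = panel.begin_fetch("res-2")  # user clicked a second resource

panel.commit(t1, {"co_owners": ["old"]})  # arrives late: ignored
panel.commit(t2, {"co_owners": ["new"]})  # latest ticket: applied
print(panel.data)  # {'co_owners': ['new']}
```

In the hook, `useRef` provides the counter that survives re-renders; the captured closure variable plays the role of the ticket.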
```jsx
    if (err?.response?.status === 404) {
      return {
        type: "error",
        content:
          "This resource is no longer accessible. It may have been removed or your access has been revoked.",
        title: title,
        duration: duration,
      };
    }
```
404 early-return silently discards backend error messages for all 404s app-wide.
Placing this guard before the err?.response?.data block means every 404 across the entire application — not just co-owner endpoints — now ignores whatever the backend actually said. DRF 404s routinely carry {"detail": "..."} (e.g., "Not found.", or a custom message from get_object_or_404). Those were previously surfaced via the commonErrorMessage extraction at line 52-53; now they are all silently replaced with the hardcoded string. The phrase "your access has been revoked" is also semantically wrong for unrelated 404 causes (typo'd URLs, genuinely deleted objects, etc.).
Prefer letting the existing commonErrorMessage path take precedence and using the generic string only as a fallback:
🐛 Proposed fix — demote to fallback inside the existing data block
```diff
-    if (err?.response?.status === 404) {
-      return {
-        type: "error",
-        content:
-          "This resource is no longer accessible. It may have been removed or your access has been revoked.",
-        title: title,
-        duration: duration,
-      };
-    }
-
     if (err?.response?.data) {
       const responseData = err.response.data;
       const { type, errors } = responseData;
       // First, try to extract common API error messages (for DRF and other APIs)
       const commonErrorMessage =
         responseData.error || responseData.detail || responseData.message;
       if (commonErrorMessage) {
         return {
           title: title,
           type: "error",
           content: commonErrorMessage,
           duration: duration,
         };
       }
+      // For 404s with no specific message, surface a generic accessibility hint.
+      if (err.response.status === 404) {
+        return {
+          type: "error",
+          content:
+            "This resource is no longer accessible. It may have been removed or your access has been revoked.",
+          title: title,
+          duration: duration,
+        };
+      }
+
       // Then handle specific error types
       switch (type) {
```
Verify each finding against the current code and only fix it if needed.
In `@frontend/src/hooks/useExceptionHandler.jsx` around lines 37 - 45, The current
early-return 404 block in useExceptionHandler (checking err?.response?.status
=== 404) prematurely overrides backend error payloads; remove or demote this
early-return and instead treat the hardcoded 404 message as a fallback inside
the existing err?.response?.data handling path: prefer the extracted
commonErrorMessage (the logic around commonErrorMessage / err?.response?.data)
first, and only return the generic "This resource is no longer accessible..."
message if no backend message exists; ensure you stop using the misleading "your
access has been revoked" phrasing so it isn't applied to unrelated 404s.
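The precedence the fix establishes (backend message first, generic 404 text only as a fallback) can be sketched as a plain function. `error_message` and its arguments are illustrative, not the hook's real signature:

```python
def error_message(status, payload):
    """Prefer whatever the backend said; use the generic 404 text only
    as a fallback. `payload` mirrors err.response.data from the hook.
    """
    backend_msg = (
        payload.get("error") or payload.get("detail") or payload.get("message")
    )
    if backend_msg:
        return backend_msg
    if status == 404:
        return (
            "This resource is no longer accessible. "
            "It may have been removed or your access has been revoked."
        )
    return "Something went wrong"


# A DRF 404 with a detail message keeps its specific text...
print(error_message(404, {"detail": "Adapter not found."}))
# ...while a bare 404 falls back to the generic hint.
print(error_message(404, {}))
```

Ordering the checks this way means no status code can ever mask a message the backend actually sent.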
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
backend/api_v2/serializers.py (1)
429-469: 🛠️ Refactor suggestion | 🟠 Major
`CoOwnerRepresentationMixin` is inherited but its purpose is bypassed — DRY violation

`APIDeploymentListSerializer` inherits `CoOwnerRepresentationMixin` (whose single method is `add_co_owner_fields`) but then manually re-declares `co_owners_count` and `is_owner` as `SerializerMethodField`s and hand-rolls `get_created_by_email`, `get_co_owners_count`, and `get_is_owner`. `WorkflowSerializer` delegates to `add_co_owner_fields` correctly in `to_representation`. These two serializers should follow the same pattern.

The manual `get_created_by_email` at lines 452-457 also inherits the same semantic bug as the mixin (`first()` is not guaranteed to be the creator — see note in `co_owner_serializers.py`).

♻️ Proposed refactor — align with WorkflowSerializer's approach
```diff
 class APIDeploymentListSerializer(CoOwnerRepresentationMixin, ModelSerializer):
     workflow_name = CharField(source="workflow.workflow_name", read_only=True)
-    created_by_email = SerializerMethodField()
-    co_owners_count = SerializerMethodField()
-    is_owner = SerializerMethodField()

     class Meta:
         model = APIDeployment
         fields = [
             "id",
             "workflow",
             "workflow_name",
             "display_name",
             "description",
             "is_active",
             "api_endpoint",
             "api_name",
             "created_by",
             "created_by_email",
             "co_owners_count",
             "is_owner",
         ]

-    def get_created_by_email(self, obj):
-        """Get the email of the primary owner (first co-owner)."""
-        first_co_owner = obj.co_owners.first()
-        if first_co_owner:
-            return first_co_owner.email
-        return obj.created_by.email if obj.created_by else None
-
-    def get_co_owners_count(self, obj):
-        """Get the number of co-owners."""
-        return obj.co_owners.count()
-
-    def get_is_owner(self, obj):
-        """Check if the current user is a co-owner."""
-        request = self.context.get("request")
-        if request and hasattr(request, "user"):
-            return obj.co_owners.filter(pk=request.user.pk).exists()
-        return False
+    def to_representation(self, instance):
+        representation = super().to_representation(instance)
+        request = self.context.get("request")
+        self.add_co_owner_fields(instance, representation, request)
+        return representation
```
Verify each finding against the current code and only fix it if needed. In `@backend/api_v2/serializers.py` around lines 429 - 469, APIDeploymentListSerializer duplicates co-owner logic from CoOwnerRepresentationMixin and implements buggy get_created_by_email/get_co_owners_count/get_is_owner; replace the manual SerializerMethodFields and their getters by invoking the mixin’s add_co_owner_fields in to_representation (same pattern as WorkflowSerializer) and remove get_created_by_email, get_co_owners_count, and get_is_owner; also ensure add_co_owner_fields (from CoOwnerRepresentationMixin) handles the correct primary owner semantics (see co_owner_serializers.py) so the serializer no longer hand-rolls ownership fields and uses the shared implementation.
♻️ Duplicate comments (1)
backend/permissions/co_owner_serializers.py (1)
90-112: Non-atomic owner-count check and removal (already raised)

The count guard in `validate()` and the actual `co_owners.remove()` in `save()` are separate operations; two concurrent DELETE requests can both pass validation and leave the resource owner-less.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/permissions/co_owner_serializers.py` around lines 90 - 112, The validate/save sequence is vulnerable to a race: both validate() and save() use resource.co_owners separately so concurrent removals can pass validation and leave no owners; fix by making the removal atomic inside save(): wrap the operation in a database transaction, re-check the owner count while locking the resource (e.g., reload resource with select_for_update() or equivalent) and raise serializers.ValidationError if count <= 1, then perform resource.co_owners.remove(user_to_remove) and return resource; keep validate() for quick checks but ensure the authoritative check + removal happens under the transaction in save().
🧹 Nitpick comments (5)
backend/permissions/tests/test_co_owners.py (3)
92-100: Missing positive case: creator should still have permission on objects without `co_owners`

The test correctly verifies `other_user` is denied, but there is no assertion that the `creator` is still granted permission on the same spec-constrained resource. A future regression that removes the `created_by == request.user` shortcut in `IsOwner` would go undetected.

✅ Suggested additional test case

```python
def test_creator_has_permission_without_co_owners_field(self) -> None:
    """Creator retains permission even when resource lacks co_owners."""
    resource = Mock(spec=["created_by"])
    resource.created_by = self.creator
    request = self._make_request(self.creator)
    self.assertTrue(
        self.permission.has_object_permission(request, None, resource)
    )
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/permissions/tests/test_co_owners.py` around lines 92 - 100, Add a positive test asserting the creator still has permission for objects missing co_owners: keep the same Mock(resource) with spec=["created_by"] and resource.created_by = self.creator, call request = self._make_request(self.creator), and assert self.permission.has_object_permission(request, None, resource) returns True (e.g., add a new test method like test_creator_has_permission_without_co_owners_field that mirrors test_object_without_co_owners_field but uses self.creator).
103-139: `TestIsOwnerOrSharedUserPermission` and `TestIsOwnerOrSharedUserOrSharedToOrgPermission` are missing `test_creator_has_permission`

Both permission classes grant access to the creator via `obj.created_by == request.user` before the co-owner check, but neither test class exercises that path. A regression to either permission class's creator shortcut would be invisible.

✅ Suggested test cases (one each)

```python
# In TestIsOwnerOrSharedUserPermission
def test_creator_has_permission(self) -> None:
    request = self._make_request(self.creator)
    self.assertTrue(
        self.permission.has_object_permission(request, None, self.resource)
    )

# In TestIsOwnerOrSharedUserOrSharedToOrgPermission
def test_creator_has_permission(self) -> None:
    request = self._make_request(self.creator)
    self.assertTrue(
        self.permission.has_object_permission(request, None, self.resource)
    )
```

Also applies to: 142-168
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/permissions/tests/test_co_owners.py` around lines 103 - 139, Add a test named test_creator_has_permission to both TestIsOwnerOrSharedUserPermission and TestIsOwnerOrSharedUserOrSharedToOrgPermission that calls request = self._make_request(self.creator) and asserts self.permission.has_object_permission(request, None, self.resource) is True; this exercises the creator shortcut in the permission classes (look for the has_object_permission method, self.permission, self.creator, self._make_request and self.resource in each test class).
263-286: `test_save_adds_co_owner` masks the double `User.objects.get` call

`mock_user_objects.get` is patched as a single return value, so both the call in `validate_user_id` and the one in `save()` silently succeed. The test passes but doesn't surface the redundant DB query. Consider asserting `mock_user_objects.get.call_count == 1` after the fix to `AddCoOwnerSerializer` is applied; for now it documents the accepted behavior.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/permissions/tests/test_co_owners.py` around lines 263 - 286, The test masks a duplicate DB call because mock_user_objects.get is a single return value used in both AddCoOwnerSerializer.validate_user_id and AddCoOwnerSerializer.save; change the test to detect the duplicate by either configuring mock_user_objects.get.side_effect (e.g., return new_user then raise if called again) or add an explicit assertion after serializer.save(): assert mock_user_objects.get.call_count == 1, referencing mock_user_objects.get, AddCoOwnerSerializer.validate_user_id and AddCoOwnerSerializer.save so the test will fail if two User.objects.get calls occur.backend/permissions/co_owner_serializers.py (2)
78-84: `**kwargs` is declared but never used in both `save()` methods

Ruff ARG002 flags this at lines 78 and 107. Drop the parameter or suppress intentionally.

♻️ Proposed fix

```diff
-    def save(self, **kwargs: Any) -> models.Model:
+    def save(self) -> models.Model:
```

Apply the same change to `RemoveCoOwnerSerializer.save()`.

Also applies to: 107-112
RemoveCoOwnerSerializer.save().Also applies to: 107-112
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/permissions/co_owner_serializers.py` around lines 78 - 84, Remove the unused **kwargs parameter from the save methods to satisfy ARG002: change the signatures of AddCoOwnerSerializer.save and RemoveCoOwnerSerializer.save to def save(self) -> models.Model: (and keep the body unchanged), or if you must preserve the signature for compatibility, explicitly accept **kwargs and add a suppression comment (e.g., # noqa: ARG002) above the method; apply the same change to both save implementations.
70-84: `User.objects.get` is called twice — once in `validate_user_id`, once in `save()`

`validate_user_id` already fetches the `User` instance but discards it (returns the raw `int`). `save()` then queries again. Store the resolved user during validation to avoid the redundant round-trip.

♻️ Proposed refactor
```diff
     def validate_user_id(self, value: int) -> int:
         ...
         user = User.objects.get(id=value)
         if resource.co_owners.filter(id=user.id).exists():
             raise serializers.ValidationError("User is already an owner.")
-        return value
+        # Store resolved user so save() doesn't need a second query
+        self._validated_user = user
+        return value

     def save(self, **kwargs: Any) -> models.Model:
         resource: models.Model = self.context["resource"]
-        user_id = self.validated_data["user_id"]
-        user = User.objects.get(id=user_id)
+        user = self._validated_user
         resource.co_owners.add(user)
         return resource
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/permissions/co_owner_serializers.py` around lines 70 - 84, The validator validate_user_id currently fetches the User with User.objects.get(id=value) but returns the raw id, causing save() to fetch the User again; change validate_user_id to store the retrieved User instance into self.validated_data (e.g., set self.validated_data["user"] = user) and return the id (or the user as appropriate), then update save() to use self.validated_data["user"] instead of re-querying via User.objects.get — update references to resource.co_owners.add(...) to pass the stored User instance; keep function names validate_user_id and save and the resource/co_owners usage intact.
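The cache-during-validation pattern above can be shown with a dependency-free sketch; the serializer class, fake DB, and query log are all illustrative stand-ins, not the project's DRF code:

```python
# Fake "database" plus a query log so round-trips are countable.
FAKE_USERS = {7: {"id": 7, "email": "owner@example.com"}}
QUERY_LOG = []

def db_get(user_id):
    QUERY_LOG.append(user_id)
    return FAKE_USERS[user_id]

class AddCoOwnerLikeSerializer:
    """Toy analogue of a DRF serializer that validates, then saves."""

    def __init__(self, user_id):
        self.user_id = user_id
        self._validated_user = None

    def validate_user_id(self):
        user = db_get(self.user_id)      # the one and only lookup
        self._validated_user = user      # cached for save()
        return self.user_id

    def save(self, co_owners):
        co_owners.append(self._validated_user)   # no second db_get
        return co_owners

serializer = AddCoOwnerLikeSerializer(7)
serializer.validate_user_id()
owners = serializer.save([])
```

The query log ends with exactly one entry, which is the property the review comment asks the real serializer to preserve.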
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@backend/permissions/co_owner_serializers.py`:
- Around line 39-44: The code currently uses instance.co_owners.first() which
can return a non-creator; instead prefer the actual creator: set
created_by_email to instance.created_by.email if instance.created_by is present,
otherwise fall back to the first co_owner (instance.co_owners.first()) or None.
Update the assignment for created_by_email to check instance.created_by first
(use its email), then use first_co_owner.email only when created_by is unset.
---
Outside diff comments:
In `@backend/api_v2/serializers.py`:
- Around line 429-469: APIDeploymentListSerializer duplicates co-owner logic
from CoOwnerRepresentationMixin and implements buggy
get_created_by_email/get_co_owners_count/get_is_owner; replace the manual
SerializerMethodFields and their getters by invoking the mixin’s
add_co_owner_fields in to_representation (same pattern as WorkflowSerializer)
and remove get_created_by_email, get_co_owners_count, and get_is_owner; also
ensure add_co_owner_fields (from CoOwnerRepresentationMixin) handles the correct
primary owner semantics (see co_owner_serializers.py) so the serializer no
longer hand-rolls ownership fields and uses the shared implementation.
---
Duplicate comments:
In `@backend/permissions/co_owner_serializers.py`:
- Around line 90-112: The validate/save sequence is vulnerable to a race: both
validate() and save() use resource.co_owners separately so concurrent removals
can pass validation and leave no owners; fix by making the removal atomic inside
save(): wrap the operation in a database transaction, re-check the owner count
while locking the resource (e.g., reload resource with select_for_update() or
equivalent) and raise serializers.ValidationError if count <= 1, then perform
resource.co_owners.remove(user_to_remove) and return resource; keep validate()
for quick checks but ensure the authoritative check + removal happens under the
transaction in save().
---
Nitpick comments:
In `@backend/permissions/co_owner_serializers.py`:
- Around line 78-84: Remove the unused **kwargs parameter from the save methods
to satisfy ARG002: change the signatures of AddCoOwnerSerializer.save and
RemoveCoOwnerSerializer.save to def save(self) -> models.Model: (and keep the
body unchanged), or if you must preserve the signature for compatibility,
explicitly accept **kwargs and add a suppression comment (e.g., # noqa: ARG002)
above the method; apply the same change to both save implementations.
- Around line 70-84: The validator validate_user_id currently fetches the User
with User.objects.get(id=value) but returns the raw id, causing save() to fetch
the User again; change validate_user_id to store the retrieved User instance
into self.validated_data (e.g., set self.validated_data["user"] = user) and
return the id (or the user as appropriate), then update save() to use
self.validated_data["user"] instead of re-querying via User.objects.get — update
references to resource.co_owners.add(...) to pass the stored User instance; keep
function names validate_user_id and save and the resource/co_owners usage
intact.
In `@backend/permissions/tests/test_co_owners.py`:
- Around line 92-100: Add a positive test asserting the creator still has
permission for objects missing co_owners: keep the same Mock(resource) with
spec=["created_by"] and resource.created_by = self.creator, call request =
self._make_request(self.creator), and assert
self.permission.has_object_permission(request, None, resource) returns True
(e.g., add a new test method like
test_creator_has_permission_without_co_owners_field that mirrors
test_object_without_co_owners_field but uses self.creator).
- Around line 103-139: Add a test named test_creator_has_permission to both
TestIsOwnerOrSharedUserPermission and
TestIsOwnerOrSharedUserOrSharedToOrgPermission that calls request =
self._make_request(self.creator) and asserts
self.permission.has_object_permission(request, None, self.resource) is True;
this exercises the creator shortcut in the permission classes (look for the
has_object_permission method, self.permission, self.creator, self._make_request
and self.resource in each test class).
- Around line 263-286: The test masks a duplicate DB call because
mock_user_objects.get is a single return value used in both
AddCoOwnerSerializer.validate_user_id and AddCoOwnerSerializer.save; change the
test to detect the duplicate by either configuring
mock_user_objects.get.side_effect (e.g., return new_user then raise if called
again) or add an explicit assertion after serializer.save(): assert
mock_user_objects.get.call_count == 1, referencing mock_user_objects.get,
AddCoOwnerSerializer.validate_user_id and AddCoOwnerSerializer.save so the test
will fail if two User.objects.get calls occur.
ℹ️ Review info
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Cache: Disabled due to Reviews > Disable Cache setting
Knowledge base: Disabled due to Reviews -> Disable Knowledge Base setting
📒 Files selected for processing (4)
- backend/api_v2/serializers.py
- backend/permissions/co_owner_serializers.py
- backend/permissions/tests/test_co_owners.py
- backend/workflow_manager/workflow_v2/serializers.py
```python
first_co_owner = instance.co_owners.first()
representation["created_by_email"] = (
    first_co_owner.email
    if first_co_owner
    else (instance.created_by.email if instance.created_by else None)
)
```
created_by_email may return a non-creator's email
co_owners.first() returns the first row by database ordering, which is not guaranteed to be the creator. After the backfill migration the creator is one of many co-owners, but their position in the M2M relation is non-deterministic. The field name created_by_email implies the original creator, so clients (and the frontend) will silently display the wrong person.
The intent appears to be: show the creator's email, with a fallback when created_by is unset. The co-owner count on the mixin is a separate concern.
🐛 Proposed fix
```diff
-        first_co_owner = instance.co_owners.first()
-        representation["created_by_email"] = (
-            first_co_owner.email
-            if first_co_owner
-            else (instance.created_by.email if instance.created_by else None)
-        )
+        representation["created_by_email"] = (
+            instance.created_by.email if instance.created_by else None
+        )
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/permissions/co_owner_serializers.py` around lines 39 - 44, The code
currently uses instance.co_owners.first() which can return a non-creator;
instead prefer the actual creator: set created_by_email to
instance.created_by.email if instance.created_by is present, otherwise fall back
to the first co_owner (instance.co_owners.first()) or None. Update the
assignment for created_by_email to check instance.created_by first (use its
email), then use first_co_owner.email only when created_by is unset.
Test Results
Summary
Runner Tests - Full Report
SDK1 Tests - Full Report
Actionable comments posted: 3
♻️ Duplicate comments (5)
backend/permissions/co_owner_serializers.py (2)
39-46: ⚠️ Potential issue | 🟠 Major

Prefer `created_by` first for `created_by_email` to avoid wrong owner display.

Line 39 uses `instance.co_owners.first()`, which is nondeterministic and can return a non-creator. For `created_by_email`, use `instance.created_by` first, then fall back to the first co-owner only when `created_by` is unset.

🐛 Proposed fix
```diff
-        first_co_owner = instance.co_owners.first()
-        if first_co_owner:
-            created_by_email = first_co_owner.email
-        elif instance.created_by:
+        first_co_owner = instance.co_owners.first()
+        if instance.created_by:
             created_by_email = instance.created_by.email
+        elif first_co_owner:
+            created_by_email = first_co_owner.email
         else:
             created_by_email = None
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/permissions/co_owner_serializers.py` around lines 39 - 46, The code sets created_by_email from instance.co_owners.first() which can be nondeterministic; change the logic in the serializer that builds representation["created_by_email"] to prefer instance.created_by.email first, and only if instance.created_by is falsy fall back to instance.co_owners.first().email (or None if neither exists); update the block referencing instance.co_owners.first(), instance.created_by, and representation["created_by_email"] accordingly.
109-114: ⚠️ Potential issue | 🟠 Major

Make owner removal atomic; the current check-remove flow is race-prone.
Line 109 removes without an authoritative locked re-check. Two concurrent removals can both pass Line 101 and leave zero owners.
🔒 Proposed fix (transaction + row lock + re-check in save)
```diff
-from django.db import models
+from django.db import models, transaction
@@
     def save(self, **kwargs: Any) -> models.Model:
         """Remove user as owner. created_by is audit-only and never changes."""
         resource: models.Model = self.context["resource"]
         user_to_remove: User = self.context["user_to_remove"]
-        resource.co_owners.remove(user_to_remove)
-        return resource
+        with transaction.atomic():
+            locked_resource = (
+                type(resource).objects.select_for_update().get(pk=resource.pk)
+            )
+            if not locked_resource.co_owners.filter(id=user_to_remove.id).exists():
+                raise serializers.ValidationError(
+                    "User is not an owner of this resource."
+                )
+            if locked_resource.co_owners.count() <= 1:
+                raise serializers.ValidationError(
+                    "Cannot remove the last owner. "
+                    "Add another owner before removing this one."
+                )
+            locked_resource.co_owners.remove(user_to_remove)
+            return locked_resource
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/permissions/co_owner_serializers.py` around lines 109 - 114, The save method in co_owner_serializers.py is race-prone because it removes a co-owner without a transactional row lock and re-check; wrap the removal in a transaction.atomic, re-fetch the resource with a SELECT FOR UPDATE (e.g., via resource.__class__.objects.select_for_update().get(pk=resource.pk) or similar) to lock the row, re-check the authoritative set/count of co-owners to ensure removal won't leave zero owners, raise a ValidationError if it would, otherwise remove user_to_remove and save the resource, and return it; use the existing self.context["resource"] and self.context["user_to_remove"] to locate the resource and user.
frontend/src/components/widgets/list-view/ListView.jsx (1)
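The lock-and-re-check idea generalises beyond Django; a toy sketch with an in-process lock standing in for `SELECT FOR UPDATE` (the owner set and user names are illustrative):

```python
import threading

co_owners = {"alice", "bob"}
lock = threading.Lock()

def remove_owner(user):
    # Check and remove inside one critical section, so two concurrent
    # removals cannot both observe "two owners left" and strand zero.
    with lock:
        if user not in co_owners:
            return "not an owner"
        if len(co_owners) <= 1:
            return "cannot remove the last owner"
        co_owners.remove(user)
        return "removed"

results = []
threads = [
    threading.Thread(target=lambda u=u: results.append(remove_owner(u)))
    for u in ("alice", "bob")
]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Whichever thread enters the critical section second sees only one owner left and is refused, which is exactly the invariant the serializer's `validate()`-then-`save()` split fails to guarantee.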
47-50: ⚠️ Potential issue | 🟠 Major

`handleCoOwner(event, tool)` still breaks single-argument handlers in Pipelines and ApiDeployment.

`handleCoOwner` is invoked with `(event, tool)` on Line 49, but `ApiDeployment.jsx` and `Pipelines.jsx` define `handleCoOwner = (record) => { const row = record || selectedRow; ... }` — a single-argument signature. The event object is received as `record`, so `row.id` resolves to `undefined` and `handleCoOwnerAction(undefined)` is dispatched.

🐛 Proposed fix — dispatch based on handler arity
```diff
   const handleCoOwnerClick = (event, tool) => {
     event.stopPropagation();
-    handleCoOwner(event, tool);
+    if (!handleCoOwner) return;
+    if (handleCoOwner.length >= 2) {
+      handleCoOwner(event, tool);
+    } else {
+      handleCoOwner(tool);
+    }
   };
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/components/widgets/list-view/ListView.jsx` around lines 47 - 50, handleCoOwnerClick currently always calls handleCoOwner(event, tool) which breaks single-argument handlers like the ones in ApiDeployment.jsx and Pipelines.jsx; update handleCoOwnerClick to check the arity of the passed handler (handleCoOwner) and dispatch accordingly: if handleCoOwner.length === 1 call handleCoOwner(tool) (so record/tool is passed as the single arg), otherwise call handleCoOwner(event, tool); reference the functions handleCoOwnerClick and handleCoOwner and ensure the new branch preserves event.stopPropagation().
frontend/src/components/deployments/api-deployment/ApiDeployment.jsx (1)
171-196: ⚠️ Potential issue | 🟡 Minor

Add an explicit accessible name for the owner trigger.

Line 172 uses a custom button-like `<span>` with keyboard handlers, but it still lacks an action-oriented accessible label. Add `aria-label` so screen readers announce intent, not just owner text.

Suggested fix
```diff
 <span
   style={{
     cursor: "pointer",
     color: "#1890ff",
     textDecoration: "underline",
     textDecorationStyle: "dotted",
   }}
   role="button"
   tabIndex={0}
+  aria-label={`Manage co-owners for ${
+    isOwner ? "You" : record?.created_by_email || "Unknown"
+  }`}
   onClick={() => {
     setSelectedRow(record);
     handleCoOwner(record);
   }}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/components/deployments/api-deployment/ApiDeployment.jsx` around lines 171 - 196, The span used as the clickable owner trigger (the element that calls setSelectedRow(record) and handleCoOwner(record)) needs an explicit action-oriented accessible name: add an aria-label to the span (e.g., "Manage co-owners for {record?.created_by_email || 'Unknown'}" or "Manage co-owners for You" when isOwner) so screen readers announce the intent rather than just the owner text; keep the existing role="button", tabIndex and keyboard handlers but ensure the aria-label is computed from record and isOwner so it remains descriptive and unique per row.
frontend/src/components/pipelines-or-deployments/pipelines/Pipelines.jsx (1)
689-715: Prefer a semantic `<button>` over an ARIA-emulated `<span>` for the owner trigger.

This still relies on manual keyboard semantics. A native button simplifies accessibility and interaction handling.

♿ Suggested refactor
```diff
 <Tooltip title="Manage Co-Owners">
-  <span
+  <button
+    type="button"
     style={{
       cursor: "pointer",
       color: "#1890ff",
       textDecoration: "underline",
       textDecorationStyle: "dotted",
+      background: "none",
+      border: "none",
+      padding: 0,
     }}
-    role="button"
-    tabIndex={0}
     onClick={() => {
       setSelectedPorD(record);
       handleCoOwner(record);
     }}
-    onKeyDown={(e) => {
-      if (e.key === "Enter" || e.key === " ") {
-        e.preventDefault();
-        setSelectedPorD(record);
-        handleCoOwner(record);
-      }
-    }}
+    aria-label={`Manage co-owners for ${record?.pipeline_name || "pipeline"}`}
   >
     {isOwner ? "You" : email?.split("@")[0] || "Unknown"}
     {record?.co_owners_count > 1 && ` +${record.co_owners_count - 1}`}
-  </span>
+  </button>
 </Tooltip>
```

```bash
#!/bin/bash
# Verify interactive non-semantic spans in this component
rg -nP --type=jsx -C2 '<span[^>]*(role="button"|onClick=)' frontend/src/components/pipelines-or-deployments/pipelines/Pipelines.jsx
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/components/pipelines-or-deployments/pipelines/Pipelines.jsx` around lines 689 - 715, Replace the non-semantic interactive <span> used to trigger owner management with a native <button> to leverage built-in keyboard and accessibility behavior: update the element that currently uses role="button", tabIndex, onKeyDown and onClick (around the setSelectedPorD(record); handleCoOwner(record); calls) to a <button> that preserves the existing styling and click handler, remove the manual key handling and role/tabIndex attributes, and ensure the button text generation still uses isOwner, email?.split("@")[0] (or "Unknown") and the co_owners_count suffix; keep references to setSelectedPorD and handleCoOwner so the click behavior is unchanged.
🧹 Nitpick comments (7)
frontend/src/components/deployments/api-deployment/ApiDeployment.jsx (1)
609-620: Pass safe defaults to CoOwnerManagement props.

Lines 615/616 directly dereference `coOwnerData`; defaulting avoids transient undefined access issues during initial load.

Suggested fix
```diff
 <CoOwnerManagement
   open={coOwnerOpen}
   setOpen={setCoOwnerOpen}
   resourceId={selectedRow?.id}
   resourceType="API Deployment"
   allUsers={coOwnerAllUsers}
-  coOwners={coOwnerData.coOwners}
-  createdBy={coOwnerData.createdBy}
+  coOwners={coOwnerData?.coOwners ?? []}
+  createdBy={coOwnerData?.createdBy}
   loading={coOwnerLoading}
   onAddCoOwner={onAddCoOwner}
   onRemoveCoOwner={onRemoveCoOwner}
 />
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/components/deployments/api-deployment/ApiDeployment.jsx` around lines 609 - 620, The CoOwnerManagement props dereference coOwnerData and selectedRow directly causing transient undefined access; update the JSX to pass safe defaults: use selectedRow?.id ?? null for resourceId, coOwnerAllUsers ?? [] for allUsers, (coOwnerData?.coOwners ?? []) for coOwners, and (coOwnerData?.createdBy ?? null) for createdBy (keep existing loading, onAddCoOwner, onRemoveCoOwner props unchanged) so the component always receives defined values during initial loads.
backend/connector_v2/migrations/0007_backfill_creator_to_co_owners.py (1)
4-21: Stream this backfill and reduce transaction scope.

Line 6 currently processes rows without chunking and the migration is fully atomic. For high row counts, this can cause long locks during deploy. Recommend `iterator(chunk_size=...)` and `Migration.atomic = False`.

Proposed change
```diff
 def backfill_creator_to_co_owners(apps, schema_editor):
     connector_instance_model = apps.get_model("connector_v2", "ConnectorInstance")
-    for connector in connector_instance_model.objects.filter(created_by__isnull=False):
+    queryset = (
+        connector_instance_model.objects.filter(created_by__isnull=False)
+        .select_related("created_by")
+        .iterator(chunk_size=1000)
+    )
+    for connector in queryset:
         if not connector.co_owners.filter(id=connector.created_by_id).exists():
             connector.co_owners.add(connector.created_by)
@@
 class Migration(migrations.Migration):
+    atomic = False
     dependencies = [
         ("connector_v2", "0006_connectorinstance_co_owners"),
     ]
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/connector_v2/migrations/0007_backfill_creator_to_co_owners.py` around lines 4 - 21, The migration backfill backfill_creator_to_co_owners currently iterates all ConnectorInstance rows in one transaction causing long locks; modify it to set Migration.atomic = False and iterate using connector_instance_model.objects.filter(created_by__isnull=False).iterator(chunk_size=500) (or another reasonable chunk_size) inside backfill_creator_to_co_owners, and keep the reverse_code as migrations.RunPython.noop so the operation is non-atomic and processes rows in small chunks to reduce transaction scope and lock time.
backend/pipeline_v2/migrations/0005_backfill_creator_to_co_owners.py (1)
4-21: Use chunked reads and non-atomic execution for safer backfills.

Line 6 currently scans in a standard loop with default atomic migration behavior. This is fine for small data, but risky for larger tables during release windows. Suggest `iterator(chunk_size=...)` and `Migration.atomic = False`.

Proposed change
```diff
 def backfill_creator_to_co_owners(apps, schema_editor):
     pipeline_model = apps.get_model("pipeline_v2", "Pipeline")
-    for pipeline in pipeline_model.objects.filter(created_by__isnull=False):
+    queryset = (
+        pipeline_model.objects.filter(created_by__isnull=False)
+        .select_related("created_by")
+        .iterator(chunk_size=1000)
+    )
+    for pipeline in queryset:
         if not pipeline.co_owners.filter(id=pipeline.created_by_id).exists():
             pipeline.co_owners.add(pipeline.created_by)
@@
 class Migration(migrations.Migration):
+    atomic = False
     dependencies = [
         ("pipeline_v2", "0004_pipeline_co_owners"),
     ]
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/pipeline_v2/migrations/0005_backfill_creator_to_co_owners.py` around lines 4 - 21, The migration currently iterates over pipeline_model.objects.filter(created_by__isnull=False) in an atomic operation which can lock large tables; update the backfill_creator_to_co_owners function to iterate using pipeline_model.objects.filter(created_by__isnull=False).iterator(chunk_size=1000) (or another sensible chunk_size) and reference pipeline.created_by_id as before, and set Migration.atomic = False on the Migration class to disable wrapping the whole operation in a transaction so the backfill runs safely in chunks.
backend/adapter_processor_v2/migrations/0005_backfill_creator_to_co_owners.py (1)
4-21: Make this migration friendlier to large datasets.

Line 6 iterates unchunked and runs in a single atomic migration transaction. For large adapter counts this can extend lock times. Recommend chunked iteration and setting `atomic = False`.

Proposed change
```diff
 def backfill_creator_to_co_owners(apps, schema_editor):
     adapter_instance_model = apps.get_model("adapter_processor_v2", "AdapterInstance")
-    for adapter in adapter_instance_model.objects.filter(created_by__isnull=False):
+    queryset = (
+        adapter_instance_model.objects.filter(created_by__isnull=False)
+        .select_related("created_by")
+        .iterator(chunk_size=1000)
+    )
+    for adapter in queryset:
         if not adapter.co_owners.filter(id=adapter.created_by_id).exists():
             adapter.co_owners.add(adapter.created_by)
@@
 class Migration(migrations.Migration):
+    atomic = False
     dependencies = [
         ("adapter_processor_v2", "0004_adapterinstance_co_owners"),
     ]
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/adapter_processor_v2/migrations/0005_backfill_creator_to_co_owners.py` around lines 4 - 21, The migration backfill_creator_to_co_owners currently iterates all AdapterInstance rows inside a single atomic transaction causing long locks; modify backfill_creator_to_co_owners to process records in chunks (e.g., iterate over AdapterInstance ids via values_list or iterator and then load/update batches of a fixed size like 500–1000) and perform the co_owners.add logic per batch to avoid holding a long transaction, and set Migration.atomic = False on the Migration class so the migration is not run in one big transaction; keep references to AdapterInstance via apps.get_model and ensure existence checks use filtered batch queries to minimize DB hits.
backend/workflow_manager/workflow_v2/migrations/0021_backfill_creator_to_co_owners.py (1)
4-21: Consider chunked iteration + non-atomic migration for large datasets.

Line 6 plus the default atomic behavior can make this migration expensive in production when many workflows exist. Use batched iterator processing and set `atomic = False` to reduce lock duration.

Proposed change
```diff
 def backfill_creator_to_co_owners(apps, schema_editor):
     workflow_model = apps.get_model("workflow_v2", "Workflow")
-    for workflow in workflow_model.objects.filter(created_by__isnull=False):
+    queryset = (
+        workflow_model.objects.filter(created_by__isnull=False)
+        .select_related("created_by")
+        .iterator(chunk_size=1000)
+    )
+    for workflow in queryset:
         if not workflow.co_owners.filter(id=workflow.created_by_id).exists():
             workflow.co_owners.add(workflow.created_by)
@@
 class Migration(migrations.Migration):
+    atomic = False
     dependencies = [
         ("workflow_v2", "0020_workflow_co_owners"),
     ]
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/workflow_manager/workflow_v2/migrations/0021_backfill_creator_to_co_owners.py` around lines 4 - 21, Change the migration to be non-atomic and iterate in batches: set Migration.atomic = False on the Migration class and modify backfill_creator_to_co_owners to use chunked queryset iteration (e.g., iterate workflow_model.objects.filter(created_by__isnull=False).order_by("pk").iterator() in batches or use slicing with a page size) so you add created_by to co_owners in chunks (check workflow.co_owners.filter(id=workflow.created_by_id).exists() and then workflow.co_owners.add(workflow.created_by)) to avoid long locks; keep the same RunPython(backfill_creator_to_co_owners, reverse_code=...) operation entry.
backend/api_v2/migrations/0005_backfill_creator_to_co_owners.py (1)
4-21: Harden migration runtime behavior for production scale.

Line 6 processes rows in a non-chunked loop under a single transaction. For large deployment tables, this can inflate migration time and lock contention. Prefer `iterator(chunk_size=...)` and `atomic = False`.

Proposed change
```diff
 def backfill_creator_to_co_owners(apps, schema_editor):
     api_deployment_model = apps.get_model("api_v2", "APIDeployment")
-    for deployment in api_deployment_model.objects.filter(created_by__isnull=False):
+    queryset = (
+        api_deployment_model.objects.filter(created_by__isnull=False)
+        .select_related("created_by")
+        .iterator(chunk_size=1000)
+    )
+    for deployment in queryset:
         if not deployment.co_owners.filter(id=deployment.created_by_id).exists():
             deployment.co_owners.add(deployment.created_by)
@@
 class Migration(migrations.Migration):
+    atomic = False
     dependencies = [
         ("api_v2", "0004_apideployment_co_owners"),
     ]
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/api_v2/migrations/0005_backfill_creator_to_co_owners.py` around lines 4 - 21, The migration iterates all APIDeployment rows in a single transaction and non-chunked loop, causing long locks at scale; modify backfill_creator_to_co_owners to iterate using deployments = api_deployment_model.objects.filter(created_by__isnull=False).iterator(chunk_size=1000) (or another sensible chunk) and update each chunked record as currently done, and make the Migration class non-atomic by setting atomic = False so RunPython(backfill_creator_to_co_owners, reverse_code=...) runs outside a single DB transaction; update the Migration.operations entry (which uses RunPython) and the Migration class accordingly.
backend/prompt_studio/prompt_studio_core_v2/migrations/0008_backfill_creator_to_co_owners.py (1)
4-21: Backfill should stream rows and avoid a single long transaction.

Line 6 iterates without chunking, and the migration runs in one atomic transaction by default. On large tables this can increase lock time and migration timeout risk. Consider streaming with `iterator()` and setting `atomic = False` for this data backfill.

Proposed change
```diff
 def backfill_creator_to_co_owners(apps, schema_editor):
     custom_tool_model = apps.get_model("prompt_studio_core_v2", "CustomTool")
-    for tool in custom_tool_model.objects.filter(created_by__isnull=False):
+    queryset = (
+        custom_tool_model.objects.filter(created_by__isnull=False)
+        .select_related("created_by")
+        .iterator(chunk_size=1000)
+    )
+    for tool in queryset:
         if not tool.co_owners.filter(id=tool.created_by_id).exists():
             tool.co_owners.add(tool.created_by)
@@
 class Migration(migrations.Migration):
+    atomic = False
     dependencies = [
         ("prompt_studio_core_v2", "0007_customtool_co_owners"),
     ]
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/prompt_studio/prompt_studio_core_v2/migrations/0008_backfill_creator_to_co_owners.py` around lines 4 - 21, The migration currently loops all CustomTool rows in one atomic transaction; update backfill_creator_to_co_owners to stream results by using custom_tool_model.objects.filter(created_by__isnull=False).iterator() so rows are fetched in chunks and ensure each tool is processed without loading the whole queryset, and mark the migration non-atomic by setting Migration.atomic = False (referencing the Migration class in the file) so the RunPython backfill runs outside a single long transaction; keep the existing co_owners.add(tool.created_by) logic but iterate with iterator() to reduce memory and lock time.
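The chunking idea recurring in the migration comments above can be sketched without Django; the list and chunk size stand in for a queryset and `chunk_size`:

```python
# Process a large result set in fixed-size batches so memory stays
# bounded and each unit of work is short (analogue of Django's
# QuerySet.iterator(chunk_size=...)).

def chunks(seq, size):
    """Yield consecutive slices of `seq` no longer than `size`."""
    for start in range(0, len(seq), size):
        yield seq[start:start + size]

rows = list(range(10))          # stand-in for a big table
batch_sizes = [len(batch) for batch in chunks(rows, 4)]
```

Every row is visited exactly once, but no more than one batch is held at a time, which is what keeps per-chunk transactions (and lock windows) short.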
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@frontend/src/components/deployments/api-deployment/ApiDeployment.jsx`:
- Around line 356-359: The handleCoOwner function can call handleCoOwnerAction
with an undefined id when record is falsy and selectedRow is empty/stale; update
handleCoOwner to validate the resolved row and its id before calling
handleCoOwnerAction (e.g., compute const row = record || selectedRow; if (!row
|| !row.id) return or surface a user error/notification) so handleCoOwnerAction
is only invoked with a valid id (refer to handleCoOwner, handleCoOwnerAction,
selectedRow, record, row.id).
In `@frontend/src/components/pipelines-or-deployments/pipelines/Pipelines.jsx`:
- Around line 362-365: The helper handleCoOwner can call handleCoOwnerAction
with an undefined id when called without a valid record; update handleCoOwner to
guard against missing row or missing row.id (use the record || selectedPorD
logic already present), and if no id is available return early (or show an
appropriate UI/error) instead of calling handleCoOwnerAction; reference the
handleCoOwner function, the selectedPorD variable, and handleCoOwnerAction to
locate and implement the check.
In `@frontend/src/components/widgets/list-view/ListView.jsx`:
- Around line 54-55: The badge math unconditionally subtracts one from
item.co_owners_count which causes an off-by-one when the current user isn't the
owner; update the computation of extra so you only subtract 1 when item.is_owner
is true (i.e., keep the existing subtraction for the "Me" case but show the full
co_owners_count when not owner) — adjust the expression that sets extra
(referencing extra, item, co_owners_count, and is_owner) to conditionally
subtract based on item.is_owner.
---
Duplicate comments:
In `@backend/permissions/co_owner_serializers.py`:
- Around line 39-46: The code sets created_by_email from
instance.co_owners.first() which can be nondeterministic; change the logic in
the serializer that builds representation["created_by_email"] to prefer
instance.created_by.email first, and only if instance.created_by is falsy fall
back to instance.co_owners.first().email (or None if neither exists); update the
block referencing instance.co_owners.first(), instance.created_by, and
representation["created_by_email"] accordingly.
- Around line 109-114: The save method in co_owner_serializers.py is race-prone
because it removes a co-owner without a transactional row lock and re-check;
wrap the removal in a transaction.atomic, re-fetch the resource with a SELECT
FOR UPDATE (e.g., via
resource.__class__.objects.select_for_update().get(pk=resource.pk) or similar)
to lock the row, re-check the authoritative set/count of co-owners to ensure
removal won't leave zero owners, raise a ValidationError if it would, otherwise
remove user_to_remove and save the resource, and return it; use the existing
self.context["resource"] and self.context["user_to_remove"] to locate the
resource and user.
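A framework-free analogue of the locked re-check, where a `threading.Lock` stands in for the row lock that `transaction.atomic` plus `select_for_update()` would provide in Django (class and exception names are illustrative):

```python
import threading


class LastOwnerError(Exception):
    """Removal would leave the resource with no co-owners."""


class Resource:
    def __init__(self, co_owners):
        self._lock = threading.Lock()  # stand-in for SELECT ... FOR UPDATE
        self.co_owners = set(co_owners)

    def remove_co_owner(self, user_id):
        with self._lock:
            # Re-check the authoritative set while holding the lock, so a
            # concurrent removal cannot race past the "at least one owner"
            # invariant between the check and the delete.
            if user_id in self.co_owners and len(self.co_owners) <= 1:
                raise LastOwnerError("cannot remove the last co-owner")
            self.co_owners.discard(user_id)
```

The essential point is that the count check and the removal happen under the same lock; doing the check outside it reintroduces the race.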
In `@frontend/src/components/deployments/api-deployment/ApiDeployment.jsx`:
- Around line 171-196: The span used as the clickable owner trigger (the element
that calls setSelectedRow(record) and handleCoOwner(record)) needs an explicit
action-oriented accessible name: add an aria-label to the span (e.g., "Manage
co-owners for {record?.created_by_email || 'Unknown'}" or "Manage co-owners for
You" when isOwner) so screen readers announce the intent rather than just the
owner text; keep the existing role="button", tabIndex and keyboard handlers but
ensure the aria-label is computed from record and isOwner so it remains
descriptive and unique per row.
In `@frontend/src/components/pipelines-or-deployments/pipelines/Pipelines.jsx`:
- Around line 689-715: Replace the non-semantic interactive <span> used to
trigger owner management with a native <button> to leverage built-in keyboard
and accessibility behavior: update the element that currently uses
role="button", tabIndex, onKeyDown and onClick (around the
setSelectedPorD(record); handleCoOwner(record); calls) to a <button> that
preserves the existing styling and click handler, remove the manual key handling
and role/tabIndex attributes, and ensure the button text generation still uses
isOwner, email?.split("@")[0] (or "Unknown") and the co_owners_count suffix;
keep references to setSelectedPorD and handleCoOwner so the click behavior is
unchanged.
In `@frontend/src/components/widgets/list-view/ListView.jsx`:
- Around line 47-50: handleCoOwnerClick currently always calls
handleCoOwner(event, tool) which breaks single-argument handlers like the ones
in ApiDeployment.jsx and Pipelines.jsx; update handleCoOwnerClick to check the
arity of the passed handler (handleCoOwner) and dispatch accordingly: if
handleCoOwner.length === 1 call handleCoOwner(tool) (so record/tool is passed as
the single arg), otherwise call handleCoOwner(event, tool); reference the
functions handleCoOwnerClick and handleCoOwner and ensure the new branch
preserves event.stopPropagation().
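The arity-based dispatch has a direct Python analogue using `inspect.signature` (a sketch; the JS version reads `handleCoOwner.length` instead):

```python
import inspect


def dispatch_co_owner(handler, event, tool):
    # Single-parameter handlers receive only the record; two-parameter
    # handlers get (event, tool), mirroring `handler.length === 1` in JS.
    if len(inspect.signature(handler).parameters) == 1:
        return handler(tool)
    return handler(event, tool)
```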
---
Nitpick comments:
In
`@backend/adapter_processor_v2/migrations/0005_backfill_creator_to_co_owners.py`:
- Around line 4-21: The migration backfill_creator_to_co_owners currently
iterates all AdapterInstance rows inside a single atomic transaction causing
long locks; modify backfill_creator_to_co_owners to process records in chunks
(e.g., iterate over AdapterInstance ids via values_list or iterator and then
load/update batches of a fixed size like 500–1000) and perform the co_owners.add
logic per batch to avoid holding a long transaction, and set Migration.atomic =
False on the Migration class so the migration is not run in one big transaction;
keep references to AdapterInstance via apps.get_model and ensure existence
checks use filtered batch queries to minimize DB hits.
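The chunking pattern shared by this and the following migration comments can be sketched without Django (the batch size and the `add_co_owner` callback are placeholders for `instance.co_owners.add()`; with `Migration.atomic = False`, each batch can commit separately instead of holding one long transaction):

```python
def chunked(items, size):
    # Yield fixed-size batches of the input sequence.
    for start in range(0, len(items), size):
        yield items[start : start + size]


def backfill_creator_to_co_owners(records, add_co_owner, size=500):
    # `records` are (pk, created_by_id) pairs; `add_co_owner` stands in
    # for the per-row M2M insert performed by the real migration.
    for batch in chunked(records, size):
        for pk, creator_id in batch:
            if creator_id is not None:
                add_co_owner(pk, creator_id)
```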
In `@backend/api_v2/migrations/0005_backfill_creator_to_co_owners.py`:
- Around line 4-21: The migration iterates all APIDeployment rows in a single
transaction and non-chunked loop, causing long locks at scale; modify
backfill_creator_to_co_owners to iterate using deployments =
api_deployment_model.objects.filter(created_by__isnull=False).iterator(chunk_size=1000)
(or another sensible chunk) and update each chunked record as currently done,
and make the Migration class non-atomic by setting atomic = False so
RunPython(backfill_creator_to_co_owners, reverse_code=...) runs outside a single
DB transaction; update the Migration.operations entry (which uses RunPython) and
the Migration class accordingly.
In `@backend/connector_v2/migrations/0007_backfill_creator_to_co_owners.py`:
- Around line 4-21: The migration backfill backfill_creator_to_co_owners
currently iterates all ConnectorInstance rows in one transaction causing long
locks; modify it to set Migration.atomic = False and iterate using
connector_instance_model.objects.filter(created_by__isnull=False).iterator(chunk_size=500)
(or another reasonable chunk_size) inside backfill_creator_to_co_owners, and
keep the reverse_code as migrations.RunPython.noop so the operation is
non-atomic and processes rows in small chunks to reduce transaction scope and
lock time.
In `@backend/pipeline_v2/migrations/0005_backfill_creator_to_co_owners.py`:
- Around line 4-21: The migration currently iterates over
pipeline_model.objects.filter(created_by__isnull=False) in an atomic operation
which can lock large tables; update the backfill_creator_to_co_owners function
to iterate using
pipeline_model.objects.filter(created_by__isnull=False).iterator(chunk_size=1000)
(or another sensible chunk_size) and reference pipeline.created_by_id as before,
and set Migration.atomic = False on the Migration class to disable wrapping the
whole operation in a transaction so the backfill runs safely in chunks.
In
`@backend/prompt_studio/prompt_studio_core_v2/migrations/0008_backfill_creator_to_co_owners.py`:
- Around line 4-21: The migration currently loops all CustomTool rows in one
atomic transaction; update backfill_creator_to_co_owners to stream results by
using custom_tool_model.objects.filter(created_by__isnull=False).iterator() so
rows are fetched in chunks and ensure each tool is processed without loading the
whole queryset, and mark the migration non-atomic by setting Migration.atomic =
False (referencing the Migration class in the file) so the RunPython backfill
runs outside a single long transaction; keep the existing
co_owners.add(tool.created_by) logic but iterate with iterator() to reduce
memory and lock time.
In
`@backend/workflow_manager/workflow_v2/migrations/0021_backfill_creator_to_co_owners.py`:
- Around line 4-21: Change the migration to be non-atomic and iterate in
batches: set Migration.atomic = False on the Migration class and modify
backfill_creator_to_co_owners to use chunked queryset iteration (e.g., iterate
workflow_model.objects.filter(created_by__isnull=False).order_by("pk").iterator()
in batches or use slicing with a page size) so you add created_by to co_owners
in chunks (check workflow.co_owners.filter(id=workflow.created_by_id).exists()
and then workflow.co_owners.add(workflow.created_by)) to avoid long locks; keep
the same RunPython(backfill_creator_to_co_owners, reverse_code=...) operation
entry.
In `@frontend/src/components/deployments/api-deployment/ApiDeployment.jsx`:
- Around line 609-620: The CoOwnerManagement props dereference coOwnerData and
selectedRow directly causing transient undefined access; update the JSX to pass
safe defaults: use selectedRow?.id ?? null for resourceId, coOwnerAllUsers ?? []
for allUsers, (coOwnerData?.coOwners ?? []) for coOwners, and
(coOwnerData?.createdBy ?? null) for createdBy (keep existing loading,
onAddCoOwner, onRemoveCoOwner props unchanged) so the component always receives
defined values during initial loads.
ℹ️ Review info
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Cache: Disabled due to Reviews > Disable Cache setting
Knowledge base: Disabled due to Reviews -> Disable Knowledge Base setting
📒 Files selected for processing (12)
- `backend/adapter_processor_v2/migrations/0005_backfill_creator_to_co_owners.py`
- `backend/api_v2/migrations/0005_backfill_creator_to_co_owners.py`
- `backend/connector_v2/migrations/0007_backfill_creator_to_co_owners.py`
- `backend/permissions/co_owner_serializers.py`
- `backend/pipeline_v2/migrations/0005_backfill_creator_to_co_owners.py`
- `backend/prompt_studio/prompt_studio_core_v2/migrations/0008_backfill_creator_to_co_owners.py`
- `backend/workflow_manager/workflow_v2/migrations/0021_backfill_creator_to_co_owners.py`
- `frontend/src/components/deployments/api-deployment/ApiDeployment.jsx`
- `frontend/src/components/pipelines-or-deployments/pipelines/Pipelines.jsx`
- `frontend/src/components/widgets/co-owner-management/CoOwnerManagement.jsx`
- `frontend/src/components/widgets/list-view/ListView.css`
- `frontend/src/components/widgets/list-view/ListView.jsx`
🚧 Files skipped from review as they are similar to previous changes (2)
- frontend/src/components/widgets/list-view/ListView.css
- frontend/src/components/widgets/co-owner-management/CoOwnerManagement.jsx



What
- `co_owners` M2M field is the single source of truth for ownership; creators are always backfilled
- `CoOwnerManagementMixin` + shared serializers for consistent backend behavior
- `CoOwnerManagement` frontend widget integrated into all resource pages
How
Backend
- `co_owners` `ManyToManyField` added to 7 models: `AdapterInstance`, `APIDeployment`, `ConnectorInstance`, `Pipeline`, `Workflow`, `CustomTool`, `AgenticProject`
- `CoOwnerManagementMixin` in `backend/permissions/co_owner_views.py` with `POST <pk>/owners/` and `DELETE <pk>/owners/<uid>/` actions
- `AddCoOwnerSerializer` / `RemoveCoOwnerSerializer` in `backend/permissions/co_owner_serializers.py`
- `IsOwner`, `IsOwnerOrSharedUser`, `IsOwnerOrSharedUserOrSharedToOrg` permission classes to recognize co-owners
- `for_user()` manager updated: `Q(created_by=user)` → `Q(co_owners=user)`
- `AuditSerializer.create()` auto-adds creator to `co_owners` for any model with the field

Frontend
- `CoOwnerManagement` component (`co-owner-management/CoOwnerManagement.jsx`) — reusable modal with searchable user select and a co-owner list with delete
- `ListView` widget enhanced with an "Owned By: Me +N" display and a `handleCoOwner` prop
- Integrated into `ToolSettings.jsx`, `ConnectorsPage.jsx`, `Workflows.jsx`, `ListOfTools.jsx`, `Pipelines.jsx`, `ApiDeployment.jsx`
- `addCoOwner()` / `removeCoOwner()` in deployment, pipeline, and workflow services

Can this PR break any existing features. If yes, please list possible items. If no, please explain why. (PS: Admins do not merge the PR without this section filled)
- `created_by` field remains unchanged as an audit field.

Database Migrations
| App | Migration |
| --- | --- |
| adapter_processor_v2 | 0004_adapterinstance_co_owners |
| adapter_processor_v2 | 0005_backfill_creator_to_co_owners |
| api_v2 | 0004_apideployment_co_owners |
| api_v2 | 0005_backfill_creator_to_co_owners |
| connector_v2 | 0006_connectorinstance_co_owners |
| connector_v2 | 0007_backfill_creator_to_co_owners |
| pipeline_v2 | 0004_pipeline_co_owners |
| pipeline_v2 | 0005_backfill_creator_to_co_owners |
| workflow_v2 | 0020_workflow_co_owners |
| workflow_v2 | 0021_backfill_creator_to_co_owners |
| prompt_studio_core_v2 | 0007_customtool_co_owners |
| prompt_studio_core_v2 | 0008_backfill_creator_to_co_owners |
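The ownership semantics described under "How" can be condensed into a tiny in-memory model (a sketch under stated assumptions — the class and method names are illustrative, not the real Django models):

```python
class CoOwnedResource:
    """In-memory stand-in for a model carrying the new co_owners M2M."""

    def __init__(self, created_by=None):
        self.created_by = created_by  # audit-only, never consulted for access
        # Mirrors AuditSerializer.create() auto-adding the creator.
        self.co_owners = {created_by} if created_by else set()

    def visible_to(self, user):
        # Mirrors the for_user() change: visibility follows
        # Q(co_owners=user), not Q(created_by=user).
        return user in self.co_owners
```

Adding a co-owner to the set is all it takes to grant access, while `created_by` stays fixed for auditing — the two fields are deliberately decoupled.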
Relevant Docs
Related Issues or PRs
Dependencies Versions
Notes on Testing
- `backend/permissions/tests/test_co_owners.py` covering all permission classes and serializer validation

Key Design Decisions
- `co_owners` is single source of truth — `created_by` is audit-only
- Creators are always in `co_owners` — backfill for existing, `create()` for new

Checklist
I have read and understood the Contribution Guidelines.