Add DELETE /api/v0/pipeline-nodes/:id with transactions, error handli… #21

Open

OmarElhagagy wants to merge 3 commits into patchwork-body:main from OmarElhagagy:main

Conversation

@OmarElhagagy

Hi,
I've implemented the DELETE /api/v0/pipeline-nodes/:id endpoint as requested in #20. Here’s what’s included:

  • Functionality: Deletes a pipeline node along with its connections, inputs, and outputs.
  • Transactions: Ensures all deletions are atomic using SQLx transactions.
  • Error Handling: Returns 404 if the node isn’t found and 500 with JSON error details on failure, improving on the previous internal_error approach.
  • Logging: Added tracing::info logs for start and success, aligning with the project’s observability setup.
  • Tests: Included two integration tests (test_delete_pipeline_node_success and test_delete_pipeline_node_not_found) to verify behavior.

Note:

  • Unable to run cargo sqlx prepare to update query cache. Someone with database access needs to run it.

The code is tested locally with a mock database URL, so please adjust the test URL to match your setup. Let me know if there’s anything else to tweak or if I should add authorization checks (I skipped them since they weren’t specified in the issue).

Looking forward to feedback!

@patchwork-body
Owner

Hi @OmarElhagagy, thank you for your PR; I will review your work right away. Regarding the cargo sqlx prepare command: as described in the Setup section of the README, you will need to start PostgreSQL first, and the easiest way to do that is to run the database in Docker. Check out the docker-compose file; you can comment out the services that are unrelated to the scope of work you're doing.

Feel free to ask any questions. I know the README could be better :)


Copilot AI left a comment


Copilot reviewed 2 out of 2 changed files in this pull request and generated no comments.

Comments suppressed due to low confidence (2)

api/src/routes/api/v0/pipeline_nodes.rs:55

  • [nitpick] The struct name 'ApiError' is ambiguous. It should be renamed to 'PipelineNodeApiError' to be more descriptive.
struct ApiError {

api/src/routes/api/v0/pipeline_nodes.rs:89

  • [nitpick] The error message 'Pipeline node not found' could be more descriptive by including the node ID.
error: "Pipeline node not found".to_string(),

Owner

@patchwork-body left a comment


Good start! There are a few things that need a closer look; I believe you'll have no problem handling them. Thanks in advance!

DatabaseConnection(mut conn): DatabaseConnection,
Path(id): Path<Uuid>,
) -> Result<StatusCode, (StatusCode, Json<ApiError>)> {
info!("Attempting to delete pipeline node: {}", id);
Owner


Suggested change
info!("Attempting to delete pipeline node: {}", id);
info!("Attempting to delete pipeline node: {id}");

let mut tx = conn
.begin()
.await
.map_err(|e| (StatusCode::INTERNAL_SERVER_ERROR, Json(ApiError::from(e))))?;
Owner


Let's use the internal_error helper from utils. You can modify this fn if you think we need a better approach, but I would prefer to stick to a common, reusable solution across the codebase.

}

// Delete connections
sqlx::query!(
Owner


We don't actually need to manually delete pipeline connections, inputs, and outputs, because PostgreSQL will make sure those are gone once we delete the related pipeline_node, thanks to the ON DELETE CASCADE rule.

@OmarElhagagy
Author

Hi @patchwork-body, thanks for the feedback! I'll work on them right away.

@OmarElhagagy
Author

Hi @patchwork-body,

Thanks for the great review and suggestions! I’ve updated the code based on your feedback:

  1. Added use sqlx::Acquire to fix the transaction issue.
  2. Changed logging to info!("... {id}") for consistency.
  3. Simplified the test by removing unused pipeline_id and node_id vars.
  4. Switched to internal_error for error handling to match the codebase.
  5. Removed manual deletions, relying on ON DELETE CASCADE instead.

I also ran cargo sqlx prepare using Docker with the docker-compose file. Let me know if there’s anything else to adjust; I appreciate the guidance! If you need any more help on this project, feel free to hit me up anytime!

Owner

@patchwork-body left a comment


Thank you Omar, and sorry it took so long to provide feedback; I've been busy this week. Wish you all the best!

use axum::{extract::Path, Json};
use hyper::StatusCode;
use serde::{Deserialize, Serialize};
use serde_json::json;
Owner


unused import 🤔

pub async fn delete(
DatabaseConnection(mut conn): DatabaseConnection,
Path(id): Path<Uuid>,
) -> Result<StatusCode, (StatusCode, Json<ApiError>)> {
Owner


The return type will conflict with what internal_error returns.

}))
}

#[cfg(test)]
Owner


I would suggest removing the tests for now; we'll deal with that later on.
By the way, the types seem to be broken; if you want to keep them, please double-check them.
