
Conversation

@dengzhhu653 (Member)

What changes were proposed in this pull request?

Why are the changes needed?

Does this PR introduce any user-facing change?

How was this patch tested?


Copilot AI left a comment


Pull request overview

This PR refactors Hive Metastore Server partition-related logic by extracting multiple “get partitions” code paths out of HMSHandler into a dedicated GetPartitionsHandler, and similarly extracting append-partition logic into AppendPartitionHandler. It also updates various tests to reflect the new exception/message behavior triggered by the refactor.

Changes:

  • Introduce GetPartitionsHandler to consolidate partition-fetching APIs and rewire multiple HMSHandler endpoints to use it.
  • Introduce AppendPartitionHandler and route append-partition requests through the handler framework (see the delegation sketch after this list).
  • Update tests and request argument plumbing (e.g., GetPartitionsArgs.order) to match the refactored behavior.

Reviewed changes

Copilot reviewed 11 out of 11 changed files in this pull request and generated 3 comments.

Summary per file:

  • standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/handler/GetPartitionsHandler.java — New consolidated handler for partition retrieval logic across multiple APIs.
  • standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/handler/AppendPartitionHandler.java — New handler to encapsulate append-partition logic previously in HMSHandler.
  • standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/handler/TAbstractBase.java — Adds a minimal TBase implementation used for handler request bodies.
  • standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/handler/AbstractRequestHandler.java — Adds class-level suppression for rawtypes warnings used by the handler registry.
  • standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/client/builder/GetPartitionsArgs.java — Extends the args with order to support ordered partition-name queries.
  • standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/IHMSHandler.java — Adds getMetaFilterHook() for handler use.
  • standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/HMSHandler.java — Rewires many partition APIs to delegate to the new handlers; adjusts exception flow and logging.
  • standalone-metastore/metastore-server/src/test/java/org/apache/hadoop/hive/metastore/client/TestListPartitions.java — Updates expected exception type(s) after the refactor.
  • standalone-metastore/metastore-server/src/test/java/org/apache/hadoop/hive/metastore/TestMetaStoreEndFunctionListener.java — Updates expected listener input table name formatting.
  • standalone-metastore/metastore-server/src/test/java/org/apache/hadoop/hive/metastore/TestHiveMetaStoreMethods.java — Updates expected exception type for the partitions-by-names negative test.
  • standalone-metastore/metastore-server/src/test/java/org/apache/hadoop/hive/metastore/TestHiveMetaStore.java — Updates expected exception type and error message substring.
Comments suppressed due to low confidence (1)

standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/HMSHandler.java:3370

  • In get_partitions_pspec(), the catch block rethrows, but the finally block calls endFunction(..., null, tbl_name) with a null exception. This means endFunction listeners/audit won’t receive the failure cause for this method. Please capture the exception (similar to other HMSHandler methods) and pass it into endFunction.
    } catch (Exception e) {
      throw handleException(e).throwIfInstance(NoSuchObjectException.class, MetaException.class).defaultMetaException();
    }
    finally {
      endFunction("get_partitions_pspec", partitionSpecs != null && !partitionSpecs.isEmpty(), null, tbl_name);
    }



Copilot AI left a comment


Pull request overview

Copilot reviewed 11 out of 11 changed files in this pull request and generated 3 comments.




Copilot AI left a comment


Pull request overview

Copilot reviewed 11 out of 11 changed files in this pull request and generated 6 comments.

Comments suppressed due to low confidence (1)

standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/HMSHandler.java:3370

  • In get_partitions_pspec(), the finally block always passes a null exception to endFunction. If an exception is thrown (and rethrown in the catch), endFunction listeners/metrics won't see the failure cause. Track the caught exception in a local variable (like other methods in this class) and pass it to endFunction.
    } catch (Exception e) {
      throw handleException(e).throwIfInstance(NoSuchObjectException.class, MetaException.class).defaultMetaException();
    }
    finally {
      endFunction("get_partitions_pspec", partitionSpecs != null && !partitionSpecs.isEmpty(), null, tbl_name);
    }


Comment on lines 105 to 109
GetTableRequest getTableRequest = new GetTableRequest(dbName, tblName);
getTableRequest.setCatName(catName);
this.table = handler.get_table_core(getTableRequest);
((HMSHandler) handler).firePreEvent(new PreReadTableEvent(table, handler));
authorizeTableForPartitionMetadata();

Copilot AI Feb 10, 2026


beforeExecute() always loads the full Table via get_table_core() and fires a PreReadTableEvent, even for requests that only need partition names. This introduces an unconditional metastore table lookup (and pre-event overhead) on all get-partitions code paths, which is a behavior/performance change from the previous per-API implementations (some only loaded the table when needed). Consider making table loading lazy and/or conditional (e.g., only when column stats/transformations are requested or when preListeners exist), and performing FilterUtils.checkDbAndTableFilters before any table fetch when possible.

@dengzhhu653 (Member, Author)


The preListeners should not be empty; they always include the TransactionalValidationListener.
The old partition-name-fetching APIs fire fireReadTablePreEvent(catName, dbName, tblName), which eventually creates this Table anyway.
This check also ensures the target table exists before the request is executed.

// Since partition limit is configured, we need fetch at most (limit + 1) partition names
int max = MetastoreConf.getIntVar(handler.getConf(), MetastoreConf.ConfVars.LIMIT_PARTITION_REQUEST) + 1;
List<String> partNames = rs.listPartitionNames(catName, dbName, tblName, args.getDefaultPartName(),
args.getExpr(), null, max);

Copilot AI Feb 10, 2026


In the partition-fetch-limit branch of getPartitionsByExpr(), listPartitionNames is called with a null "order" argument. This ignores GetPartitionsArgs.order and can change the ordering/selection of the partition names compared to the non-limited path (and compared to the fetchPartNames path which passes args.getOrder()). Pass args.getOrder() instead of null here.

Suggested change
-       args.getExpr(), null, max);
+       args.getExpr(), args.getOrder(), max);

@dengzhhu653 (Member, Author)


args.getOrder() doesn't make sense here.

Comment on lines +2679 to 2688
      AppendPartitionHandler appendPartition = AbstractRequestHandler.offer(this, appendPartitionsReq);
      return appendPartition.getResult().partition();
    } catch (Exception e) {
      ex = e;
      throw handleException(e)
          .throwIfInstance(MetaException.class, InvalidObjectException.class, AlreadyExistsException.class)
          .defaultMetaException();
    } finally {
-     endFunction("append_partition_req", ret != null, ex, tableName);
+     endFunction("append_partition_req", ex == null, ex, tableName);
    }

Copilot AI Feb 10, 2026


append_partition_req() reports success to endFunction based solely on (ex == null). If the underlying AppendPartitionHandler completes but returns success=false (e.g., transaction commit fails or addPartition returns false without throwing), endFunction listeners/metrics will incorrectly record this as success. Capture the handler result and pass its success() (or call appendPartition.success()) into endFunction instead.


Copilot AI left a comment


Pull request overview

Copilot reviewed 11 out of 11 changed files in this pull request and generated 1 comment.

Comments suppressed due to low confidence (1)

standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/HMSHandler.java:3375

  • get_partitions_pspec() now catches and rethrows exceptions, but the endFunction(...) call in the finally block always passes a null exception. This prevents MetaStoreEndFunctionListener(s) from seeing failures for this API. Capture the caught exception into a local variable and pass it through to endFunction to keep end-function auditing/metrics consistent with other HMS APIs.
    } catch (Exception e) {
      throw handleException(e).throwIfInstance(NoSuchObjectException.class, MetaException.class).defaultMetaException();
    }
    finally {
      endFunction("get_partitions_pspec", partitionSpecs != null && !partitionSpecs.isEmpty(), null, tbl_name);
    }


if (StringUtils.isBlank(dbName) || StringUtils.isBlank(tableName.getTable())) {
  throw e;
}
return Collections.emptyList();

Copilot AI Feb 10, 2026


fetch_partition_names_req() returns Collections.emptyList() for any NoSuchObjectException when db/table names are non-blank. With the refactor, NoSuchObjectException can also come from FilterUtils.checkDbAndTableFilters (server-side filtering/authorization), so this branch can silently convert an auth/filtered-out condition into an empty partition list, changing the previous behavior (which surfaced an exception). Consider rethrowing when the NoSuchObjectException originates from filtering/authorization (e.g., when server filtering is enabled), and only returning empty for the specific "table not found" case / legacy ObjectStore behavior.

Suggested change
-       return Collections.emptyList();
+       // For legacy ObjectStore behavior, only return an empty list for the specific "table not found"
+       // case. For other causes (e.g., server-side filtering/authorization), rethrow so that callers
+       // still see the exception rather than an empty result.
+       String message = e.getMessage();
+       if (message != null && message.contains("table not found")) {
+         return Collections.emptyList();
+       }
+       throw e;
