[WIP] Feat: round-level device resource reuse for multirun #335
doraemonmj wants to merge 1 commit into hw-native-sys:main from
Conversation
Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request improves the efficiency of multi-round kernel executions by introducing a device resource reuse mechanism. Instead of re-allocating and de-allocating resources on each round, the system now keeps device memory and other critical components alive and re-initializes them in place, reducing overhead for iterative workloads. This yields a more streamlined runtime, particularly for repeated computations with varying input data.
Code Review
The pull request introduces a performance optimization for repeated execution rounds by implementing lightweight runtime re-initialization and round-level finalization. This involves moving runtime initialization outside the loop in code_runner.py and conditionally calling finalize_round() for intermediate rounds. New C API functions (reinit_runtime, finalize_runtime_round) and corresponding Python bindings are added, along with an _initialized flag in the Runtime class to manage state. While tensormap_and_ringbuffer runtimes gain full support for these new operations, aicpu_build_graph and host_build_graph runtimes provide stub implementations indicating lack of support. The review comments highlight code duplication in the validate_runtime_round_impl and validate_runtime_impl functions across a2a3 and a5 tensormap_and_ringbuffer runtimes, suggesting extraction into a shared helper function for improved maintainability.
```cpp
extern "C" int validate_runtime_round_impl(Runtime *runtime) {
    if (runtime == nullptr) {
        LOG_ERROR("Runtime pointer is null");
        return -1;
    }

    int rc = 0;
    LOG_INFO("=== Round Finalize: Copying Results Back ===");

    TensorPair* tensor_pairs = runtime->get_tensor_pairs();
    int tensor_pair_count = runtime->get_tensor_pair_count();

    void* pto2_sm = runtime->get_pto2_gm_sm_ptr();
    uint64_t graph_out_ptr = 0;
    uint64_t graph_out_size = 0;

    if (pto2_sm != nullptr) {
        PTO2SharedMemoryHeader host_header;
        int hdr_rc = runtime->host_api.copy_from_device(&host_header, pto2_sm, sizeof(PTO2SharedMemoryHeader));
        if (hdr_rc == 0) {
            graph_out_ptr = host_header.graph_output_ptr;
            graph_out_size = host_header.graph_output_size;
        }
    }

    bool first_output_tensor = true;
    for (int i = 0; i < tensor_pair_count; i++) {
        const TensorPair& pair = tensor_pairs[i];
        if (pair.dev_ptr == nullptr || pair.host_ptr == nullptr) continue;

        void* src_ptr = pair.dev_ptr;
        size_t copy_size = pair.size;

        if (first_output_tensor && graph_out_ptr != 0 && graph_out_size > 0) {
            src_ptr = reinterpret_cast<void*>(static_cast<uintptr_t>(graph_out_ptr));
            copy_size = static_cast<size_t>(graph_out_size);
            first_output_tensor = false;
        }

        int copy_rc = runtime->host_api.copy_from_device(pair.host_ptr, src_ptr, copy_size);
        if (copy_rc != 0) {
            LOG_ERROR("Round finalize: failed to copy tensor %d from device", i);
            rc = copy_rc;
        }
    }

    LOG_INFO("=== Round Finalize Complete ===");
    return rc;
}
```
The logic for copying results back to the host in validate_runtime_round_impl is nearly identical to the corresponding part of validate_runtime_impl (lines 453-552). This code duplication could be avoided by extracting the common logic into a separate helper function, which would improve maintainability.
```cpp
extern "C" int validate_runtime_round_impl(Runtime *runtime) {
    if (runtime == nullptr) {
        LOG_ERROR("Runtime pointer is null");
        return -1;
    }

    int rc = 0;
    LOG_INFO("=== Round Finalize: Copying Results Back ===");

    TensorPair* tensor_pairs = runtime->get_tensor_pairs();
    int tensor_pair_count = runtime->get_tensor_pair_count();

    void* pto2_sm = runtime->get_pto2_gm_sm_ptr();
    uint64_t graph_out_ptr = 0;
    uint64_t graph_out_size = 0;

    if (pto2_sm != nullptr) {
        PTO2SharedMemoryHeader host_header;
        int hdr_rc = runtime->host_api.copy_from_device(&host_header, pto2_sm, sizeof(PTO2SharedMemoryHeader));
        if (hdr_rc == 0) {
            graph_out_ptr = host_header.graph_output_ptr;
            graph_out_size = host_header.graph_output_size;
        }
    }

    bool first_output_tensor = true;
    for (int i = 0; i < tensor_pair_count; i++) {
        const TensorPair& pair = tensor_pairs[i];
        if (pair.dev_ptr == nullptr || pair.host_ptr == nullptr) continue;

        void* src_ptr = pair.dev_ptr;
        size_t copy_size = pair.size;

        if (first_output_tensor && graph_out_ptr != 0 && graph_out_size > 0) {
            src_ptr = reinterpret_cast<void*>(static_cast<uintptr_t>(graph_out_ptr));
            copy_size = static_cast<size_t>(graph_out_size);
            first_output_tensor = false;
        }

        int copy_rc = runtime->host_api.copy_from_device(pair.host_ptr, src_ptr, copy_size);
        if (copy_rc != 0) {
            LOG_ERROR("Round finalize: failed to copy tensor %d from device", i);
            rc = copy_rc;
        }
    }

    LOG_INFO("=== Round Finalize Complete ===");
    return rc;
}
```
Similar to the a2a3 version of this file, the logic for copying results back to the host in validate_runtime_round_impl is nearly identical to the corresponding part of validate_runtime_impl (lines 444-543). To improve maintainability, consider extracting this duplicated code into a shared helper function.
Summary

- When `repeat_rounds > 1`, allocate device resources once on the first round, reuse them across subsequent rounds, and free them once at the end.
- New C APIs: `reinit_runtime` (re-copies input data to existing device addresses) and `finalize_runtime_round` (copies results back without freeing).
- Only the `tensormap_and_ringbuffer` runtime implements true reuse; other runtimes automatically fall back to a full init/finalize per round.
- The Python `Runtime.initialize()` auto-detects first vs. subsequent calls, so callers need no branching.

Reused resources

- Kernel binary, GM heap (~1 GB), shared memory, orch SO device copy, tensor buffer.