
[WIP]Feat: round-level device resource reuse for multirun#335

Open
doraemonmj wants to merge 1 commit into hw-native-sys:main from doraemonmj:reuse

Conversation

@doraemonmj
Contributor

Summary

  • When repeat_rounds > 1, allocate device resources once on the first round, reuse them across subsequent rounds, and free them once at the end
  • New C APIs: reinit_runtime (re-copies input data to existing device addresses) and finalize_runtime_round (copies results back without freeing)
  • Only the tensormap_and_ringbuffer runtime implements true reuse; other runtimes automatically fall back to full init/finalize per round
  • Python Runtime.initialize() auto-detects first vs. subsequent calls, so no caller-side branching is needed
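The first-vs-subsequent detection in Runtime.initialize() can be sketched as follows. This is a minimal illustration: the _initialized flag matches the PR description, but the binding names c_init/c_reinit and the constructor shape are assumptions standing in for the real init_runtime/reinit_runtime bindings.

```python
class Runtime:
    """Sketch of round-aware initialization. `c_init` and `c_reinit`
    stand in for the real C bindings (init_runtime / reinit_runtime)."""

    def __init__(self, c_init, c_reinit):
        self._c_init = c_init      # full allocation + input copy-in
        self._c_reinit = c_reinit  # lightweight re-copy to existing addresses
        self._initialized = False

    def initialize(self, tensors):
        # First call: allocate device resources and copy inputs in.
        # Later calls: only re-copy inputs; the caller never branches.
        if not self._initialized:
            rc = self._c_init(tensors)
            self._initialized = (rc == 0)
            return rc
        return self._c_reinit(tensors)
```

Calling initialize() twice with this sketch dispatches to the full init once and the lightweight reinit afterwards, which is the behavior the summary describes.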

Reused resources

Kernel binary, GM Heap (~1GB), Shared Memory, Orch SO device copy, tensor buffer

@gemini-code-assist

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request improves the efficiency of multi-round kernel executions by introducing a device resource reuse mechanism. Instead of re-allocating and freeing resources for each round, the runtime keeps device memory and other critical components alive and re-initializes them in place, reducing per-round overhead for iterative workloads that repeat a computation with varying input data.

Highlights

  • Resource Reuse for Multi-Round Executions: Implemented a mechanism to allocate device resources once for the first round of execution and reuse them across subsequent rounds when repeat_rounds > 1, freeing them only at the very end. This optimizes performance by avoiding redundant resource allocation and deallocation.
  • New C APIs for Runtime Management: Introduced two new C APIs: reinit_runtime for lightweight re-initialization (re-copying input data to existing device addresses) and finalize_runtime_round for copying results back without freeing device resources.
  • Runtime Specific Implementations and Fallbacks: The tensormap_and_ringbuffer runtime now fully supports true resource reuse. Other runtimes (like aicpu_build_graph and host_build_graph) automatically fall back to full initialization and finalization per round if reinit_runtime or finalize_runtime_round are not explicitly supported.
  • Python API Enhancements: The Python Runtime.initialize() method has been updated to automatically detect whether it's the first or a subsequent call, intelligently delegating to the appropriate C API (init_runtime or reinit_runtime) without requiring explicit branching from the caller.
  • Reused Resources: The resources that are now reused across rounds include kernel binaries, GM Heap memory (approximately 1GB), Shared Memory, Orchestration SO device copies, and tensor buffers.
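The fallback behavior in the highlights above might be dispatched like this. All names here are hypothetical: supports_round_reuse stands in for whatever capability check distinguishes tensormap_and_ringbuffer from the runtimes that lack reinit_runtime/finalize_runtime_round support.

```python
def finalize_after_round(rt, is_last_round):
    """Sketch of the per-round fallback: runtimes that implement
    round-level finalization copy results back and keep device
    resources alive between rounds; other runtimes do a full
    finalize (copy back + free) after every round."""
    if not is_last_round and rt.supports_round_reuse:
        return rt.finalize_round()  # copy results back, keep resources
    return rt.finalize()            # copy results back and free everything
```

A runtime without round support therefore pays the full finalize/init cost each round, which preserves correctness at the price of the optimization.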


@gemini-code-assist bot left a comment


Code Review

The pull request introduces a performance optimization for repeated execution rounds by implementing lightweight runtime re-initialization and round-level finalization. This involves moving runtime initialization outside the loop in code_runner.py and conditionally calling finalize_round() for intermediate rounds. New C API functions (reinit_runtime, finalize_runtime_round) and corresponding Python bindings are added, along with an _initialized flag in the Runtime class to manage state. While tensormap_and_ringbuffer runtimes gain full support for these new operations, aicpu_build_graph and host_build_graph runtimes provide stub implementations indicating lack of support. The review comments highlight code duplication in the validate_runtime_round_impl and validate_runtime_impl functions across a2a3 and a5 tensormap_and_ringbuffer runtimes, suggesting extraction into a shared helper function for improved maintainability.
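The restructured loop in code_runner.py described above — initialization hoisted out of the loop, finalize_round() for intermediate rounds — follows this shape. The function and method names here are placeholders for illustration, not the file's actual symbols.

```python
def run_rounds(rt, tensors, repeat_rounds):
    """Sketch of the round-level reuse pattern: initialize() performs a
    full init on its first call and a lightweight reinit afterwards;
    device resources are freed only by the final finalize()."""
    for r in range(repeat_rounds):
        rt.initialize(tensors)       # full init on round 0, reinit after
        rt.execute()
        if r < repeat_rounds - 1:
            rt.finalize_round()      # intermediate: copy results, keep resources
    rt.finalize()                    # last round: copy results and free
```

For repeat_rounds = 3 this yields init, execute, round-finalize twice, then a single full finalize, so allocation and free each happen exactly once.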

Comment on lines +392 to +440
extern "C" int validate_runtime_round_impl(Runtime *runtime) {
    if (runtime == nullptr) {
        LOG_ERROR("Runtime pointer is null");
        return -1;
    }

    int rc = 0;
    LOG_INFO("=== Round Finalize: Copying Results Back ===");

    TensorPair* tensor_pairs = runtime->get_tensor_pairs();
    int tensor_pair_count = runtime->get_tensor_pair_count();

    void* pto2_sm = runtime->get_pto2_gm_sm_ptr();
    uint64_t graph_out_ptr = 0;
    uint64_t graph_out_size = 0;

    if (pto2_sm != nullptr) {
        PTO2SharedMemoryHeader host_header;
        int hdr_rc = runtime->host_api.copy_from_device(&host_header, pto2_sm, sizeof(PTO2SharedMemoryHeader));
        if (hdr_rc == 0) {
            graph_out_ptr = host_header.graph_output_ptr;
            graph_out_size = host_header.graph_output_size;
        }
    }

    bool first_output_tensor = true;
    for (int i = 0; i < tensor_pair_count; i++) {
        const TensorPair& pair = tensor_pairs[i];
        if (pair.dev_ptr == nullptr || pair.host_ptr == nullptr) continue;

        void* src_ptr = pair.dev_ptr;
        size_t copy_size = pair.size;

        if (first_output_tensor && graph_out_ptr != 0 && graph_out_size > 0) {
            src_ptr = reinterpret_cast<void*>(static_cast<uintptr_t>(graph_out_ptr));
            copy_size = static_cast<size_t>(graph_out_size);
            first_output_tensor = false;
        }

        int copy_rc = runtime->host_api.copy_from_device(pair.host_ptr, src_ptr, copy_size);
        if (copy_rc != 0) {
            LOG_ERROR("Round finalize: failed to copy tensor %d from device", i);
            rc = copy_rc;
        }
    }

    LOG_INFO("=== Round Finalize Complete ===");
    return rc;
}


medium

The logic for copying results back to the host in validate_runtime_round_impl is nearly identical to the corresponding part of validate_runtime_impl (lines 453-552). This code duplication could be avoided by extracting the common logic into a separate helper function, which would improve maintainability.

Comment on lines +383 to +431
extern "C" int validate_runtime_round_impl(Runtime *runtime) {
    if (runtime == nullptr) {
        LOG_ERROR("Runtime pointer is null");
        return -1;
    }

    int rc = 0;
    LOG_INFO("=== Round Finalize: Copying Results Back ===");

    TensorPair* tensor_pairs = runtime->get_tensor_pairs();
    int tensor_pair_count = runtime->get_tensor_pair_count();

    void* pto2_sm = runtime->get_pto2_gm_sm_ptr();
    uint64_t graph_out_ptr = 0;
    uint64_t graph_out_size = 0;

    if (pto2_sm != nullptr) {
        PTO2SharedMemoryHeader host_header;
        int hdr_rc = runtime->host_api.copy_from_device(&host_header, pto2_sm, sizeof(PTO2SharedMemoryHeader));
        if (hdr_rc == 0) {
            graph_out_ptr = host_header.graph_output_ptr;
            graph_out_size = host_header.graph_output_size;
        }
    }

    bool first_output_tensor = true;
    for (int i = 0; i < tensor_pair_count; i++) {
        const TensorPair& pair = tensor_pairs[i];
        if (pair.dev_ptr == nullptr || pair.host_ptr == nullptr) continue;

        void* src_ptr = pair.dev_ptr;
        size_t copy_size = pair.size;

        if (first_output_tensor && graph_out_ptr != 0 && graph_out_size > 0) {
            src_ptr = reinterpret_cast<void*>(static_cast<uintptr_t>(graph_out_ptr));
            copy_size = static_cast<size_t>(graph_out_size);
            first_output_tensor = false;
        }

        int copy_rc = runtime->host_api.copy_from_device(pair.host_ptr, src_ptr, copy_size);
        if (copy_rc != 0) {
            LOG_ERROR("Round finalize: failed to copy tensor %d from device", i);
            rc = copy_rc;
        }
    }

    LOG_INFO("=== Round Finalize Complete ===");
    return rc;
}


medium

Similar to the a2a3 version of this file, the logic for copying results back to the host in validate_runtime_round_impl is nearly identical to the corresponding part of validate_runtime_impl (lines 444-543). To improve maintainability, consider extracting this duplicated code into a shared helper function.
