feat(tensor): add TensorLayout enum and DN layout support#242

Draft
ChaoZheng109 wants to merge 1 commit into hw-native-sys:main from ChaoZheng109:tensor-layout

Conversation


@ChaoZheng109 ChaoZheng109 commented Mar 10, 2026

Add TensorLayout enum to distinguish row-major (ND) and column-major (DN)
memory layouts. DN layout swaps the last two dimensions between logical
(shapes) and physical (raw_shapes) storage.

Changes:

  • Add TensorLayout enum (ND=row-major, DN=col-major for last 2 dims)
  • Add layout field to Tensor struct
  • Update constructors and factory functions to accept layout parameter
  • view(): auto-swap last 2 offset dimensions for DN layout
  • make_tensor/make_tensor_external: auto-swap last 2 raw_shapes dims for DN
  • Update documentation with DN layout examples

DN invariant: layout=DN implies raw_shapes has last two dims swapped vs shapes
Example: shapes=[M,N], layout=DN → raw_shapes=[N,M] (column-major)
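The enum and the DN invariant above can be sketched as follows. This is a minimal standalone illustration, not the actual `tensor.h` code: `RUNTIME_MAX_TENSOR_DIMS` is the fixed dimension limit named later in the review, while `TensorShape` and `make_shape` are hypothetical names introduced here for the example.

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>

constexpr size_t RUNTIME_MAX_TENSOR_DIMS = 8;  // assumed value for the sketch

// ND = row-major; DN = column-major in the last two dimensions.
enum class TensorLayout { ND, DN };

// Minimal stand-in for the shape bookkeeping in the Tensor struct.
struct TensorShape {
    size_t shapes[RUNTIME_MAX_TENSOR_DIMS];      // logical dims
    size_t raw_shapes[RUNTIME_MAX_TENSOR_DIMS];  // physical dims
    size_t ndims;
    TensorLayout layout;
};

// DN invariant: raw_shapes holds the last two dims swapped vs shapes.
TensorShape make_shape(const size_t* dims, size_t ndims, TensorLayout layout) {
    TensorShape t{};
    t.ndims = ndims;
    t.layout = layout;
    std::copy(dims, dims + ndims, t.shapes);
    std::copy(dims, dims + ndims, t.raw_shapes);
    if (layout == TensorLayout::DN && ndims >= 2) {
        std::swap(t.raw_shapes[ndims - 2], t.raw_shapes[ndims - 1]);
    }
    return t;
}
```

For `shapes=[M,N]` with `layout=DN`, this yields `raw_shapes=[N,M]`, matching the example in the commit message.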

@gemini-code-assist

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces a TensorLayout enum to provide explicit control over how logical dimensions map to physical storage for Tensor objects. This feature allows for both normal (ND) and reversed (DN) dimension mappings, with the DN layout automatically adjusting view offsets to match its physical storage. The changes integrate this new layout concept throughout the Tensor struct's initialization, view creation, and debugging output, enhancing flexibility in tensor data representation.

Highlights

  • New TensorLayout Enum: Introduced TensorLayout enum with ND (Normal Dimensions) and DN (Dimensions iN reverse) options to define physical-to-logical dimension mapping.
  • Tensor Struct Update: Added a layout field of type TensorLayout to the Tensor struct to store the tensor's dimension mapping.
  • view() Method Logic: Modified the view() method to automatically reverse view offsets when the tensor's layout is DN, ensuring correct physical storage mapping.
  • Constructor and Factory Function Updates: Updated Tensor constructors and make_tensor_external, make_tensor factory functions to accept and initialize the new TensorLayout parameter, defaulting to ND.
  • Debugging Output: Enhanced the Tensor::to_string method to include the tensor's layout for easier debugging.
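The `view()` behavior highlighted above — reversing the last two offset dimensions under DN so logical offsets address the swapped physical storage — can be sketched in isolation. The helper name `map_view_offsets` is hypothetical; the real method signature is not shown on this page.

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>

enum class TensorLayout { ND, DN };

// For a DN tensor, a logical view offset [..., i, j] addresses physical
// storage at [..., j, i], so the last two offsets are swapped in place.
// ND offsets pass through unchanged.
void map_view_offsets(size_t* offsets, size_t ndims, TensorLayout layout) {
    if (layout == TensorLayout::DN && ndims >= 2) {
        std::swap(offsets[ndims - 2], offsets[ndims - 1]);
    }
}
```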


Changelog
  • src/a2a3/runtime/tensormap_and_ringbuffer/runtime/tensor.h
    • Defined TensorLayout enum (ND, DN) for dimension mapping
    • Added layout member to the Tensor struct
    • Modified Tensor constructors to accept an optional TensorLayout parameter
    • Updated Tensor::init method to store the TensorLayout
    • Modified Tensor::view method to reverse view_offsets if layout is DN
    • Updated Tensor::to_string to display the tensor's layout
    • Modified make_tensor_external and make_tensor factory functions to include TensorLayout parameter


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces a TensorLayout enum to support different dimension mappings (ND and DN) for handling tensors with varied memory layouts. However, the implementation in the view() method contains a critical stack-based buffer overflow vulnerability due to a lack of bounds checking for ndims against the fixed-size reversed_offsets array. Additionally, the review includes a suggestion to correct a minor typo and a refactoring proposal for the view() method to enhance code readability and maintainability.
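The overflow the review describes — copying `ndims` offsets into a fixed-size `reversed_offsets` array without a bounds check — could be guarded along these lines. This is a hedged sketch, not the PR's actual fix: `copy_offsets_checked` is a hypothetical helper, and the assumed limit mirrors the `RUNTIME_MAX_TENSOR_DIMS` constant named in the review.

```cpp
#include <cassert>
#include <cstddef>
#include <cstdio>

constexpr size_t RUNTIME_MAX_TENSOR_DIMS = 8;  // assumed value for the sketch

// Refuses to copy (and reports an error) instead of writing past the
// fixed-size destination when ndims exceeds RUNTIME_MAX_TENSOR_DIMS.
bool copy_offsets_checked(const size_t* src, size_t ndims,
                          size_t (&dst)[RUNTIME_MAX_TENSOR_DIMS]) {
    if (ndims > RUNTIME_MAX_TENSOR_DIMS) {
        std::fprintf(stderr, "ndims %zu exceeds max %zu\n",
                     ndims, RUNTIME_MAX_TENSOR_DIMS);
        return false;
    }
    for (size_t i = 0; i < ndims; ++i) dst[i] = src[i];
    return true;
}
```

Taking the destination as an array reference rather than a raw pointer lets the compiler enforce the buffer size at the call site.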

@ChaoWao ChaoWao marked this pull request as draft March 11, 2026 02:07
@ChaoZheng109

/gemini review


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces a TensorLayout enum to support different memory layouts for tensors, specifically for row-major (ND) and a mixed-layout (DN) where the last two dimensions are column-major. While the changes correctly propagate the new layout property and update methods like view() and factory functions make_tensor_external and make_tensor, the implementation introduces several stack-based buffer overflows. These vulnerabilities arise because the code lacks validation for the number of dimensions (ndims) against the fixed-size arrays (RUNTIME_MAX_TENSOR_DIMS) in view, make_tensor_external, and make_tensor. Additionally, there's an opportunity to refactor duplicated and verbose code in the factory functions for improved maintainability and readability.
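The duplication the review flags across `make_tensor` and `make_tensor_external` could be factored into one shared helper that both validates `ndims` (addressing the overflow) and applies the DN swap in a single place. This is a hypothetical refactoring sketch, not the PR's code; `init_raw_shapes` is an invented name.

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>

constexpr size_t RUNTIME_MAX_TENSOR_DIMS = 8;  // assumed value for the sketch
enum class TensorLayout { ND, DN };

// Shared helper for the factory functions: validate ndims, copy the
// logical dims into raw_shapes, and apply the DN swap exactly once, so
// make_tensor and make_tensor_external no longer duplicate this logic.
bool init_raw_shapes(const size_t* dims, size_t ndims, TensorLayout layout,
                     size_t (&raw_shapes)[RUNTIME_MAX_TENSOR_DIMS]) {
    if (ndims == 0 || ndims > RUNTIME_MAX_TENSOR_DIMS) return false;
    std::copy(dims, dims + ndims, raw_shapes);
    if (layout == TensorLayout::DN && ndims >= 2) {
        std::swap(raw_shapes[ndims - 2], raw_shapes[ndims - 1]);
    }
    return true;
}
```

Each factory would then call this helper and propagate the `bool` as its own error result, keeping the bounds check impossible to forget in either path.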

@ChaoZheng109 ChaoZheng109 changed the title feat(tensor): add TensorLayout for ND/DN dimension mapping feat(tensor): add TensorLayout enum and DN layout support Mar 11, 2026