
Centralize cuda_arch capability definition #202

Open

Copilot wants to merge 4 commits into master from copilot/sub-pr-199

Conversation

Contributor

Copilot AI commented Mar 12, 2026

Wait for:

cuda_arch=75 was duplicated across 5 package specs in cuda/spack.yaml and tf/spack.yaml, making it hard to retarget GPU architecture.

Changes

  • New spack-environment/cuda_arch.yaml: Single source of truth for the CUDA architecture target (currently cuda_arch=75, Compute Capability 7.5 / Turing). Change GPU target here only.
  • cuda/spack.yaml and tf/spack.yaml: Include ../cuda_arch.yaml; drop inline cuda_arch=75 from acts, arrow, celeritas, py-torch, and py-tensorflow specs.
```yaml
# spack-environment/cuda_arch.yaml
# Current target: Compute Capability 7.5 (Turing: T4, RTX 2xxx, Quadro RTX)
packages:
  all:
    variants: cuda_arch=75
```
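For context, the include mechanism described above would look roughly like the following in one of the environment files (a sketch; the spec list shown here is illustrative, drawn from the packages named in this PR):

```yaml
# spack-environment/cuda/spack.yaml (sketch; spec list is illustrative)
spack:
  # Pull in the shared CUDA architecture setting
  include:
  - ../cuda_arch.yaml
  specs:
  # cuda_arch=75 no longer pinned per spec; it comes from cuda_arch.yaml
  - celeritas +cuda
  - py-torch +cuda
```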


Co-authored-by: wdconinc <4656391+wdconinc@users.noreply.github.com>
Copilot AI changed the title from [WIP] Address feedback on centralizing cuda_arch capability definition to Centralize cuda_arch capability definition Mar 12, 2026
Contributor

@wdconinc wdconinc left a comment


Looks sensible. Maybe there's a risk this will enable cuda_arch and therefore +cuda for packages where we currently don't explicitly enable +cuda, but would that be a bad thing?

Base automatically changed from pr/arrow_cuda to master March 12, 2026 23:12
@wdconinc wdconinc marked this pull request as ready for review March 12, 2026 23:13
Copilot AI review requested due to automatic review settings March 12, 2026 23:13
Contributor

Copilot AI left a comment


Pull request overview

This PR centralizes the CUDA compute capability setting for the Spack CUDA-related environments by introducing a shared cuda_arch.yaml include and removing per-spec cuda_arch=75 pins.

Changes:

  • Add a new spack-environment/cuda_arch.yaml and include it from CUDA/TensorFlow environments.
  • Remove explicit cuda_arch=75 from CUDA-enabled specs in spack-environment/cuda/spack.yaml.
  • Remove explicit cuda_arch=75 from the TensorFlow CUDA spec in spack-environment/tf/spack.yaml.

Reviewed changes

Copilot reviewed 3 out of 3 changed files in this pull request and generated 1 comment.

| File | Description |
| --- | --- |
| spack-environment/tf/spack.yaml | Includes centralized CUDA arch config; drops per-spec cuda_arch=75 on TensorFlow. |
| spack-environment/cuda/spack.yaml | Includes centralized CUDA arch config; drops per-spec cuda_arch=75 on multiple CUDA specs. |
| spack-environment/cuda_arch.yaml | New shared config intended to define CUDA architecture in one place. |


```yaml
# To target a different GPU architecture, change cuda_arch here.
packages:
  all:
    variants: cuda_arch=75
```

Copilot AI Mar 12, 2026


packages: all: variants: cuda_arch=75 will try to apply the cuda_arch variant to every package in the environment, including many CPU-only packages that don’t define cuda_arch, which can break concretization. This also conflicts with the established repo pattern in spack-environment/packages.yaml (see the comment about avoiding packages:all:variants and using require:any_of to avoid unsupported variants). Consider constraining the setting to CUDA-enabled specs only (e.g., via a conditional require with when: '+cuda', or by keeping cuda_arch=75 on the specific CUDA specs / per-package requires).

Suggested change:

```diff
-    variants: cuda_arch=75
+    require:
+    - when: '+cuda'
+      any_of: [cuda_arch=75, '@:']
```
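The other option mentioned in the comment, per-package requirements, might be sketched as follows (hypothetical; the package names are taken from the PR description, and only packages that actually build with +cuda would be listed):

```yaml
# Per-package alternative (sketch): constrain cuda_arch only on known
# CUDA-enabled packages, leaving CPU-only packages untouched
packages:
  py-torch:
    require:
    - spec: cuda_arch=75
      when: '+cuda'
  py-tensorflow:
    require:
    - spec: cuda_arch=75
      when: '+cuda'
```

This trades the single-line central definition for explicitness: concretization can never see an unsupported variant, at the cost of repeating the constraint per package.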


4 participants