Conversation
Co-authored-by: wdconinc <4656391+wdconinc@users.noreply.github.com>
wdconinc
left a comment
Looks sensible. Maybe there's a risk this will enable cuda_arch and therefore +cuda for packages where we currently don't explicitly enable +cuda, but would that be a bad thing?
Pull request overview
This PR centralizes the CUDA compute capability setting for the Spack CUDA-related environments by introducing a shared cuda_arch.yaml include and removing per-spec cuda_arch=75 pins.
Changes:
- Add a new `spack-environment/cuda_arch.yaml` and include it from the CUDA/TensorFlow environments.
- Remove the explicit `cuda_arch=75` from CUDA-enabled specs in `spack-environment/cuda/spack.yaml`.
- Remove the explicit `cuda_arch=75` from the TensorFlow CUDA spec in `spack-environment/tf/spack.yaml`.
Reviewed changes
Copilot reviewed 3 out of 3 changed files in this pull request and generated 1 comment.
| File | Description |
|---|---|
| spack-environment/tf/spack.yaml | Includes centralized CUDA arch config; drops per-spec cuda_arch=75 on TensorFlow. |
| spack-environment/cuda/spack.yaml | Includes centralized CUDA arch config; drops per-spec cuda_arch=75 on multiple CUDA specs. |
| spack-environment/cuda_arch.yaml | New shared config intended to define CUDA architecture in one place. |
```yaml
# To target a different GPU architecture, change cuda_arch here.
packages:
  all:
    variants: cuda_arch=75
```
Setting `packages:all:variants: cuda_arch=75` will try to apply the `cuda_arch` variant to every package in the environment, including many CPU-only packages that don't define `cuda_arch`, which can break concretization. This also conflicts with the established repo pattern in `spack-environment/packages.yaml` (see the comment there about avoiding `packages:all:variants` and using `require:any_of` to avoid unsupported variants). Consider constraining the setting to CUDA-enabled specs only, e.g. via a conditional `require` with `when: '+cuda'`, or by keeping `cuda_arch=75` on the specific CUDA specs / per-package requires.
Suggested change:

```diff
-    variants: cuda_arch=75
+    require:
+    - when: '+cuda'
+      any_of: [cuda_arch=75, '@:']
```
`cuda_arch=75` was duplicated across 5 package specs in `cuda/spack.yaml` and `tf/spack.yaml`, making it hard to retarget the GPU architecture.

Changes

- `spack-environment/cuda_arch.yaml`: Single source of truth for the CUDA architecture target (currently `cuda_arch=75`, Compute Capability 7.5 / Turing). Change the GPU target here only.
- `cuda/spack.yaml` and `tf/spack.yaml`: Include `../cuda_arch.yaml`; drop the inline `cuda_arch=75` from the `acts`, `arrow`, `celeritas`, `py-torch`, and `py-tensorflow` specs.
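As a rough sketch of how the pieces described above might fit together (the file contents and relative paths follow the PR description; the excerpted specs are illustrative, not the full environment):

```yaml
# spack-environment/cuda_arch.yaml -- shared CUDA architecture setting (sketch)
# To target a different GPU architecture, change cuda_arch here only.
packages:
  all:
    variants: cuda_arch=75

# spack-environment/cuda/spack.yaml -- excerpt showing the include (sketch)
spack:
  include:
  - ../cuda_arch.yaml
  specs:
  - py-torch +cuda   # previously pinned inline: py-torch +cuda cuda_arch=75
```

Note that, per the review comment, applying the variant under `packages:all` may need to be constrained with a conditional `require` so CPU-only packages are unaffected.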