Dropping CUDA 11.8 as a default CUDA version
CUDA 11.8 is the last holdover from the old days before conda-forge switched to the new and shiny CUDA 12+ infrastructure, where the CUDA toolchain is provided as native conda packages, rather than as a blob baked into a docker image.
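For a concrete picture of what that means for recipe authors, here's a minimal sketch of the build requirements in a CUDA-enabled recipe (illustrative only, not from any particular feedstock):

```yaml
# Minimal sketch of a CUDA-enabled meta.yaml's build requirements.
# Under the CUDA 12+ infrastructure, {{ compiler('cuda') }} resolves to
# native conda packages (e.g. cuda-nvcc); under 11.8, it resolved to the
# nvcc shim package wrapping the toolkit baked into the docker image.
requirements:
  build:
    - {{ compiler('c') }}
    - {{ compiler('cuda') }}
```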
For CUDA-enabled feedstocks, we've been building both 11.8 and 12.6 by default for a while now, but many feedstocks (notably pytorch, tensorflow, onnx, jax, etc.) already dropped CUDA 11.8 many months ago.
Due to various constraints (details below), we are dropping CUDA 11.8 as a default version in our global pinning on June 5th. It will still be possible to opt into building CUDA 11.8 on a per-feedstock basis where this is necessary or beneficial.
The above-mentioned constraints are mainly:
- it complicates our pinning, because the docker images and compilers need to be switched in lockstep with CUDA 11.8 (see the sketch after this list).
- it keeps us from migrating to newer CUDA 12.x versions that are necessary to support new architectures.
- it's not compatible with VS2022, which is due to become the default toolchain on Windows in conda-forge soon (the previous default, VS2019, reached end-of-life more than a year ago).
- it complicates our infrastructure in several places, due to the big differences between the setups before and after the switch to the new CUDA infrastructure.
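To illustrate the first point: in the global pinning, the CUDA version is zipped together with the compiler versions and docker images, so it cannot vary independently. The following is an abridged sketch of that zip structure (not the verbatim pinning file):

```yaml
# Abridged sketch of the zip structure in conda-forge-pinning: every
# entry in cuda_compiler_version needs a matching entry in each of the
# other zipped keys, which is what makes carrying 11.8 (with its older
# compilers and ubi8-based images) costly.
zip_keys:
  -
    - c_compiler_version
    - cxx_compiler_version
    - fortran_compiler_version
    - cuda_compiler
    - cuda_compiler_version
    - docker_image
```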
After we have removed CUDA 11.8 from the pinning, any feedstock still building that version will drop the respective CI jobs upon rerendering. For feedstocks that want to keep building CUDA 11.8 a bit longer, here's a sample configuration you can put under `recipe/conda_build_config.yaml` (and then rerender):
```yaml
cuda_compiler:
  - None
  - nvcc                        # [linux or win]
  - cuda-nvcc                   # [linux or win]
cuda_compiler_version:
  - None
  - 11.8                        # [linux or win]
  - 12.4                        # [linux and ppc64le]
  - 12.8                        # [(linux and not ppc64le) or win]

# use the same 11.8-enabled image for all variants (changes to ubi8,
# down from default alma9, also for non-CUDA and CUDA 12 builds)
docker_image:                   # [linux]
  # CUDA 11.8 builds
  - quay.io/condaforge/linux-anvil-x86_64-cuda11.8:ubi8   # [linux64 and os.environ.get("BUILD_PLATFORM") == "linux-64"]
  # CUDA 11.8 arch: native compilation (build == target)
  - quay.io/condaforge/linux-anvil-aarch64-cuda11.8:ubi8  # [aarch64 and os.environ.get("BUILD_PLATFORM") == "linux-aarch64"]
  - quay.io/condaforge/linux-anvil-ppc64le-cuda11.8:ubi8  # [ppc64le and os.environ.get("BUILD_PLATFORM") == "linux-ppc64le"]
  # CUDA 11.8 arch: cross-compilation (build != target)
  - quay.io/condaforge/linux-anvil-x86_64-cuda11.8:ubi8   # [aarch64 and os.environ.get("BUILD_PLATFORM") == "linux-64"]
  - quay.io/condaforge/linux-anvil-x86_64-cuda11.8:ubi8   # [ppc64le and os.environ.get("BUILD_PLATFORM") == "linux-64"]

# CUDA 11.8 is not compatible with current compilers
c_compiler:                     # [win]
  - vs2019                      # [win]
cxx_compiler:                   # [win]
  - vs2019                      # [win]
c_compiler_version:             # [linux]
  - 13                          # [linux]
  - 11                          # [linux]
  - 12                          # [linux and ppc64le]
  - 13                          # [linux and not ppc64le]
cxx_compiler_version:           # [linux]
  - 13                          # [linux]
  - 11                          # [linux]
  - 12                          # [linux and ppc64le]
  - 13                          # [linux and not ppc64le]
fortran_compiler_version:       # [linux]
  - 13                          # [linux]
  - 11                          # [linux]
  - 12                          # [linux and ppc64le]
  - 13                          # [linux and not ppc64le]
```
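After adding this file, rerender the feedstock, e.g. locally with `conda-smithy rerender`, or by commenting `@conda-forge-admin, please rerender` on a PR; the CUDA 11.8 jobs will then be regenerated in CI.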
Due to changes in the pinning structure (i.e. which keys are zipped together), it's possible that further adaptations are necessary; you can ping conda-forge/core for that. Also, please let us know in this issue if your feedstock still needs to support CUDA 11.8 and why: further down the line we'll want to drop support in conda-forge-ci-setup as well, and knowing which feedstocks (if any) still need CUDA 11.8 will help guide the timing of that decision.