CUDA
Compute Unified Device Architecture (CUDA) is a parallel computing platform and application programming interface (API) model created by Nvidia. It allows software developers and software engineers to use a CUDA-enabled graphics processing unit (GPU) for general purpose processing – an approach termed GPGPU (general-purpose computing on graphics processing units). The CUDA platform is a software layer that gives direct access to the GPU's virtual instruction set and parallel computational elements, for the execution of compute kernels.
The CUDA platform is designed to work with programming languages such as C, C++, and Fortran.
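As an illustration of a compute kernel, here is a minimal CUDA C++ vector-addition program (a generic sketch, not specific to this cluster; file and variable names are arbitrary):

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Kernel: each thread adds one pair of elements.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);
    float *a, *b, *c;
    // Unified memory keeps the example short; explicit
    // cudaMalloc/cudaMemcpy transfers would also work.
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    int threads = 256;
    int blocks = (n + threads - 1) / threads;  // round up to cover all n elements
    vecAdd<<<blocks, threads>>>(a, b, c, n);
    cudaDeviceSynchronize();

    printf("c[0] = %f\n", c[0]);  // expect 3.0
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

Such a file would be compiled with `nvcc`, e.g. `nvcc vecadd.cu -o vecadd`.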
Usage
Version | Module name |
---|---|
10.2.89 | cuda/10.2.89 |
11.0.2 | cuda/11.0.2 |
11.1.0 | cuda/11.1.0 |
11.6.0 | cuda/11.6.0 |
cuda/11.6.0 is the default CUDA module. However, GPU-enabled applications are compiled with cuda/10.2.89, which is loaded as part of the solgpu or hawkgpu modules.
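A typical session might look like the following sketch (module names are taken from the table above; the source file name `vecadd.cu` is a placeholder, and the `module` command is only available on the cluster itself):

```shell
# Load the default CUDA toolchain (cuda/11.6.0 per the table above)
module load cuda/11.6.0

# Confirm that the nvcc compiler is now on PATH
nvcc --version

# Compile a CUDA source file (vecadd.cu is a hypothetical example)
nvcc vecadd.cu -o vecadd
```

To match the toolchain used for prebuilt GPU applications, one would instead load solgpu or hawkgpu, which brings in cuda/10.2.89.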
Available GPUs
There are three types of GPUs available on Sol and Hawk, each with a different compute capability (A100 and A40 GPUs will be available by Summer 2021).
GPU | Compute Capability | Compile option |
---|---|---|
GTX 1080 | 6.1 | -gencode arch=compute_61,code=sm_61 |
RTX 2080TI | 7.5 | -gencode arch=compute_75,code=sm_75 |
Tesla T4 | 7.5 | -gencode arch=compute_75,code=sm_75 |
A100 | 8.0 | -gencode arch=compute_80,code=sm_80 |
A40 | 8.6 | -gencode arch=compute_86,code=sm_86 |
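Because `-gencode` can be repeated, a single fat binary can be built that targets all of the architectures in the table above. A sketch (the source file name `app.cu` is a placeholder):

```shell
# Embed code for every GPU generation listed in the table;
# the runtime selects the matching version at launch.
nvcc app.cu -o app \
  -gencode arch=compute_61,code=sm_61 \
  -gencode arch=compute_75,code=sm_75 \
  -gencode arch=compute_80,code=sm_80 \
  -gencode arch=compute_86,code=sm_86
```

Note that the sm_80 (A100) and sm_86 (A40) targets require CUDA 11.0 and 11.1 or later, respectively, so they cannot be compiled with cuda/10.2.89.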