
Compute Unified Device Architecture (CUDA) is a parallel computing platform and application programming interface (API) model created by Nvidia. It allows software developers to use a CUDA-enabled graphics processing unit (GPU) for general-purpose processing, an approach termed GPGPU (general-purpose computing on graphics processing units). The CUDA platform is a software layer that gives direct access to the GPU's virtual instruction set and parallel computational elements for the execution of compute kernels.

The CUDA platform is designed to work with programming languages such as C, C++, and Fortran. 
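
As a brief illustration of how a compute kernel is written and launched, the following CUDA C++ program adds two vectors on the GPU. This is a minimal sketch, not a site-provided example; the file name vecadd.cu is illustrative, and it can be compiled with nvcc after a cuda module is loaded (e.g., nvcc vecadd.cu -o vecadd).

#include <cstdio>
#include <cuda_runtime.h>

// Kernel: each thread adds one element of a and b into c.
__global__ void vecAdd(const float *a, const float *b, float *c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        c[i] = a[i] + b[i];
}

int main()
{
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);

    float *a, *b, *c;
    // Managed (unified) memory is accessible from both the host and the device.
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);

    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    // Launch enough 256-thread blocks to cover all n elements.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(a, b, c, n);
    cudaDeviceSynchronize();

    printf("c[0] = %f\n", c[0]);   // expect 3.0

    cudaFree(a);
    cudaFree(b);
    cudaFree(c);
    return 0;
}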

Version     module
10.2.89     cuda/10.2.89
11.0.2      cuda/11.0.2
11.1.0      cuda/11.1.0

cuda/11.1.0 is the default CUDA module; however, GPU-enabled applications are compiled with cuda/10.2.89, which is loaded as part of the solgpu or hawkgpu modules.

Available GPUs

There are three types of GPUs available on Sol and Hawk, with different compute capabilities:

GPU           Compute Capability    Compile option
GTX 1080      6.1                   -gencode arch=compute_61,code=sm_61
RTX 2080 Ti   7.5                   -gencode arch=compute_75,code=sm_75
Tesla T4      7.5                   -gencode arch=compute_75,code=sm_75
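
To check which compute capability the GPU on a given node reports (and therefore which -gencode option to pass to nvcc), a small query program such as the sketch below can be used. The file name query.cu and the compile line in the comment are illustrative assumptions, not a site-provided tool.

// Query the compute capability of each visible GPU.
// Example compile line (illustrative): nvcc query.cu -o query
#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    int count = 0;
    cudaGetDeviceCount(&count);

    for (int dev = 0; dev < count; ++dev) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, dev);
        // prop.major and prop.minor give the compute capability,
        // e.g. 7.5 for a Tesla T4 or RTX 2080 Ti, 6.1 for a GTX 1080.
        printf("Device %d: %s, compute capability %d.%d\n",
               dev, prop.name, prop.major, prop.minor);
    }
    return 0;
}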
