The NVIDIA Tesla P100 Server Graphics Card is designed for scientific and research applications. Its 3584 CUDA cores and 16GB of HBM2 VRAM, linked via a 4096-bit memory interface, deliver performance on the order of 9.3 TFLOPS at single precision, 18.7 TFLOPS at half precision, and 4.7 TFLOPS at double precision. This level of GPU performance allows the card to handle tasks previously relegated to large rackmount compute clusters. As this is a server GPU, no display outputs are provided. Additionally, the P100 uses a passive heatsink cooler. This design has no moving parts, which increases reliability and reduces power consumption; however, it depends on the server's internal airflow to cool the GPU.
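The quoted throughput figures follow from the core count, the GPU clock, and the FLOPs each core completes per cycle. The sketch below reproduces them, assuming the commonly published ~1303 MHz boost clock for the PCIe P100 (the clock value is an assumption, not stated above); FP16 runs at twice and FP64 at half the FP32 rate on this architecture.

```python
CUDA_CORES = 3584          # CUDA cores on the Tesla P100
BOOST_CLOCK_HZ = 1.303e9   # assumed ~1303 MHz boost clock (PCIe variant)
FLOPS_PER_CORE = 2         # one fused multiply-add = 2 FLOPs per cycle

# Theoretical peak throughput in TFLOPS
fp32 = CUDA_CORES * FLOPS_PER_CORE * BOOST_CLOCK_HZ / 1e12
fp16 = fp32 * 2            # half precision runs at 2x the FP32 rate
fp64 = fp32 / 2            # double precision runs at 1/2 the FP32 rate

print(f"FP32: {fp32:.1f} TFLOPS")  # ~9.3
print(f"FP16: {fp16:.1f} TFLOPS")  # ~18.7
print(f"FP64: {fp64:.1f} TFLOPS")  # ~4.7
```

These peaks assume every core issues a fused multiply-add each cycle at the boost clock; sustained real-world throughput is lower.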
18.7 TFLOPS of half-precision (FP16), 9.3 TFLOPS of single-precision, and 4.7 TFLOPS of double-precision performance power new possibilities in deep learning and HPC workloads.
Compute and data are integrated on the same package using Chip-on-Wafer-on-Substrate (CoWoS) with HBM2 technology, delivering greater memory performance than the previous-generation architecture.