| Product | NVIDIA® Tesla™ V100 GPU Computing Accelerator - 16GB HBM2 - SXM2 NVLink | NVIDIA® Tesla™ V100 GPU Computing Accelerator - 32GB HBM2 - SXM2 NVLink |
| --- | --- | --- |
| **Main Specifications** | | |
| Product Series | Tesla V100 | Tesla V100 |
| Core Type | NVIDIA CUDA | NVIDIA CUDA |
| Host Interface | PCI Express 3.0 x16 | PCI Express 3.0 x16 |
| GPU Architecture | Volta | Volta |
| **Detailed Specifications** | | |
| Streaming Processor Cores | 5120 CUDA Cores | 5120 CUDA Cores |
| NVIDIA Tensor Cores | 640 Tensor Cores | 640 Tensor Cores |
| CoWoS HBM2 Stacked Memory Capacity | 16 GB | 32 GB |
| CoWoS HBM2 Stacked Memory Bandwidth | 900 GB/s | 900 GB/s |
| Max Memory Bandwidth | 900 GB/s | 900 GB/s |
| Peak Half-Precision Performance | 21.2 TeraFLOPS | 21.2 TeraFLOPS |
| Peak Single-Precision Performance | 15.7 TeraFLOPS | 15.7 TeraFLOPS |
| Peak Double-Precision Performance | 7.8 TeraFLOPS | 7.8 TeraFLOPS |
| Total NVLink Bandwidth | 300 GB/s | 300 GB/s |
| Peak Tensor Performance | 125 TeraFLOPS | 125 TeraFLOPS |
| Cooling | Passive | Passive |
| Max Graphics Card Power | 300 W | 300 W |
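As a rough sanity check on the peak-performance figures above, the single-precision and Tensor numbers follow from the core counts in the table, assuming the SXM2 V100's published 1530 MHz boost clock (not listed here), one fused multiply-add (2 FLOPs) per CUDA core per clock, and 128 FLOPs per Tensor Core per clock (a 4×4×4 FP16 matrix multiply-accumulate). This is a sketch under those assumptions, not part of the vendor specification:

$$
\text{FP32 peak} \approx 5120 \times 2\,\tfrac{\text{FLOP}}{\text{cycle}} \times 1.53\,\text{GHz} \approx 15.7\ \text{TFLOPS}
$$

$$
\text{Tensor peak} \approx 640 \times 128\,\tfrac{\text{FLOP}}{\text{cycle}} \times 1.53\,\text{GHz} \approx 125\ \text{TFLOPS}
$$

The double-precision figure is consistent with Volta's FP64 throughput being half the FP32 rate, i.e. roughly 7.8 TFLOPS.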