NVIDIA A30 Tensor Core GPU
Versatile compute acceleration for mainstream enterprise servers.
Manufacturer Part Number: 900-21001-0040-000
Specifications:
FP64 | 5.2 teraFLOPS
FP64 Tensor Core | 10.3 teraFLOPS
FP32 | 10.3 teraFLOPS
TF32 Tensor Core | 82 teraFLOPS | 165 teraFLOPS*
BFLOAT16 Tensor Core | 165 teraFLOPS | 330 teraFLOPS*
FP16 Tensor Core | 165 teraFLOPS | 330 teraFLOPS*
INT8 Tensor Core | 330 TOPS | 661 TOPS*
INT4 Tensor Core | 661 TOPS | 1321 TOPS*
Media engines | 1 optical flow accelerator (OFA); 1 JPEG decoder (NVJPEG); 4 video decoders (NVDEC)
GPU memory | 24GB HBM2
GPU memory bandwidth | 933GB/s
Interconnect | PCIe Gen4: 64GB/s; third-gen NVLink: 200GB/s**
Form factor | Dual-slot, full-height, full-length (FHFL)
Max thermal design power (TDP) | 165W
Multi-Instance GPU (MIG) | 4 GPU instances @ 6GB each; 2 GPU instances @ 12GB each; 1 GPU instance @ 24GB
Virtual GPU (vGPU) software support | NVIDIA AI Enterprise for VMware; NVIDIA Virtual Compute Server
* With sparsity
** NVLink Bridge for up to two GPUs
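The headline FP32 and FP64 figures above follow from the usual peak-throughput formula: cores × 2 FLOPs per cycle (one fused multiply-add) × clock rate. A minimal sketch, assuming the publicly documented A30 configuration (3584 FP32 CUDA cores, 1792 FP64 units, 1440 MHz boost clock), which is not stated on this sheet:

```python
# Sanity check of the sheet's peak FP32/FP64 numbers.
# Core counts and boost clock are assumptions from public
# GA100/A30 documentation, not taken from this spec table.
BOOST_CLOCK_HZ = 1.44e9  # assumed 1440 MHz boost clock
FP32_CORES = 3584        # assumed: 56 SMs x 64 FP32 cores
FP64_CORES = 1792        # assumed: 56 SMs x 32 FP64 units

def peak_tflops(cores, clock_hz, flops_per_cycle=2):
    """Peak throughput: cores x 2 FLOPs/cycle (FMA) x clock, in teraFLOPS."""
    return cores * flops_per_cycle * clock_hz / 1e12

print(f"FP32: {peak_tflops(FP32_CORES, BOOST_CLOCK_HZ):.1f} teraFLOPS")  # ~10.3
print(f"FP64: {peak_tflops(FP64_CORES, BOOST_CLOCK_HZ):.1f} teraFLOPS")  # ~5.2
```

The Tensor Core rows scale the same way but with many more FLOPs per cycle per unit, and the starred columns reflect the 2x throughput claimed with structured sparsity.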
This product is a special-order item; it takes longer to process and cannot be returned.