SST – Smart Sustainable Technologies

NVIDIA HGX A100 AI Computing Platform

Call for Price


SKU: P1001 SKU 230

Additional information

Availability: In Stock

Description

The NVIDIA A100 80GB PCIe is a next-generation data center GPU built on the groundbreaking Ampere architecture, designed to accelerate AI, high-performance computing (HPC), and data analytics workloads at scale. With 80GB of high-bandwidth HBM2 memory and up to 2 TB/s memory bandwidth, the A100 enables faster training, more powerful inference, and massive throughput across the most demanding applications.
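The quoted "up to 2 TB/s" follows from the A100's very wide HBM memory bus. A rough sketch of the arithmetic (the 5,120-bit bus width is public; the per-pin data rates used here are approximations chosen to bracket the range in the spec table below):

```python
# Back-of-envelope HBM bandwidth: bus width (bits) x per-pin rate (Gbps) / 8.
# Per-pin rates are approximate; they bracket the 1,555-2,039 GB/s range
# quoted for the 40 GB (HBM2) and 80 GB (HBM2e) A100 variants.

def bandwidth_gbs(bus_width_bits: int, gbps_per_pin: float) -> float:
    """Peak memory bandwidth in GB/s from bus width and per-pin data rate."""
    return bus_width_bits * gbps_per_pin / 8

print(round(bandwidth_gbs(5120, 2.43)))   # roughly 1,555 GB/s
print(round(bandwidth_gbs(5120, 3.186)))  # roughly 2,039 GB/s
```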

Featuring Multi-Instance GPU (MIG) technology, a single A100 GPU can be partitioned into up to seven separate GPU instances, optimizing resource utilization across multiple users or workloads. Whether you’re developing deep learning models, running simulation-based research, or scaling cloud-based services, the A100 PCIe delivers unmatched performance, reliability, and scalability.
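The partitioning logic above can be sketched in a few lines. This is a simplified model, not NVIDIA's tooling: profile names follow the published "slices g. memory gb" convention for the 80 GB model, and the slice counts are illustrative of how seven compute slices divide among instances.

```python
# Sketch of how MIG divides an A100 80GB into isolated GPU instances.
# Profile names and sizes are illustrative, modeled on the 80 GB variant.

MIG_PROFILES = {
    "1g.10gb": {"compute_slices": 1, "memory_gb": 10},
    "2g.20gb": {"compute_slices": 2, "memory_gb": 20},
    "3g.40gb": {"compute_slices": 3, "memory_gb": 40},
    "7g.80gb": {"compute_slices": 7, "memory_gb": 80},
}

TOTAL_SLICES = 7  # an A100 exposes up to seven compute slices

def max_instances(profile: str) -> int:
    """Upper bound on instances of one profile, limited by compute slices."""
    return TOTAL_SLICES // MIG_PROFILES[profile]["compute_slices"]

print(max_instances("1g.10gb"))  # seven separate 10 GB instances
```

In practice each instance gets its own memory, cache, and compute slice, so one user's workload cannot starve another's, which is what makes the seven-way split useful for multi-tenant serving.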

Specifications

Product Overview
Architecture: NVIDIA Ampere
Use Cases: AI Training, AI Inference, HPC, Analytics
GPU Instances (MIG): Up to 7 per GPU

Performance
FP32 (Single-Precision): 19.5 TFLOPS
FP64 (Double-Precision): 9.7 TFLOPS
Tensor Float 32 (TF32): 156 TFLOPS (312 TFLOPS w/ sparsity)
BFloat16: 312 TFLOPS (624 TFLOPS w/ sparsity)
FP16 Tensor Core: 312 TFLOPS (624 TFLOPS w/ sparsity)
INT8 Tensor Core: 624 TOPS (1,248 TOPS w/ sparsity)

Memory
GPU Memory (HBM2): 40 GB / 80 GB
Memory Bandwidth: 1,555–2,039 GB/s

Interconnect
NVLink / NVLink Bridge: 600 GB/s (SXM) / 64 GB/s over PCIe 4.0
PCI Express: PCIe Gen 4

Form Factor & Power
Form Factors: PCIe dual-slot / SXM / HGX
Max TDP: 300 W–400 W (up to 500 W in custom SXM CTS)

Launch & Architecture
Launch Date: May 14, 2020
CUDA Compute Capability: 8.0
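A pattern worth noting in the table: every "w/ sparsity" figure is exactly double the dense one, because Ampere Tensor Cores can skip half the multiply-accumulates when weights follow a 2:4 structured-sparsity pattern. A quick check of that relationship:

```python
# Dense Tensor Core throughput from the spec table above.
# Ampere's 2:4 structured sparsity doubles each figure.

dense_throughput = {
    "TF32": 156,   # TFLOPS
    "BF16": 312,   # TFLOPS
    "FP16": 312,   # TFLOPS
    "INT8": 624,   # TOPS
}

sparse = {fmt: 2 * tflops for fmt, tflops in dense_throughput.items()}
print(sparse["TF32"])  # 312, matching the sparsity figure in the table
```

The 2x is a hardware ceiling; real-world speedups depend on pruning the model to the 2:4 pattern without accuracy loss.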