GreenNode HGX H100

NVIDIA Partner for Compute and AI

The NVIDIA HGX H100, tailored for large-scale HPC and AI tasks

NVIDIA H100

| Specification | H100 SXM | H100 PCIe |
|---|---|---|
| FP64 | 34 TFLOPS | 26 TFLOPS |
| FP64 Tensor Core | 67 TFLOPS | 51 TFLOPS |
| FP32 | 67 TFLOPS | 51 TFLOPS |
| TF32 Tensor Core | 989 TFLOPS* | 756 TFLOPS* |
| BFLOAT16 Tensor Core | 1,979 TFLOPS* | 1,513 TFLOPS* |
| FP16 Tensor Core | 1,979 TFLOPS* | 1,513 TFLOPS* |
| FP8 Tensor Core | 3,958 TFLOPS* | 3,026 TFLOPS* |
| INT8 Tensor Core | 3,958 TOPS* | 3,026 TOPS* |
| GPU memory | 80GB | 80GB |
| GPU memory bandwidth | 3.35TB/s | 2TB/s |
| Decoders | 7 NVDEC, 7 JPEG | 7 NVDEC, 7 JPEG |
| Max thermal design power (TDP) | Up to 700W (configurable) | 300–350W (configurable) |
| Multi-Instance GPUs | Up to 7 MIGs @ 10GB each | Up to 7 MIGs @ 10GB each |
| Form factor | SXM | PCIe, dual-slot, air-cooled |
| Interconnect | NVLink: 900GB/s; PCIe Gen5: 128GB/s | NVLink: 600GB/s; PCIe Gen5: 128GB/s |
| Server options | NVIDIA HGX H100 partner and NVIDIA-Certified Systems with 4 or 8 GPUs; NVIDIA DGX H100 with 8 GPUs | Partner and NVIDIA-Certified Systems with 1–8 GPUs |
| NVIDIA AI Enterprise | Add-on | Included |

* With sparsity.
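The table above can be sanity-checked programmatically. A minimal Python sketch (values copied from the table; starred figures are sparse Tensor Core throughput) comparing the SXM and PCIe variants:

```python
# Compare H100 SXM vs PCIe using the datasheet figures quoted on this page.
# Units: TFLOPS for compute rows, TB/s for memory bandwidth.
SPECS = {
    "FP8 Tensor Core": {"SXM": 3958, "PCIe": 3026},
    "FP16 Tensor Core": {"SXM": 1979, "PCIe": 1513},
    "TF32 Tensor Core": {"SXM": 989, "PCIe": 756},
    "Memory bandwidth (TB/s)": {"SXM": 3.35, "PCIe": 2.0},
}

def sxm_advantage(metric: str) -> float:
    """Return the SXM/PCIe ratio for a given metric, rounded to 2 decimals."""
    row = SPECS[metric]
    return round(row["SXM"] / row["PCIe"], 2)

for metric in SPECS:
    print(f"{metric}: SXM is {sxm_advantage(metric)}x PCIe")
```

The SXM part leads the PCIe card by roughly 1.3x on compute throughput and close to 1.7x on memory bandwidth, which is why SXM is the default choice for large distributed training jobs.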
 
 

Why HGX H100?


Fast, flexible infrastructure

GreenNode offers a Kubernetes-native cloud for optimal performance, eliminating infrastructure overhead and handling heavy Kubernetes tasks for seamless workloads.

HGX H100 for AI inference

Enjoy highly configurable compute options with responsive auto-scaling, tailoring AI inference workloads for optimal scaling and cost efficiency.

Easy workload migration

GreenNode is optimized for NVIDIA GPU accelerated workloads, allowing effortless migration of existing workloads with minimal changes, whether using SLURM or container-forward solutions.

Designed for AI model training

Scale up your model training with GreenNode's modern infrastructure. Purpose-built for AI/ML and HPC challenges, it delivers performance and cost savings through bare-metal Kubernetes, high-capacity data center networks, and more.

Superior networking architecture

Benefit from GreenNode's HGX H100 distributed training clusters featuring a rail-optimized design with NVIDIA Quantum-2 InfiniBand. This provides 3.35Tbps of GPUDirect bandwidth per node for unparalleled networking performance.

Deployment support and expertise

GreenNode simplifies on-prem deployments with everything needed out of the box for optimized distributed training at scale. Leveraging industry-leading tools like Determined.AI and SLURM, GreenNode provides access to a team of AI engineers at no extra cost for additional support.
 
 

Testimonials

Div Garg, CEO of MultiOn

"Accessing high-demand resources like NVIDIA H100 GPUs was difficult for an AI startup. Partnering with GreenNode has helped us overcome this challenge: we have instant access to H100 GPUs and are able to scale our H100 GPU infrastructure quickly.

This enables us to continually fine-tune our model and thus enhance our AI agent product to meet market demand and adoption. Their flexible payment terms are also crucial in keeping our costs and cash flow manageable.

Last but not least, we appreciate their speedy technical support when we want to upgrade our Internet bandwidth or need a shared filesystem for our dataset."


Khoa Tran, CEO of SongGen.AI

"Partnering with GreenNode has been a pivotal decision for our music generation app.

Their AI engineering team brought a depth of expertise and experience that our small team lacked. Their exceptional technical support and proficiency in infrastructure—especially in multi-node setups—enabled us to halve our training time.

Their contribution has significantly accelerated our development process. Big thanks to the GreenNode team."


Quynh Tran, Country Head, Fuse Vietnam

“GreenNode brought exactly what we needed: deep regional expertise in OCR/IDP, flexible pricing, and a hands-on team that understands the pace and challenges of scaling a startup. Their platform is helping Fuse streamline complex document workflows as we continue to scale across Vietnam.”


Chu Hong Hanh, Head of Innovation Lab, ACB

"GreenNode's IDP OCR solution is a breakthrough in OCR technology. It not only enables us to automatically extract information from thousands of complex banking forms — in a fully automated manner — but also allows for the automatic classification of documents from lengthy text collections. This significantly reduces the time required for processing banking documents, which are typically very complex, by more than 90%. 

Currently, we rely heavily on the IDP OCR solution for automation activities, with over 150 million document processing instances each year, saving the Bank hundreds of billions of VND."

 

FAQs

What performance benefits does the HGX H100 offer?

The HGX H100 offers up to 7x better efficiency in HPC applications and accelerates AI training by up to 9x. Its rail-optimized design with NVIDIA Quantum-2 InfiniBand ensures efficient in-network collective operations, providing up to 3.35Tbps of GPUDirect bandwidth per node.

What are the key innovations of the NVIDIA H100 GPU?

The NVIDIA H100 GPU introduces several key innovations:

  • Fourth-generation Tensor Cores: These cores excel at matrix computations, significantly enhancing the efficiency of a wide range of AI and HPC workloads.
  • Transformer Engine: The H100 GPU incorporates a new Transformer Engine, delivering remarkable speed improvements. It can achieve up to 9x faster AI training and up to 30x faster AI inference compared to the prior-generation A100 GPU, particularly benefiting large language models.
  • NVLink Network Interconnect: The GPU features a new NVLink Network interconnect, enabling seamless GPU-to-GPU communication. It can connect up to 256 GPUs across multiple compute nodes, facilitating efficient data exchange and parallel processing.
  • Secure MIG (Multi-Instance GPU): Secure MIG partitions the GPU into isolated instances, optimizing Quality of Service (QoS) for smaller workloads. This ensures that different tasks running on the GPU do not interfere with each other, enhancing overall performance and security.
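As an illustration of the MIG point above, partitioning an 80GB H100 into isolated 10GB instances is typically done with `nvidia-smi`. This is a hedged sketch, not GreenNode-specific guidance: it requires root, a MIG-capable driver, and no running workloads on the GPU, and the `1g.10gb` profile name should be confirmed against what your driver actually lists.

```shell
# Enable MIG mode on GPU 0 (assumed index; may require a GPU reset).
nvidia-smi -i 0 -mig 1

# List the GPU instance profiles this driver/GPU exposes,
# then carve two 1g.10gb instances (profile name assumed) and
# create a compute instance in each with -C.
nvidia-smi mig -lgip
nvidia-smi mig -i 0 -cgi 1g.10gb,1g.10gb -C

# Each MIG instance now appears with its own UUID and can be
# targeted independently (e.g. via CUDA_VISIBLE_DEVICES).
nvidia-smi -L
```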

Can I migrate my existing workloads to GreenNode?

Yes, GreenNode is optimized for NVIDIA GPU-accelerated workloads out of the box, allowing for easy migration of existing workloads with minimal to no changes. Whether you use SLURM, Determined.AI, Pachyderm, or container-forward solutions, we provide solutions for a seamless transition.
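For SLURM users, a migrated multi-node training job usually needs little more than a batch script like the following. This is a generic sketch: the partition name, node count, and training entrypoint are placeholders, not GreenNode-specific values.

```shell
#!/bin/bash
# Hypothetical SLURM batch script for a 2-node, 16-GPU H100 training job.
#SBATCH --job-name=llm-train
#SBATCH --partition=h100          # assumed partition name; check your cluster
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=8       # one task per GPU on an 8-GPU HGX node
#SBATCH --gpus-per-node=8
#SBATCH --time=24:00:00

# srun launches one rank per GPU; NCCL discovers the InfiniBand fabric
# automatically on GPUDirect-enabled nodes.
srun python train.py --config config.yaml
```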

Which form factors is the H100 available in?

The H100 is available in SXM, PCIe, and NVL form factors. Currently, GreenNode provides the SXM and PCIe dual-slot air-cooled form factors.

How much GPU memory does the HGX H100 provide?

The HGX H100 provides 80GB of GPU memory, consistent across both the H100 SXM and H100 PCIe form factors.

What is the GPU memory bandwidth?

The GPU memory bandwidth varies: 3.35TB/s for the H100 SXM and 2TB/s for the H100 PCIe.

Is the TDP configurable?

Yes, the TDP is configurable: up to 700W for the H100 SXM, while the H100 PCIe ranges between 300–350W.

Which interconnect options does the HGX H100 support?

The H100 SXM supports NVLink at 900GB/s and the H100 PCIe supports NVLink at 600GB/s; both support PCIe Gen5 at 128GB/s.
