Glossary List

GPU Hosting

The use of graphics processing units (GPUs) in a hosted environment to run large-scale AI computations and high-performance computing tasks.

Green Data Center

A data center designed to minimize environmental impact through the use of renewable energy sources, efficient cooling technologies, and sustainable infrastructure.

AI Data Center

A data center specifically optimized to support artificial intelligence workloads, including training and inference. These centers house GPUs or other AI-specific hardware for intensive computing tasks.

Training

Training in AI involves feeding data to a model and adjusting its parameters to learn patterns, enabling it to make accurate predictions or decisions on new data.
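
To make this concrete, here is a minimal sketch of a training loop: a one-parameter linear model fit with gradient descent. The data, learning rate, and loss are hypothetical stand-ins for the far larger datasets and models used in practice.

```python
# Minimal training sketch: adjust parameter w so predictions w * x
# fit the data (the true relationship here is y = 2x).
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # (input, target) pairs

w = 0.0    # model parameter, starts untrained
lr = 0.05  # learning rate

for epoch in range(100):
    for x, y in data:
        pred = w * x               # forward pass: make a prediction
        grad = 2 * (pred - y) * x  # gradient of the squared error w.r.t. w
        w -= lr * grad             # adjust the parameter to reduce the error

print(f"learned w = {w:.3f}")  # approaches 2.0, the pattern in the data
```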

Tier 3

A Tier 3 data center offers 99.982% expected uptime, with redundant power and cooling systems that allow maintenance to be performed without taking operations offline.
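
That availability figure implies a concrete downtime budget. A quick back-of-the-envelope calculation:

```python
# Annual downtime allowed by Tier 3 availability (99.982%).
availability = 0.99982
minutes_per_year = 365 * 24 * 60  # 525,600 minutes

downtime = (1 - availability) * minutes_per_year
print(f"{downtime:.1f} minutes/year")  # about 94.6 minutes (~1.6 hours)
```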

LLM

LLM (Large Language Model) is an AI model trained on vast amounts of text data, capable of understanding, generating, and predicting human-like text across many languages and contexts.

East-West Data Traffic

East-West traffic refers to data that moves laterally within a data center or corporate network. This includes server-to-server communications, data replication, backups, and inter-process communications.

DLC

DLC (Direct Liquid Cooling), often implemented as direct-to-chip cooling, circulates liquid coolant through cold plates mounted directly on processors, removing heat efficiently and improving cooling performance in high-density data centers.

PUE

PUE (Power Usage Effectiveness) measures a data center’s energy efficiency as the ratio of total facility energy to the energy consumed by IT equipment. Lower PUE means higher efficiency; a PUE of 1.0 would mean every watt goes to IT equipment.
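
The calculation itself is simple: divide total facility energy by IT energy. A short sketch with hypothetical meter readings:

```python
# PUE = total facility energy / IT equipment energy.
total_facility_kwh = 1_200_000  # IT plus cooling, power distribution, lighting
it_equipment_kwh = 1_000_000    # servers, storage, network gear

pue = total_facility_kwh / it_equipment_kwh
print(f"PUE = {pue:.2f}")  # 1.20 here; 1.0 would mean all energy reaches IT
```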

HPC

HPC (High-Performance Computing) uses powerful servers and GPUs in data centers to process complex tasks like simulations, AI training, and big data analysis at high speed.

Inference

Inference in AI is the process where trained models make predictions or decisions on new data, applying learned patterns from the training phase to real-world tasks.
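
Continuing the hypothetical linear model from the Training entry: once the parameter is learned, inference applies it to inputs the model has never seen, with no further updates.

```python
# Inference: apply a trained model's parameters to new data.
w = 2.0  # parameter learned during training (hypothetical value)

def predict(x):
    return w * x  # same forward pass as in training, but no weight updates

for x in [4.0, 5.0, 10.0]:  # unseen inputs
    print(x, "->", predict(x))
```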

Free Cooling

Free cooling in data centers uses naturally cold outside air or water to reduce the energy consumed by cooling systems, lowering PUE and improving overall energy efficiency.

Checkpointing

Checkpointing in AI training saves the model’s state at regular intervals so that progress can be recovered after an interruption, reducing the time lost to system failures or crashes.
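
A minimal sketch of the idea, using a plain JSON file as the checkpoint (real training frameworks provide their own checkpoint formats; the file name and loop here are hypothetical):

```python
import json
import os

CKPT = "checkpoint.json"  # hypothetical checkpoint file

# Resume from the last saved state if a checkpoint exists.
if os.path.exists(CKPT):
    with open(CKPT) as f:
        state = json.load(f)
else:
    state = {"step": 0, "w": 0.0}

while state["step"] < 1000:
    state["w"] += 0.001           # stand-in for one real training update
    state["step"] += 1
    if state["step"] % 100 == 0:  # save the model state at intervals
        with open(CKPT, "w") as f:
            json.dump(state, f)   # after a crash, training resumes from here
```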

GPU Cloud

GPU Cloud provides remote access to powerful GPUs for AI, ML, and HPC tasks, enabling scalable, high-performance computing without on-premise hardware.

RDHx

RDHx (Rear Door Heat Exchanger) cools equipment with a heat exchanger mounted on the rear door of server racks, reducing the need for traditional room air conditioning and improving efficiency.

AI Factory

An AI Factory is a data center optimized for AI workloads, using high-performance GPUs, efficient cooling, and scalable infrastructure for AI training and inference tasks.

Multinode AI Workload

A multinode AI workload distributes AI computational tasks across multiple computing nodes or servers to improve performance, scalability, and efficiency.
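
As a rough single-machine analogy, the sketch below splits a batch of work across several Python worker processes and combines the partial results; real multinode workloads coordinate across servers with frameworks such as MPI or torch.distributed.

```python
from multiprocessing import Pool

def process_shard(shard):
    # Stand-in for the computation one node performs on its slice of the data.
    return sum(x * x for x in shard)

if __name__ == "__main__":
    data = list(range(1_000_000))
    shards = [data[i::4] for i in range(4)]  # split the workload four ways

    with Pool(processes=4) as pool:          # one worker per "node"
        partials = pool.map(process_shard, shards)

    print(sum(partials))                     # combine the partial results
```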

North-South Data Traffic

North-South traffic refers to data that moves between an internal network and external networks. This includes traffic that flows from clients (such as users or external applications) to servers in a data center and vice versa.
