Tokyo, Japan – April 8, 2026 – At Japan IT Week Spring 2026 (Booth #W21-22), MSI presents a comprehensive portfolio of AI and enterprise infrastructure designed to support data-intensive workloads across modern IT environments. From AI training and inference to enterprise applications, MSI enables businesses to deploy right-sized compute with GPU-dense systems, modular NVIDIA MGX platforms, and OCP DC-MHS servers, helping simplify deployment, improve resource utilization, and accelerate AI adoption.
“AI is moving from experimentation into core enterprise infrastructure,” said Danny Hsu, General Manager of MSI’s Enterprise Platform Solutions. “This shift is changing how organizations design, deploy, and scale IT. MSI is focused on helping customers build infrastructure that can support AI as a long-term capability across their operations.”
Full-Spectrum AI Infrastructure for Every Workload
A full-spectrum AI infrastructure portfolio spans 4U GPU-dense systems, a 2U GPU platform, and a deskside AI workstation, enabling organizations to match compute resources to workloads from large-scale training to inference and local AI development.
The CG480-S5063 and CG290-S3063, built on NVIDIA MGX architecture, leverage a modular, standardized design that simplifies system integration and accelerates deployment, helping organizations reduce validation effort and achieve faster time-to-revenue. For customers seeking an alternative to NVIDIA MGX, the G4201(-HE) offers an entry-level enterprise platform with flexible PCIe expansion options for compatibility with existing IT environments, making it well suited to organizations beginning to integrate AI and data-intensive workloads. Together with the XpertStation WS300, built on NVIDIA DGX Station architecture, MSI delivers a flexible, multi-architecture portfolio that supports efficient AI scaling from local development through deployment to large-scale production.
The CG480-S5063 4U GPU server, based on NVIDIA MGX architecture and powered by dual Intel® Xeon® 6 processors, supports up to 8 NVIDIA RTX PRO 6000 Blackwell Server Edition or RTX PRO 4500 Blackwell Server Edition GPUs with high-bandwidth networking, enabling scalable performance for LLM training and large-scale generative AI workloads while accelerating deployment through the modular MGX design.
The CG290-S3063 2U GPU server, built on NVIDIA MGX architecture with a single Intel Xeon 6 processor, supports up to 4 NVIDIA RTX PRO 6000 Blackwell Server Edition or RTX PRO 4500 Blackwell Server Edition GPUs, delivering a compact, efficient platform for inference and distributed AI deployments, especially in space-constrained data centers and edge scenarios.
The XpertStation WS300, built on NVIDIA DGX Station architecture and powered by the NVIDIA GB300 Grace Blackwell Ultra Desktop Superchip, features a 784GB coherent memory pool and dual 400GbE networking, enabling developers to train, fine-tune, and run advanced AI models locally with data-center-class performance.
The G4201(-HE) 4U server, powered by dual 5th Gen Intel Xeon Scalable processors, supports up to 32 DDR5 DIMM slots and 8 PCIe cards, adopting an industry-standard architecture for compatibility with existing IT environments. It serves as an entry-level enterprise platform for organizations beginning to integrate AI and data-driven applications into established environments.
Modular Enterprise Platforms for Scalable IT Infrastructure
Built on DC-MHS (Data Center Modular Hardware System) with DC-SCM, these MSI enterprise servers are designed for AI-enabled enterprise, virtualization, and cloud workloads. By separating system management from the host, DC-SCM enables faster CPU transitions and reduces firmware development effort, while the modular DC-MHS design allows independent upgrades and shorter validation cycles. Based on Intel Xeon 6 and AMD EPYC™ 9005 processor platforms with high core density, and with DDR5 memory and NVMe storage scaling up to 32 DIMMs and 12 drives, these servers handle data-intensive workloads more efficiently, helping organizations shorten deployment cycles, improve system uptime, and scale AI-enabled services within existing IT environments.
The CX270-S5062(-HE) 2U server, powered by dual Intel Xeon 6 processors, supports up to 32 DDR5 DIMMs, 8 U.2 NVMe drives, and GPU expansion capability, enabling balanced performance for virtualization, AI inference, and mixed workloads.
The CX271-S3066(-HE) 2U server, based on a single Intel Xeon 6 processor, supports up to 16 DDR5 DIMMs, 8 U.2 NVMe drives, and GPU expansion, delivering balanced compute and storage performance for enterprise and data-driven workloads.
The CX171-S4056 1U server, powered by a single AMD EPYC 9005 processor, supports up to 24 DDR5 DIMMs and 12 U.2 NVMe drives, enabling space-efficient enterprise deployments with high memory capacity.