The AXE-6120IL is a 1U, 420 mm short-depth AI GPU server powered by a single Intel® Xeon® D-1700 processor. It provides one PCIe Gen3 x16 slot and one Gen3 x8 slot for FHFL GPUs, dual M.2 NVMe/SATA SSD bays, six smart fan modules with speed control, and redundant slim power supplies, delivering industry-leading reductions in TCO, deployment time, rack space, and data risk for secure, on-premises edge AI inference.
Value Proposition Highlights
- Lower TCO: Reduces costs by ≥20%* with efficient edge inference
- Faster Development: ADLINK EAAP streamlines integration, enabling ≥30%* faster AI deployment
- Space-Saving: Short-depth chassis and flexible I/O design save up to 50%* of rack space
- Reduced Risk: Keeps data on-site with no cloud dependency, minimizing the risk of data leaks
Notably, the AXE-6120IL is an NVIDIA-Certified System, delivering reliable, compatible, high-performance AI with seamless operation, rapid deployment, and rock-solid stability at the edge.
Furthermore, the AXE-6120IL accelerates the on-edge deployment and application of LLMs such as DeepSeek, Llama, and ChatGPT, enabling low-latency, secure AI services without cloud dependency, as illustrated in the sketch below.
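As a minimal sketch of what "no cloud dependency" looks like in practice, the snippet below queries an LLM hosted entirely on the edge server. It assumes an OpenAI-compatible inference server (for example, vLLM or llama.cpp in server mode) is already running on the AXE-6120IL; the endpoint URL, port, and model name are illustrative placeholders, not part of the product documentation.

```python
# Minimal sketch: querying an LLM served locally on the edge node.
# Assumes an OpenAI-compatible inference server is running on the AXE-6120IL.
# The endpoint, port, and model name below are hypothetical placeholders.
import requests

LOCAL_ENDPOINT = "http://localhost:8000/v1/chat/completions"  # local, on-prem server
MODEL_NAME = "llama-3-8b-instruct"  # placeholder model identifier


def ask_local_llm(prompt: str) -> str:
    """Send a chat completion request to the on-prem server; no data leaves the site."""
    response = requests.post(
        LOCAL_ENDPOINT,
        json={
            "model": MODEL_NAME,
            "messages": [{"role": "user", "content": prompt}],
            "max_tokens": 256,
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]


if __name__ == "__main__":
    print(ask_local_llm("Summarize today's production-line defect log."))
```

Because both the model and the request stay on the local network, latency is bounded by on-box inference rather than a round trip to a cloud API, and sensitive prompts never leave the premises.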
Target Vertical Applications