2805 Bowers Ave, Santa Clara, CA 95051 | 408-730-2275
sales@colfax-intl.com

Colfax CX2265i-NVMe4-S16-X8 2U Rackmount Server

 
 
  • 2x 3rd Gen Intel® Xeon® Scalable Processors
  • 32x DIMMs Support DDR4 RDIMM/LRDIMM and Intel® Optane™ Persistent Memory 200 Series Modules
  • 16x 2.5" Gen4 U.2 NVMe Drive Bays
  • Intel® DSG System Debug Log Advisor (SDLA)

Hardware Features

  • 2x 3rd Gen Intel® Xeon® Scalable Processors
  • Intel C621A Chipset
  • 32x DIMMs Support DDR4 RDIMM/LRDIMM and Intel® Optane™ Persistent Memory 200 Series Modules
  • 16x 2.5" Gen4 U.2 NVMe Drive Bays
  • 2x M.2 NVMe SSDs
  • Riser Slot #1 supports the following Riser Card options:
    - Three PCIe* slot riser card supporting (one) - FH/FL single-width slot (x16) + (one) - FH/FL single-width slot (x8) + (one) - FH/HL single-width slot (x8)
    - Two PCIe* slot riser card supporting (one) - FH/FL double-width slot (x16) + (one) - FH/HL single-width slot (x16)
    - NVMe* riser card supporting (one) – HL or FL single-width slot (x16) + (two) - x8 PCIe* NVMe* SlimSAS* connectors, each with a re-timer
  • Riser Slot #2 supports the following Riser Card options:
    - Three PCIe* slot riser card supporting (one) - FH/FL single-width slot (x16) + (one) - FH/FL single-width slot (x8) + (one) - FH/HL single-width slot (x8)
    - Two PCIe* slot riser card supporting (one) - FH/FL double-width slot (x16) + (one) - FH/HL single-width slot (x16)
  • Riser Slot #3 supports the following Riser Card options:
    - Two PCIe* slot riser card supporting (two) LP/HL single-width slots (x16)
  • 1x OCP 3.0 x16 PCIe 4.0 Mezzanine Slot Supports Intel® Ethernet Network Adapters
  • Integrated Video Controller
  • Server Management:
    - Integrated Baseboard Management Controller (BMC)
    - Intelligent Platform Management Interface (IPMI) 2.0 Compliant
    - Redfish* Compliant
    - Support for Intel® Data Center Manager (DCM)
    - Support for Intel® Server Debug and Provisioning Tool (SDPTool)
    - Dedicated RJ45 1 GbE Management Port
    - Light Guided Diagnostics
  • Intel® DSG System Debug Log Advisor (SDLA)
    - Enables customers to quickly and easily identify and resolve common server support issues on their own
  • 1x 1300W / 1600W / 2100W AC 80+ Titanium Efficiency Power Supply*
    * The system can have up to two power supply modules installed, supporting the following power configurations: 1+0, 1+1 redundant power, and 2+0 combined power
  • Optional Intel® Trusted Platform Module 2.0


Optional Features

Rack Mount Kit Options

  • Value Rack Mount Rail Kit (CYPHALFEXTRAIL):
    - 1U, 2U compatible
    - Tool-less chassis attachment
    - Tools required to attach rails to rack
    - Rack installation front and rear post distance adjustment from 660 mm to 838 mm
    - 560 mm travel distance
    - Half extension from rack
    - Support for front cover removal and fan replacement
    - 31 kg (68.34 lbs.) maximum supported weight
    - No Cable Management Arm support
  • Premium Rail Kit with Cable Management Arm (CMA) Support (CYPFULLEXTRAIL):
    - 1U, 2U compatible
    - Tool-less installation
    - Rack installation front and rear post distance adjustment from 623 mm to 942 mm
    - 820 mm travel distance
    - Full extension from rack
    - 31 kg (68.34 lbs.) maximum supported weight
    - Support for Cable Management Arm AXXCMA2


PCIe Add-in Card Support

The server system supports a variety of riser card options for add-in card support as well as to enhance the base feature set of the system. These riser cards are available as accessory options for the server system. The system provides concurrent support for up to three PCIe riser cards, which together support up to eight PCIe add-in cards.

PCI Express Bifurcation

The server system supports riser cards through riser slots identified as Riser Slot #1, Riser Slot #2, and Riser Slot #3. The PCIe* data lanes for Riser Slot #1 are supported by CPU 0. The PCIe* data lanes for Riser Slot #2 and Riser Slot #3 are supported by CPU 1. A dual-processor configuration is required when using Riser Slot #2 or Riser Slot #3.

The system supports the following PCIe bifurcation options (see the lane-count sketch after this list):

  • Add-in card slot 1 in 3-Slot PCIe* riser card (iPC – CYP2URISER1STD) for Riser Slot #1:
    - x16/x8x8/x8x4x4/x4x4x8/x4x4x4x4
  • Add-in card slot 1 or slot 2 in 2-Slot PCIe* riser card (iPC – CYP2URISER1DBL) for Riser Slot #1:
    - x16/x8x8/x8x4x4/x4x4x8/x4x4x4x4
  • Add-in card slot 1 in 3-Slot PCIe* riser card (iPC – CYP2URISER2STD) for Riser Slot #2:
    - x16/x8x8/x8x4x4/x4x4x8/x4x4x4x4
  • Add-in card slot 1 or slot 2 in 2-Slot PCIe* riser card (iPC – CYP2URISER2DBL) for Riser Slot #2:
    - x16/x8x8/x8x4x4/x4x4x8/x4x4x4x4
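
Each bifurcation option above splits a riser slot's x16 link into smaller links whose lane counts sum to 16. The following minimal Python sketch (not vendor software; the per-lane rate is an assumed PCIe 4.0 figure) parses an option string and estimates per-link bandwidth:

    # Illustrative sketch only: parse a bifurcation option such as "x8x4x4" and
    # estimate per-link PCIe 4.0 throughput. Assumes 16 GT/s per lane with
    # 128b/130b encoding (~1.97 GB/s per lane, per direction); real throughput
    # is somewhat lower after protocol overhead.
    import re

    PCIE4_GB_PER_LANE = 16e9 * (128 / 130) / 8 / 1e9   # ~1.97 GB/s per lane

    def parse_bifurcation(option: str, slot_lanes: int = 16) -> list[int]:
        """Split e.g. 'x8x4x4' into [8, 4, 4] and verify the lanes add up."""
        widths = [int(w) for w in re.findall(r"x(\d+)", option)]
        if sum(widths) != slot_lanes:
            raise ValueError(f"{option} does not use exactly {slot_lanes} lanes")
        return widths

    for option in ("x16", "x8x8", "x8x4x4", "x4x4x8", "x4x4x4x4"):
        links = parse_bifurcation(option)
        rates = ", ".join(f"x{w} ~{w * PCIE4_GB_PER_LANE:.1f} GB/s" for w in links)
        print(f"{option}: {rates}")

At roughly 2 GB/s per lane and direction, a full x16 link works out to about 31.5 GB/s, consistent with the "up to 64 GB/s" figure quoted later for the x32 riser slots.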


Three-Slot PCIe* Riser Card for Riser Slot #1 (iPC – CYP2URISER1STD)

The three-slot PCIe* riser card option supports:

  • One FH/FL single-width add-in card slot (x16 electrical, x16 mechanical)
  • One FH/FL single-width add-in card slot (x8 electrical, x16 mechanical)
  • One FH/HL single-width add-in card slot (x8 electrical, x8 mechanical)

Two-Slot PCIe* Riser Card for Riser Slot #1 (iPC – CYP2URISER1DBL)

The two-slot PCIe* riser card option supports:

  • One FH/FL double-width slot (x16 electrical, x16 mechanical)
  • One FH/HL single-width slot (x16 electrical, x16 mechanical)

PCIe* NVMe* Riser Card for Riser Slot #1 (iPC – CYP2URISER1RTM)

The PCIe* NVMe* riser card option supports:

  • One HL or FL single-width slot (x16 electrical, x16 mechanical)
  • Two x8 PCIe* NVMe* SlimSAS* connectors

Three-Slot PCIe* Riser Card for Riser Slot #2 (iPC – CYP2URISER2STD)

The three-slot PCIe* riser card option supports:

  • One FH/FL single-width add-in card slot (x16 electrical, x16 mechanical)
  • One FH/FL single-width add-in card slot (x8 electrical, x16 mechanical)
  • One FH/HL single-width add-in card slot (x8 electrical, x8 mechanical)

Two-Slot PCIe* Riser Card for Riser Slot #2 (iPC – CYP2URISER2DBL)

The two-slot PCIe* riser card option supports:

  • One FH/FL double-width slot (x16 electrical, x16 mechanical)
  • One FH/HL single-width slot (x16 electrical, x16 mechanical)

Two-Slot PCIe* Riser Card for Riser Slot #3 (iPC – CYP2URISER3STD)

The two-slot PCIe* riser card option supports:

  • Two LP/HL single-width slots (x16 mechanical, x8 electrical)


Intel® Trusted Platform Module (TPM) 2.0

A TPM is a hardware-based security device that addresses growing concerns about boot-process integrity and offers better data protection. TPM protects the system start-up process by ensuring it is tamper-free before releasing system control to the operating system. A TPM device provides secured storage for data such as security keys and passwords. In addition, a TPM device has encryption and hash functions.

AXXTPMENC8 implements TPM as per the TPM PC Client specification revision 2.0 from the Trusted Computing Group (TCG).
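
As a rough illustration of the hash functions a TPM uses for boot integrity, each measured boot component is folded into a Platform Configuration Register (PCR) by an extend operation, new_PCR = SHA-256(old_PCR || digest). The Python sketch below only models that concept; it does not communicate with an actual TPM device such as the AXXTPMENC8 module, and the stage names are hypothetical:

    # Conceptual model of a TPM 2.0 PCR extend over a SHA-256 bank; illustrative
    # only, no real TPM hardware is accessed.
    import hashlib

    def pcr_extend(pcr: bytes, measurement: bytes) -> bytes:
        """new_PCR = SHA-256(old_PCR || SHA-256(measurement))"""
        return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()

    pcr = bytes(32)  # PCRs start as all zeros after reset
    for stage in (b"firmware", b"bootloader", b"os-kernel"):   # hypothetical stages
        pcr = pcr_extend(pcr, stage)
        print(stage.decode(), "->", pcr.hex())

Any change in the measured components yields a different final PCR value, which is how tampering in the start-up process is detected.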



Intel, the Intel logo, Xeon, and Xeon Inside are trademarks or registered trademarks of Intel Corporation in the U.S. and other countries.

System Memory

Overview

The server system supports standard DDR4 RDIMMs and LRDIMMs, as well as Intel® Optane™ persistent memory 200 series modules. It can be populated with a combination of DDR4 DRAM DIMMs and Intel® Optane™ persistent memory 200 series modules.

Intel® Optane™ PMem (persistent memory) is an innovative technology that delivers a unique combination of affordable large memory capacity and data persistence (non-volatility). It represents a new class of memory and storage technology architected specifically for data center usage. Intel® Optane™ PMem 200 series enables higher density (capacity per DIMM) DDR4-compatible memory modules with near-DRAM performance and advanced features not found in standard SDRAM. The module supports the following operating modes:

  • Memory mode (MM)
  • App Direct (AD) mode

Intel® Optane™ Persistent Memory 200 Series Module – Memory Mode (MM)
In Memory mode, the standard DDR4 DRAM DIMM acts as a cache for the most frequently accessed data, while Intel® Optane™ persistent memory 200 series modules provide large memory capacity by acting as direct load/store memory. In this mode, applications and the operating system are explicitly aware that the Intel® Optane™ persistent memory 200 series is the only type of direct load/store memory in the system. Cache management operations are handled by the integrated memory controller on the Intel® Xeon® Scalable processors. When data is requested from memory, the memory controller first checks the DRAM cache. If the data is present, the response latency is identical to DRAM. If the data is not in the DRAM cache, it is read from the Intel® Optane™ persistent memory 200 series modules with slightly longer latency. Applications with consistent data retrieval patterns that the memory controller can predict will have a higher cache hit rate. Data is volatile in Memory mode; it will not be saved in the event of power loss. Persistence is enabled in App Direct mode.
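
Because the DRAM DIMMs act as a cache in front of the persistent memory modules, effective read latency in Memory mode is a weighted average driven by the DRAM cache hit rate. A back-of-the-envelope Python sketch; the latency numbers are illustrative assumptions, not measured values for this system:

    # Illustrative only: average load latency in Memory mode as a function of the
    # DRAM cache hit rate. t_dram and t_pmem are assumed example latencies (ns).
    def avg_latency_ns(hit_rate, t_dram=80.0, t_pmem=300.0):
        """Hits are served at DRAM latency, misses at PMem latency."""
        return hit_rate * t_dram + (1.0 - hit_rate) * t_pmem

    for hit_rate in (0.99, 0.95, 0.80):
        print(f"hit rate {hit_rate:.0%}: ~{avg_latency_ns(hit_rate):.0f} ns average")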

Intel® Optane™ Persistent Memory 200 Series Module – App Direct (AD) Mode
In App Direct mode, applications and the operating system are explicitly aware that there are two types of direct load/store memory in the platform. They can direct which type of data read or write is suitable for DRAM DIMM or Intel® Optane™ persistent memory 200 series modules. Operations that require the lowest latency and do not need permanent data storage can be executed on DRAM DIMM, such as database "scratch pads". Data that needs to be made persistent or structures that are very large can be routed to the Intel® Optane™ persistent memory. The App Direct mode must be used to make data persistent in memory. This mode requires an operating system or virtualization environment enabled with a persistent memory-aware file system.

App Direct mode requires both driver and explicit software support. To ensure operating system compatibility, visit https://www.intel.com/content/www/us/en/architecture-and-technology/optanememory.html
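
As a rough sketch of what a persistent-memory-aware stack looks like from an application's perspective, data on an App Direct namespace is typically exposed through a DAX-mounted file system and accessed with ordinary loads and stores via mmap. The Python example below assumes a hypothetical mount point /mnt/pmem that is already DAX-mounted; production code would normally use a library such as PMDK rather than raw mmap:

    # Minimal sketch: map a file on a DAX-mounted file system and store data into it.
    # /mnt/pmem is an assumed example mount point, not something this system preconfigures.
    import mmap
    import os

    path = "/mnt/pmem/example.dat"
    size = 4096

    fd = os.open(path, os.O_CREAT | os.O_RDWR, 0o600)
    os.ftruncate(fd, size)                    # size the backing file
    with mmap.mmap(fd, size) as buf:          # load/store region backed by PMem
        buf[0:11] = b"hello, pmem"            # ordinary stores write the data
        buf.flush()                           # flush so the data is persistent
    os.close(fd)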


Intel® Optane™ Persistent Memory 200 Series Module Rules

All operating modes:

  • Only Intel® Optane™ persistent memory 200 series modules are supported
  • Intel® Optane™ persistent memory 200 series modules of different capacities cannot be mixed within or across processor sockets
  • Memory slots supported by the integrated memory controller 0 (IMC 0) (memory channels A and B) of a given processor must be populated before memory slots on other IMCs
  • For multiple DIMMs per channel:
    • Only one Intel® Optane™ persistent memory 200 series module is supported per memory channel
    • Intel® Optane™ persistent memory 200 series modules are supported in either DIMM slot when mixed with LRDIMM or 3DS-LRDIMM
    • Intel® Optane™ persistent memory 200 series modules are only supported in DIMM slot 2 (black slot) when mixed with RDIMM or 3DS-RDIMM
  • SRx8 SDRAM DIMMs are not supported in the same channel as an Intel® Optane™ persistent memory 200 series module in any operating mode
  • Ensure the same DDR4 DIMM type and capacity is used for each DDR4 + Intel® Optane™ persistent memory 200 series module combination

Memory mode:

  • Populate each memory channel with at least one DDR4 DIMM to maximize bandwidth
  • Intel® Optane™ persistent memory 200 series modules must be populated symmetrically for each installed processor (corresponding slots populated on either side of the processor)

App Direct mode:

  • Minimum of one DDR4 DIMM per IMC (IMC 0, IMC 1, IMC 2 and IMC 3) for each installed processor
  • Minimum of one Intel® Optane™ persistent memory 200 series module for the board
  • Intel® Optane™ persistent memory 200 series modules must be populated symmetrically for each installed processor (corresponding slots populated on either side of the processor)

Notes on Intel® Optane™ persistent memory 200 series module population:

  • For Memory mode (MM), the recommended ratio of standard DRAM capacity to Intel® Optane™ persistent memory 200 series module capacity is between 1 GB:4 GB and 1 GB:16 GB (see the sketch after these notes)
  • For each individual population, rearrangements between channels are allowed as long as the resulting population is consistent with defined memory population rules
  • For each individual population, the same DDR4 DIMM must be used in all slots, as specified by the defined memory population rules
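
A quick way to sanity-check the Memory mode ratio guidance above; the capacities below are example per-socket values, not a recommended configuration:

    # Illustrative check of the recommended DRAM : PMem capacity ratio for Memory
    # mode (between 1:4 and 1:16). Capacities are example values in GB per socket.
    def memory_mode_ratio_ok(dram_gb, pmem_gb):
        ratio = pmem_gb / dram_gb
        return 4.0 <= ratio <= 16.0

    # 8x 32 GB RDIMMs + 8x 128 GB PMem modules -> 1:4, within range
    print(memory_mode_ratio_ok(dram_gb=8 * 32, pmem_gb=8 * 128))   # True
    # 8x 64 GB RDIMMs + 4x 128 GB PMem modules -> 1:1, below the 1:4 minimum
    print(memory_mode_ratio_ok(dram_gb=8 * 64, pmem_gb=4 * 128))   # False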

Distributed Asynchronous Object Storage (DAOS) - Revolutionizing High-Performance Storage with Intel® Optane™ Technology

Intel has been building an entirely open source software ecosystem for data-centric computing, fully optimized for Intel® architecture and non-volatile memory (NVM) technologies, including Intel® Optane™ persistent memory and Intel® Optane™ DC SSDs. Distributed Asynchronous Object Storage (DAOS) is the foundation of the Intel exascale storage stack. DAOS is an open source software-defined scale-out object store that provides high bandwidth, low latency, and high I/O operations per second (IOPS) storage containers to HPC applications. It enables next-generation data-centric workflows that combine simulation, data analytics, and AI.

Download Solution Brief

Server Management

Overview

The server uses the baseboard management controller (BMC) features of an ASpeed* AST2500 server management processor. The BMC supports multiple system management features including intra-system sensor monitoring, fan speed control, system power management, and system error handling and messaging. It also provides remote platform management capabilities including remote access, monitoring, logging, and alerting features.

In support of system management, the system includes a dedicated management port and support for two system management tiers and optional system management software.

  • Standard management features (Included)
  • Advanced management features (Optional)
  • Intel® Data Center Manager (DCM) support (Optional)

Remote Management Port
The server board includes a dedicated 1 Gb/s RJ45 management port used to access embedded system management features remotely.

Standard System Management Features
The following system management features are supported by default (a minimal Redfish query sketch follows this list).

  • Virtual KVM over HTML5
  • Integrated BMC Web Console
  • Redfish
  • IPMI 2.0
    • Node Manager
  • Out-of-band BIOS/BMC Update and Configuration
  • System Inventory
  • Autonomous Debug Log
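
Because the BMC is Redfish compliant, basic inventory and health can be pulled out of band over HTTPS from the dedicated management port. A minimal sketch using Python's requests library; the BMC address and credentials are placeholders, and the exact resources available vary by BMC firmware:

    # Minimal Redfish query against the BMC; address and credentials are placeholders.
    # verify=False is only acceptable for lab use with self-signed BMC certificates.
    import requests

    BMC = "https://192.168.1.100"          # assumed management-port address
    AUTH = ("admin", "password")           # placeholder credentials

    systems = requests.get(f"{BMC}/redfish/v1/Systems", auth=AUTH, verify=False).json()
    for member in systems.get("Members", []):
        system = requests.get(f"{BMC}{member['@odata.id']}", auth=AUTH, verify=False).json()
        print(system.get("Model"),
              system.get("PowerState"),
              system.get("Status", {}).get("Health"))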

Advanced Management Features
Advanced manageability features are supported over all NIC ports enabled for server manageability. This includes baseboard integrated BMC-shared NICs, which share network bandwidth with the host system, as well as the LAN channel provided by the onboard Intel® Dedicated Server Management NIC.

  • Software Key to enable features
  • Included single system license for Intel® Data Center Manager (Intel® DCM)
    • Intel® Data Center Manager (Intel® DCM) is a software solution that collects and analyzes the real-time health, power, and thermals of a variety of devices in data centers, helping you improve efficiency and uptime.
  • Virtual Media Image Redirection (HTML5 and Java*)
  • Virtual Media over network share and local folder
  • Active Directory support
  • Full system firmware update including drives, memory, and RAID (Tentative Availability Q4 2021)
  • Storage and network device monitoring (Tentative Availability Q4 2021)
  • Out-of-band hardware RAID Management for latest Intel® RAID cards (Tentative Availability Q4 2021)

More Information
Download Integrated BMC Web Console User Guide


Intel® Data Center Manager (Intel® DCM)

Intel® DCM is a solution for out-of-band monitoring and managing the health, power, and thermals of servers and a variety of other types of devices.

What can you do with Intel® DCM?

  • Automate health monitoring
  • Improve system manageability
  • Simplify capacity planning
  • Identify underutilized servers
  • Measure energy use by device
  • Pinpoint power/thermal issues
  • Create power-aware job scheduling tasks
  • Increase rack densities
  • Set power policies and caps
  • Improve data center thermal profile
  • Optimize application power consumption
  • Avoid expensive PDUs and smart power strips

More Information
Download Intel® Data Center Manager Product Brief
Download Intel® Data Center Manager Console User Guide



Intel® DCM Use Cases

Rack Provisioning

Find new ways to increase rack density.

Intelligent Power

Collect real-time data without deploying costly redundant infrastructure such as intelligent power distribution units.

Disaster Avoidance

With real-time monitoring and management, it's possible to reduce power failures and other disasters.

Equipment Scheduling

Increase your ability to meet workload demands with equipment scheduling, and make your data center do more.

Build Real-Time Thermal Maps

Build real-time thermal maps to avoid the guesswork that leads to undercooling or overcooling.

Ghost Servers

Identify ghost servers, and get data center power usage under control.

Intel® Power Thermal Aware Solution

Identify energy efficiency issues in the data center to avoid service delays and gain savings.

Granular Rack-Level Thermal Monitoring

Enable the Intel® DCM to recognize an out-of-range temperature reading and allow the user to take immediate action.

Granular Server-Level Thermal Monitoring

Get greater granular server-level thermal visibility, so when temperatures rise, it registers with the Intel® DCM.

Predictive Detection of Cooling Anomalies

Predict cooling issues before they happen with a patented algorithm that detects anomalies in time to be resolved before a thermal issue occurs.

Server Health Management

Enable server health management with real-time sub-component monitoring, error detection, proactive health management, and server firmware synchronization.

Updating Firmware of Intel® Data Center Blocks

Monitor and update the firmware of data center systems with Intel® DCM.

Intel® Memory Failure Prediction

Through multi-dimensional models and algorithms, DIMM errors are mined at the micro level to assign health scores and identify future failures in real time.

Technical Specifications

Dimensions (HxWxL) • 2U Rackmount
• 3.42" x 17.56" x 30.3"
• 87mm x 446mm x 770mm
CPU • Dual Socket-P4 LGA4189
• Support for 3rd Gen Intel® Xeon® Scalable Processors
• Max TDP up to 270 W
• UPI links: up to three at 11.2 GT/s (Platinum and Gold families) or up to two at 10.4 GT/s (Silver family)

Note: Supported 3rd Gen Intel® Xeon® Scalable processor SKUs must not end in (H), (L), (U), or (Q). All other processor SKUs are supported.
Chipset • Intel® C621A
Memory • 32x DIMM slots
- 16 DIMM slots per processor, eight memory channels per processor
- Two DIMMs per channel
• Supports Registered DDR4 (RDIMM), 3DS-RDIMM, Load Reduced DDR4 (LRDIMM), 3DS-LRDIMM
• Intel® Optane™ persistent memory 200 series
• Memory capacity
- Up to 6 TB per processor (processor SKU dependent)
• Memory data transfer rates
- Up to 3200 MT/s at one or two DIMMs per channel (processor SKU dependent)
• DDR4 standard voltage of 1.2V
Riser Support Concurrent support for up to three riser cards with support for up to eight PCIe* add-in cards. In the descriptions below, FH = Full Height, FL = Full Length, HL = Half Length, LP = Low Profile

Riser Slot #1:
• Riser Slot #1 supports x32 PCIe* lanes, routed from CPU 0
• PCIe 4.0 support for up to 64 GB/s

Riser Slot #1 supports the following Intel Riser Card option:
• Three PCIe* slot riser card supporting (one) - FH/FL single-width slot (x16 electrical, x16 mechanical) + (one) - FH/FL single-width slot (x8 electrical, x16 mechanical) + (one) - FH/HL single-width slot (x8 electrical, x8 mechanical) iPC – CYP2URISER1STD
• Two PCIe* slot riser card supporting (one) - FH/FL double-width slot (x16 electrical, x16 mechanical) + (one) - FH/HL single-width slot (x16 electrical, x16 mechanical) iPC – CYP2URISER1DBL
• NVMe* riser card supporting (one) – HL or FL single-width slot (x16 electrical, x16 mechanical) + (two) - x8 PCIe* NVMe* SlimSAS* connectors, each with a re-timer. iPC – CYP2URISER1RTM

Riser Slot #2:
• Riser Slot #2 supports x32 PCIe* lanes routed from CPU 1
• PCIe 4.0 support for up to 64 GB/s

Riser Slot #2 supports the following Intel Riser Card option:
• Three PCIe* slot riser card supporting (one) - FH/FL single-width slot (x16 electrical, x16 mechanical) + (one) - FH/FL single-width slot (x8 electrical, x16 mechanical) + (one) FH/HL single-width slot (x8 electrical, x8 mechanical) iPC – CYP2URISER2STD
• Two PCIe* slot riser card supporting (one) - FH/FL double-width slot (x16 electrical, x16 mechanical) + (one) - FH/HL single-width slot (x16 electrical, x16 mechanical) iPC – CYP2URISER2DBL

Riser Slot #3:
• Riser Slot #3 supports x16 PCIe* lanes routed from CPU 1
• PCIe 4.0 support for up to 32 GB/s

Riser Slot #3 supports the following Intel Riser Card option:
• Two PCIe* slot riser card supporting (two) LP/HL single-width slots (x16 mechanical, x8 electrical) iPC – CYP2URISER3STD
• NVMe riser card supporting (two) – PCIe NVMe SlimSAS connectors iPC – CYPRISER3RTM
Open Compute Project (OCP) Adapter Support Onboard x16 PCIe 4.0 OCP 3.0 Mezzanine connector (Small Form-Factor) supports the following Intel accessory options:
• Dual port, RJ45, 10/1 GbE - iPC- X710T2LOCPV3
• Quad port, SFP+ DA, 4x 10 GbE - iPC- X710DA4OCPV3
• Dual Port, QSFP28 100/50/25/10 GbE - iPC- E810CQDA2OCPV3
• Dual Port, SFP28 25/10 GbE - iPC-E810XXVDA2OCPV3
PCIe NVMe Support • Support for up to 10 PCIe NVMe Interconnects
- Eight server board SlimSAS connectors, four per processor
- Two M.2 NVMe/SATA connectors
• Additional NVMe support through select Riser Card options (See Riser Card Support)
SATA • 10 x SATA III ports (6 Gb/s, 3 Gb/s and 1.5 Gb/s transfer rates supported)
- Two M.2 connectors – SATA / PCIe
- Two 4-port Mini-SAS HD (SFF-8643) connectors
USB • Three USB 3.0 connectors on the back panel
• One USB 3.0 and one USB 2.0 connector on the front panel
• One USB 2.0 internal Type-A connector
Serial • One external RJ-45 Serial Port A connector on the back panel
• One internal DH-10 Serial Port B header for optional front or rear serial port support. The port follows the DTK pinout specifications
Video • Integrated 2D video controller
• 128MB of DDR4 video memory
• One VGA DB-15 external connector in the back
Server Management • Integrated Baseboard Management Controller
• Intelligent Platform Management Interface (IPMI) 2.0 Compliant
• Redfish* Compliant
• Support for Intel® Data Center Manager (DCM)
• Support for Intel® Server Debug and Provisioning Tool (SDPTool)
• Dedicated RJ45 1 GbE Management Port
• Light Guided Diagnostics
Security Support • Intel® Platform Firmware Resilience (Intel® PFR) technology with an I2C interface
• Intel® Software Guard Extensions (Intel® SGX)
• Intel® CBnT – Converged Intel® Boot Guard and Trusted Execution Technology (Intel® TXT)
• Intel® Total Memory Encryption (Intel® TME)
• Trusted Platform Module 2.0 – iPC J33567-151 (accessory option)
Storage Bay • 16 x 2.5" SAS/SATA/NVMe* hot swap drive bays
Power Supply The server system can have up to two power supply modules installed, supporting the following power configurations: 1+0, 1+1 redundant power, and 2+0 combined power

• 1 x 1300W / 1600W / 2100W AC power supply
• 80 Plus Titanium
System Fans • Six managed 60 mm hot swap capable system fans
• Integrated fans included with each installed power supply module
BIOS • Unified Extensible Firmware Interface (UEFI)-based BIOS (legacy boot not supported)


Colfax CX2265i-NVMe4-S16-X8 2U Rackmount Server, Cost As Configured: $5,009.38

Component    Qty.
Base Platform 1
Front Bezel 1
Management 1
Power Supply 1    1
Power Supply 2    1
Rackmount Kit 1
Cable Management 1
Primary CPU 1
Secondary CPU 1
CPU 1 - Socket 1 of 16    1
CPU 1 - Socket 2 of 16    1
CPU 1 - Socket 3 of 16    1
CPU 1 - Socket 4 of 16    1
CPU 1 - Socket 5 of 16    1
CPU 1 - Socket 6 of 16    1
CPU 1 - Socket 7 of 16    1
CPU 1 - Socket 8 of 16    1
CPU 1 - Socket 9 of 16    1
CPU 1 - Socket 10 of 16    1
CPU 1 - Socket 11 of 16    1
CPU 1 - Socket 12 of 16    1
CPU 1 - Socket 13 of 16    1
CPU 1 - Socket 14 of 16    1
CPU 1 - Socket 15 of 16    1
CPU 1 - Socket 16 of 16    1
CPU 2 - Socket 1 of 16    1
CPU 2 - Socket 2 of 16    1
CPU 2 - Socket 3 of 16    1
CPU 2 - Socket 4 of 16    1
CPU 2 - Socket 5 of 16    1
CPU 2 - Socket 6 of 16    1
CPU 2 - Socket 7 of 16    1
CPU 2 - Socket 8 of 16    1
CPU 2 - Socket 9 of 16    1
CPU 2 - Socket 10 of 16    1
CPU 2 - Socket 11 of 16    1
CPU 2 - Socket 12 of 16    1
CPU 2 - Socket 13 of 16    1
CPU 2 - Socket 14 of 16    1
CPU 2 - Socket 15 of 16    1
CPU 2 - Socket 16 of 16    1
M.2 Drive 1    1
M.2 Drive 2    1
NVMe Drive 1    1
NVMe Drive 2    1
NVMe Drive 3    1
NVMe Drive 4    1
NVMe Drive 5    1
NVMe Drive 6    1
NVMe Drive 7    1
NVMe Drive 8    1
NVMe Drive 9    1
NVMe Drive 10    1
NVMe Drive 11    1
NVMe Drive 12    1
NVMe Drive 13    1
NVMe Drive 14    1
NVMe Drive 15    1
NVMe Drive 16    1
NVMe Cable Kit (1-16) 1
Intel VROC 1
OCP 3.0 Networking 1
Riser Card 1 1
Riser Card 2 1
Riser Card 3 1
InfiniBand HBA 1
Ethernet HBA 1
TPM Module 1
Operating System SW 1

Colfax CX2265i-NVMe4-OPT-S16-X8 2U Rackmount Server, Cost As Configured: $5,009.38

Component    Qty.
Base Platform 1
Front Bezel 1
Management 1
Power Supply 1    1
Power Supply 2    1
Rackmount Kit 1
Cable Management 1
Primary CPU 1
Secondary CPU 1
CPU 1 IMC0 CHA Slot 1    1
CPU 1 IMC0 CHB Slot 1    1
CPU 1 IMC0 CHC Slot 1    1
CPU 1 IMC0 CHD Slot 1    1
CPU 1 IMC0 CHE Slot 1    1
CPU 1 IMC0 CHF Slot 1    1
CPU 1 IMC0 CHG Slot 1    1
CPU 1 IMC0 CHH Slot 1    1
CPU 1 IMC0 CHA Slot 2    1
CPU 1 IMC0 CHB Slot 2    1
CPU 1 IMC0 CHC Slot 2    1
CPU 1 IMC0 CHD Slot 2    1
CPU 1 IMC0 CHE Slot 2    1
CPU 1 IMC0 CHF Slot 2    1
CPU 1 IMC0 CHG Slot 2    1
CPU 1 IMC0 CHH Slot 2    1
CPU 2 IMC0 CHA Slot 1    1
CPU 2 IMC0 CHB Slot 1    1
CPU 2 IMC0 CHC Slot 1    1
CPU 2 IMC0 CHD Slot 1    1
CPU 2 IMC0 CHE Slot 1    1
CPU 2 IMC0 CHF Slot 1    1
CPU 2 IMC0 CHG Slot 1    1
CPU 2 IMC0 CHH Slot 1    1
CPU 2 IMC0 CHA Slot 2    1
CPU 2 IMC0 CHB Slot 2    1
CPU 2 IMC0 CHC Slot 2    1
CPU 2 IMC0 CHD Slot 2    1
CPU 2 IMC0 CHE Slot 2    1
CPU 2 IMC0 CHF Slot 2    1
CPU 2 IMC0 CHG Slot 2    1
CPU 2 IMC0 CHH Slot 2    1
M.2 Drive 1    1
M.2 Drive 2    1
NVMe Drive 1    1
NVMe Drive 2    1
NVMe Drive 3    1
NVMe Drive 4    1
NVMe Drive 5    1
NVMe Drive 6    1
NVMe Drive 7    1
NVMe Drive 8    1
NVMe Drive 9    1
NVMe Drive 10    1
NVMe Drive 11    1
NVMe Drive 12    1
NVMe Drive 13    1
NVMe Drive 14    1
NVMe Drive 15    1
NVMe Drive 16    1
NVMe Cable Kit (1-16) 1
Intel VROC 1
OCP 3.0 Networking 1
Riser Card 1 1
Riser Card 2 1
Riser Card 3 1
InfiniBand HBA 1
Ethernet HBA 1
TPM Module 1
Operating System SW 1