Adapters
InfiniBand / VPI Cards
ConnectX®-6 Single/Dual-Port Adapter supporting 200Gb/s with VPI
ConnectX-6 Virtual Protocol Interconnect® provides two ports of 200Gb/s InfiniBand and Ethernet connectivity, sub-600 ns latency, and 200 million messages per second, delivering the highest-performance and most flexible solution for the most demanding data center applications.
- HDR 200Gb/s InfiniBand or 200Gb/s Ethernet per port and all lower speeds (see the port-query sketch after this list)
- Up to 200M messages/second
- Tag Matching and Rendezvous Offloads
- Adaptive Routing on Reliable Transport
- Burst Buffer Offloads for Background Checkpointing
- NVMe over Fabric (NVMf) Target Offloads
- Back-End Switch Elimination by Host Chaining
- Enhanced vSwitch / vRouter Offloads
- Flexible Pipeline
- RoCE for Overlay Networks
- PCIe Gen3 and Gen4 Support
- Erasure Coding offload
- T10-DIF Signature Handover
- IBM CAPI v2 support
- Mellanox PeerDirect™ communication acceleration
- Hardware offloads for NVGRE and VXLAN encapsulated traffic
- End-to-end QoS and congestion control
- Hardware-based I/O virtualization
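The port speeds listed above are discoverable at run time through the standard libibverbs (rdma-core) API. The sketch below is a minimal, illustrative example, not vendor-supplied code; it assumes the library and the adapter driver are installed, and simply enumerates RDMA devices and prints each device's first-port state and encoded speed/width.

    /* Minimal RDMA device discovery sketch (illustrative only).
     * Assumed build command: gcc query_hca.c -o query_hca -libverbs */
    #include <stdio.h>
    #include <infiniband/verbs.h>

    int main(void)
    {
        int num = 0;
        struct ibv_device **list = ibv_get_device_list(&num);
        if (!list || num == 0) {
            fprintf(stderr, "no RDMA devices found\n");
            return 1;
        }

        for (int i = 0; i < num; i++) {
            struct ibv_context *ctx = ibv_open_device(list[i]);
            if (!ctx)
                continue;

            struct ibv_device_attr dev_attr;
            struct ibv_port_attr port_attr;
            if (ibv_query_device(ctx, &dev_attr) == 0 &&
                ibv_query_port(ctx, 1, &port_attr) == 0) {
                /* active_speed and active_width are encoded values; see ibv_query_port(3). */
                printf("%s: %u port(s), port 1 state %d, speed code %u, width code %u\n",
                       ibv_get_device_name(list[i]),
                       (unsigned)dev_attr.phys_port_cnt,
                       (int)port_attr.state,
                       (unsigned)port_attr.active_speed,
                       (unsigned)port_attr.active_width);
            }
            ibv_close_device(ctx);
        }
        ibv_free_device_list(list);
        return 0;
    }

For an HDR link, the decoded values correspond to 50Gb/s per lane over a 4x width, i.e. 200Gb/s per port.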
Item | Description
ConnectX-6 VPI 200Gb/s Adapter Card | Supporting up to HDR/200GbE with Virtual Protocol Interconnect and Advanced Offloading Technology
ConnectX®-5 Single/Dual-Port Adapter supporting 100Gb/s with VPI
ConnectX-5 with Virtual Protocol Interconnect® supports two ports of 100Gb/s InfiniBand and Ethernet connectivity, sub-600 ns latency, and up to 200 million messages per second, plus an embedded PCIe switch and NVMe over Fabric target offloads, providing the highest-performance and most flexible solution for the most demanding applications and markets: Machine Learning, Data Analytics, and more.
- EDR 100Gb/s InfiniBand or 100Gb/s Ethernet per port and all lower speeds
- Up to 200M messages/second
- Tag Matching and Rendezvous Offloads
- Adaptive Routing on Reliable Transport
- Burst Buffer Offloads for Background Checkpointing
- NVMe over Fabric (NVMf) Target Offloads
- Back-End Switch Elimination by Host Chaining
- Embedded PCIe Switch
- Enhanced vSwitch / vRouter Offloads
- Flexible Pipeline
- RoCE for Overlay Networks
- PCIe Gen 4 Support
- Erasure Coding offload
- T10-DIF Signature Handover
- IBM CAPI v2 support
- Mellanox PeerDirect™ communication acceleration
- Hardware offloads for NVGRE and VXLAN encapsulated traffic
- End-to-end QoS and congestion control
- Hardware-based I/O virtualization (see the SR-IOV sketch after this list)
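Hardware-based I/O virtualization (SR-IOV) is managed on Linux through the standard PCI sysfs interface, independent of the adapter model. The sketch below is a hypothetical example, not vendor code: the interface name "eth0" and the requested VF count are placeholders, and writing sriov_numvfs requires root privileges. It reads how many virtual functions the device supports and asks the driver to create a few, which can then be assigned to virtual machines or containers.

    /* SR-IOV virtual function setup sketch (hypothetical; run as root).
     * "eth0" is a placeholder for the adapter's network interface name. */
    #include <stdio.h>

    #define IFACE "eth0"

    int main(void)
    {
        char path[256];
        FILE *f;
        int total = 0;

        /* sriov_totalvfs reports how many virtual functions the device can expose. */
        snprintf(path, sizeof(path), "/sys/class/net/%s/device/sriov_totalvfs", IFACE);
        f = fopen(path, "r");
        if (!f) { perror("sriov_totalvfs"); return 1; }
        if (fscanf(f, "%d", &total) != 1) total = 0;
        fclose(f);
        printf("%s supports up to %d virtual functions\n", IFACE, total);

        /* Writing N to sriov_numvfs asks the driver to instantiate N VFs. */
        int want = total < 4 ? total : 4; /* arbitrary example count */
        snprintf(path, sizeof(path), "/sys/class/net/%s/device/sriov_numvfs", IFACE);
        f = fopen(path, "w");
        if (!f) { perror("sriov_numvfs"); return 1; }
        fprintf(f, "%d\n", want);
        fclose(f);
        printf("requested %d virtual functions on %s\n", want, IFACE);
        return 0;
    }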
Part No. | Description
MCX555A-ECAT | EDR IB (100Gb/s) and 100GbE, single-port QSFP28, PCIe3.0 x16
MCX556A-ECAT | EDR IB (100Gb/s) and 100GbE, dual-port QSFP28, PCIe3.0 x16
MCX556A-EDAT | EDR IB (100Gb/s) and 100GbE, dual-port QSFP28, PCIe4.0 x16
ConnectX®-4 Single/Dual-Port Adapter supporting 100Gb/s with VPI
ConnectX-4 adapter cards with Virtual Protocol Interconnect (VPI), supporting EDR 100Gb/s InfiniBand and 100Gb/s Ethernet connectivity, provide the highest-performance and most flexible solution for High-Performance Computing, Web 2.0, Cloud, data analytics, database, and storage platforms.
- EDR 100Gb/s InfiniBand or 100Gb/s Ethernet per port
- 1/10/20/25/40/50/56/100Gb/s speeds
- 150M messages/second
- Single and dual-port options available
- Erasure Coding offload
- T10-DIF Signature Handover
- Virtual Protocol Interconnect (VPI)
- Power8 CAPI support
- CPU offloading of transport operations (see the verbs setup sketch after this list)
- Application offloading
- Mellanox PeerDirect™ communication acceleration
- Hardware offloads for NVGRE and VXLAN encapsulated traffic
- End-to-end QoS and congestion control
- Hardware-based I/O virtualization
- Ethernet encapsulation (EoIB)
- RoHS-R6
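CPU offloading of transport operations means the reliable transport (segmentation, acknowledgements, retransmission) runs on the adapter; an application only sets up verbs resources and posts work requests. The sketch below is a minimal, illustrative libibverbs example under that assumption; connection establishment and actual data transfer are omitted. It allocates a protection domain, registers a memory region for zero-copy RDMA, and creates a completion queue and a reliable-connected queue pair.

    /* Verbs resource setup sketch (illustrative; no connection or traffic).
     * Assumed build command: gcc rc_setup.c -o rc_setup -libverbs */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <infiniband/verbs.h>

    int main(void)
    {
        int num = 0;
        struct ibv_device **list = ibv_get_device_list(&num);
        if (!list || num == 0) return 1;

        struct ibv_context *ctx = ibv_open_device(list[0]);
        if (!ctx) return 1;

        struct ibv_pd *pd = ibv_alloc_pd(ctx);
        struct ibv_cq *cq = ibv_create_cq(ctx, 256, NULL, NULL, 0);
        if (!pd || !cq) { fprintf(stderr, "resource allocation failed\n"); return 1; }

        /* Register a buffer the adapter may read and write directly (zero-copy RDMA). */
        size_t len = 1 << 20;
        void *buf = malloc(len);
        struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                       IBV_ACCESS_LOCAL_WRITE |
                                       IBV_ACCESS_REMOTE_READ |
                                       IBV_ACCESS_REMOTE_WRITE);

        /* Reliable-connected QP: acknowledgements and retransmission run on the adapter. */
        struct ibv_qp_init_attr qpa;
        memset(&qpa, 0, sizeof(qpa));
        qpa.send_cq = cq;
        qpa.recv_cq = cq;
        qpa.qp_type = IBV_QPT_RC;
        qpa.cap.max_send_wr = 128;
        qpa.cap.max_recv_wr = 128;
        qpa.cap.max_send_sge = 1;
        qpa.cap.max_recv_sge = 1;
        struct ibv_qp *qp = ibv_create_qp(pd, &qpa);

        if (qp && mr)
            printf("QP %u created; lkey=0x%x rkey=0x%x\n", qp->qp_num, mr->lkey, mr->rkey);

        /* Teardown; connecting the QP and posting work requests are left out. */
        if (qp) ibv_destroy_qp(qp);
        if (mr) ibv_dereg_mr(mr);
        free(buf);
        ibv_destroy_cq(cq);
        ibv_dealloc_pd(pd);
        ibv_close_device(ctx);
        ibv_free_device_list(list);
        return 0;
    }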
Part No. | Description
MCX455A-ECAT | EDR IB (100Gb/s) and 100GbE, single-port QSFP28, PCIe3.0 x16
MCX456A-ECAT | EDR IB (100Gb/s) and 100GbE, dual-port QSFP28, PCIe3.0 x16
MCX455A-FCAT | FDR IB (56Gb/s) and 40/56GbE, single-port QSFP28, PCIe3.0 x16
MCX456A-FCAT | FDR IB (56Gb/s) and 40/56GbE, dual-port QSFP28, PCIe3.0 x16
MCX453A-FCAT | FDR IB (56Gb/s) and 40/56GbE, single-port QSFP28, PCIe3.0 x8
MCX454A-FCAT | FDR IB (56Gb/s) and 40/56GbE, dual-port QSFP28, PCIe3.0 x8
MCX456M-ECAT | EDR IB (100Gb/s) and 100GbE, dual-port QSFP28, dual PCIe3.0 x8, with Multi-Host supporting dual-socket servers
ConnectX®-3 Pro - Single/Dual-Port Adapter with Virtual Protocol Interconnect
ConnectX-3 Pro adapter cards with Virtual Protocol Interconnect (VPI), supporting InfiniBand and Ethernet connectivity with hardware offload engines for overlay networks ("tunneling"), provide the highest-performing and most flexible interconnect solution for PCI Express Gen3 servers used in public and private clouds, enterprise data centers, and high-performance computing.
- Virtual Protocol Interconnect
- 1us MPI ping latency
- Up to 56Gb/s InfiniBand or 40 Gigabit Ethernet per port
- Single- and Dual-Port options available
- PCI Express 3.0 (up to 8GT/s)
- CPU offload of transport operations
- Application offload
- GPU communication acceleration
- Precision Clock Synchronization (see the timestamping sketch after this list)
- HW Offloads for NVGRE and VXLAN encapsulated traffic
- End-to-end QoS and congestion control
- Hardware-based I/O virtualization
- Ethernet encapsulation (EoIB)
- RoHS-R6
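Precision Clock Synchronization refers to IEEE 1588 (PTP) timestamping done in NIC hardware. The sketch below is a hypothetical Linux example: "eth0" is a placeholder interface, root privileges are required, and the exact receive filters supported depend on the driver. It enables hardware timestamping of PTP event packets on the device and then requests raw hardware timestamps on a UDP socket.

    /* Hardware PTP timestamping sketch (hypothetical; "eth0" is a placeholder).
     * Requires root and a driver/NIC with IEEE 1588 hardware timestamp support. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <sys/ioctl.h>
    #include <net/if.h>
    #include <linux/net_tstamp.h>
    #include <linux/sockios.h>

    #define IFACE "eth0"

    int main(void)
    {
        int sock = socket(AF_INET, SOCK_DGRAM, 0);
        if (sock < 0) { perror("socket"); return 1; }

        /* Ask the driver to timestamp PTP event packets in NIC hardware. */
        struct hwtstamp_config cfg;
        memset(&cfg, 0, sizeof(cfg));
        cfg.tx_type = HWTSTAMP_TX_ON;
        cfg.rx_filter = HWTSTAMP_FILTER_PTP_V2_EVENT;

        struct ifreq ifr;
        memset(&ifr, 0, sizeof(ifr));
        strncpy(ifr.ifr_name, IFACE, IFNAMSIZ - 1);
        ifr.ifr_data = (char *)&cfg;
        if (ioctl(sock, SIOCSHWTSTAMP, &ifr) < 0) {
            perror("SIOCSHWTSTAMP");
            close(sock);
            return 1;
        }

        /* Ask the kernel to deliver raw hardware timestamps on this socket. */
        int flags = SOF_TIMESTAMPING_TX_HARDWARE |
                    SOF_TIMESTAMPING_RX_HARDWARE |
                    SOF_TIMESTAMPING_RAW_HARDWARE;
        if (setsockopt(sock, SOL_SOCKET, SO_TIMESTAMPING, &flags, sizeof(flags)) < 0) {
            perror("SO_TIMESTAMPING");
            close(sock);
            return 1;
        }
        printf("hardware timestamping enabled on %s\n", IFACE);
        close(sock);
        return 0;
    }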
Part No. | Description | Dimensions
MCX353A-FCCT | Single VPI FDR/40/56GbE | 14.2cm x 5.3cm
MCX354A-FCCT | Dual VPI FDR/40/56GbE | 14.2cm x 6.9cm
ConnectX®-3 Single/Dual-Port Adapter with VPI
ConnectX-3 adapter cards with Virtual Protocol Interconnect (VPI), supporting InfiniBand and Ethernet connectivity, provide the highest-performing and most flexible interconnect solution for PCI Express Gen3 servers used in enterprise data centers, high-performance computing, and embedded environments. Clustered databases, parallel processing, transactional services, and high-performance embedded I/O applications will achieve significant performance improvements, resulting in reduced completion time and lower cost per operation. ConnectX-3 with VPI also simplifies system development by serving multiple fabrics with one hardware design.
- Virtual Protocol Interconnect
- 1us MPI ping latency
- Up to 56Gb/s InfiniBand or 40/56 Gigabit Ethernet per port
- Single- and Dual-Port options available
- PCI Express 3.0 (up to 8GT/s)
- CPU offload of transport operations
- Application offload
- GPU communication acceleration
- Precision Clock Synchronization
- End-to-end QoS and congestion control
- Hardware-based I/O virtualization
- Ethernet encapsulation (EoIB)
- RoHS-R6
Part No. | Description | Dimensions
MCX353A-QCBT | Single QDR 40Gb/s or 10GbE | 14.2cm x 5.2cm
MCX354A-QCBT | Dual QDR 40Gb/s or 10GbE | 14.2cm x 6.9cm
MCX353A-FCBT | Single FDR 56Gb/s or 40/56GbE | 14.2cm x 5.2cm
MCX354A-FCBT | Dual FDR 56Gb/s or 40/56GbE | 14.2cm x 6.9cm
Connect-IB™ Single/Dual-Port InfiniBand Host Channel Adapter Cards
Connect-IB adapter cards provide the highest-performing and most scalable interconnect solution for server and storage systems. High-Performance Computing, Web 2.0, Cloud, Big Data, Financial Services, Virtualized Data Centers, and Storage applications will achieve significant performance improvements, resulting in reduced completion time and lower cost per operation.
- Greater than 100Gb/s over InfiniBand
- Greater than 130M messages/sec
- 1us MPI ping latency
- PCI Express 3.0 x16
- CPU offload of transport operations
- Application offload
- GPU communication acceleration
- End-to-end internal data protection
- End-to-end QoS and congestion control
- Hardware-based I/O virtualization
- RoHS-R6
Part No. | Description | PCI Express
MCB191A-FCAT | Single FDR 56Gb/s | 3.0 x8
MCB192A-FCAT | Dual FDR 56Gb/s | 3.0 x8
MCB193A-FCAT | Single FDR 56Gb/s | 3.0 x16
MCB194A-FCAT | Dual FDR 56Gb/s | 3.0 x16
Ultra-High-Speed I/O Interconnect Systems