QLogic QLE7342-CK InfiniBand Adapter PCIe Dual-Port 2xQDR InfiniBand 40Gbps IBTA Low-Profile
Product Code: NW-QL-QLE7342CK
Manufacturer: QLogic
 On Sale 
List Price: $1,615 / Each
Your Price: $512 / Each

QLogic SAN Networking Products
PCIe Dual-Port InfiniBand Adapter QLE7342

Mfg. Part Number: QLogic QLE7342-CK, retail kit.
RoHS compliant

Features & Benefits:
High Performance Computing (HPC) solutions have long used InfiniBand networks to meet the needs of the most demanding applications and grand challenges. The QLE7342 is a dual-port 40Gbps (QDR) InfiniBand-to-PCI-Express® host bus adapter. Its highly integrated design delivers unprecedented levels of performance, making it an ideal solution for HPC applications that rely on low-latency direct memory access.
Quad data rate (QDR) InfiniBand delivers 40Gbps per port (4×10Gbps), providing the necessary bandwidth for high-throughput applications. With the highest message rate and lowest latency of any InfiniBand adapter, the QLE7342 provides superior HPC application performance.
The QLE7342’s advanced design needs no onboard firmware or external memory, which enhances not only performance but also reliability. The ASIC has ECC protection on all internal SRAMs and parity checking on all internal buses. Equally important, the stateless design is inherently more resilient to adapter and fabric failures because it minimizes reliance on per-connection state. Optional data scrambling optimizes data patterns on the link, which in turn minimizes the bit-error rate.
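To give a feel for the additive (XOR) scrambling idea described above, here is a toy Python sketch built on a 16-bit Fibonacci LFSR. This is illustrative only: the polynomial, seed, and register width below are arbitrary choices, not the QLE7342's actual line-coding parameters. The key property it demonstrates is that scrambling with a shared keystream is its own inverse, so the receiver recovers the original bits while long runs of identical bits are broken up on the wire.

```python
def lfsr_keystream(seed: int, nbytes: int) -> bytes:
    """Generate a keystream from a 16-bit Fibonacci LFSR.

    Taps at bits 16, 14, 13, 11 (a maximal-length polynomial chosen for
    illustration; a real link scrambler's polynomial would differ).
    """
    state = seed & 0xFFFF
    out = bytearray()
    for _ in range(nbytes):
        byte = 0
        for _ in range(8):
            # Feedback bit from the tapped stages.
            fb = ((state >> 0) ^ (state >> 2) ^ (state >> 3) ^ (state >> 5)) & 1
            # Shift the low bit out into the keystream byte.
            byte = (byte << 1) | (state & 1)
            state = (state >> 1) | (fb << 15)
        out.append(byte)
    return bytes(out)


def scramble(data: bytes, seed: int = 0xACE1) -> bytes:
    """XOR data with the keystream; applying it twice restores the input."""
    ks = lfsr_keystream(seed, len(data))
    return bytes(d ^ k for d, k in zip(data, ks))
```

For example, a payload of all-zero bytes (a worst case for DC balance) comes back out of `scramble` as pseudo-random bytes, and `scramble(scramble(payload))` returns the original payload unchanged.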
•  40Gbps InfiniBand interface
•  3400MBps unidirectional throughput
•  30M messages processed per second (noncoalesced)
•  1.0 microsecond latency that remains low as the fabric is scaled
•  Multiple virtual lanes (VLs) for unique Quality of Service (QoS) levels per lane over the same physical port
•  TrueScale™ architecture, with MSI-X interrupt handling, is optimized for multi-core compute nodes
•  Operates without external memory
•  Optional data scrambling in InfiniBand link
•  Complies with InfiniBand Trade Association (IBTA) v1.2 standard
•  Supports OpenFabrics Alliance software distributions
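The headline figures above can be sanity-checked with back-of-the-envelope arithmetic. QDR InfiniBand signals at 10Gbps per lane over a 4X link and uses 8b/10b encoding, so 40Gbps of signaling carries 32Gbps of payload bits, or 4000MBps; the spec sheet's 3400MBps reflects additional PCIe and protocol overhead, which this sketch does not model:

```python
# QDR InfiniBand link arithmetic (per port).
lanes = 4                      # 4X link width
signal_rate_gbps = 10.0        # QDR: 10Gbps signaling per lane
encoding_efficiency = 8 / 10   # SDR/DDR/QDR links use 8b/10b encoding

raw_gbps = lanes * signal_rate_gbps         # 40Gbps headline signaling rate
data_gbps = raw_gbps * encoding_efficiency  # 32Gbps of payload bits
payload_mbps = data_gbps * 1000 / 8         # 4000MBps theoretical ceiling

print(f"Signaling rate  : {raw_gbps:.0f} Gbps")
print(f"Data rate       : {data_gbps:.0f} Gbps")
print(f"Payload ceiling : {payload_mbps:.0f} MBps "
      "(spec sheet: 3400 MBps after PCIe/protocol overhead)")
```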

Specifications:
•    Host Bus Interface Specifications
Bus interface:
PCI Express Gen2 x8
Device type:
End point
Advanced interrupts:
MSI-X, INTx
Compliance:
PCI Hot Plug Specification revision 1.0
PCI Bus Power Management Interface Specification revision 1.2
IBTA version 1.2
•    InfiniBand Interfaces and Specifications
Data rate:
40/20/10Gbps auto-negotiation
Virtual lanes:
Configurable for one, two, four, and eight VLs
–  2KB MTU, or
–  4KB MTU (single InfiniBand port)
MTU:
All standard InfiniBand MTUs including 4KB
Interfaces:
Supports quad small form factor pluggable (QSFP) optical and copper cable specifications
Supports CX4/microGigaCN specifications
•  Physical Specifications
Ports:
Two QDR 4X InfiniBand
PCI Express Card:
Low profile (4.83" × 2.71")
Brackets:
Standard bracket, 1.84cm × 12.08cm (0.73" × 4.76")
Low profile bracket, 1.84cm × 8.01cm (0.73" × 3.15")
Link status LED indicators
•  Environment and Equipment Specifications
Power consumption:
Typical 6.2W
Temperature:
Operating: 10°C to 55°C (estimated)
Storage: -40°C to 70°C (estimated)
Humidity:
Operating: 10% to 95% (estimated)
Non-operating: 5% to 100% (estimated)
Heatsink:
None
•  Safety Approvals
US/Canada:
UL; CSA/UL 60950-1; CB Scheme IEC 60950-1
Europe:
TÜV: EN 60950:2001+A11

Tools and Utilities
•  Host driver/upper level protocol (ULP) support
OpenFabrics Alliance
QLogic SRP
QLogic VNIC
Performance scaled messaging (PSM)
MPI acceleration stack
SHMEM
FastFabric™ tools
•  MPI support
MVAPICH2, MPICH2, Open MPI, QLogic MPI, HP-MPI, Platform (Scali) MPI, Intel MPI
•  Operating systems
Red Hat®
SUSE®
CentOS
Scientific Linux®
 
