Switch Module is designed for on-demand computing.

Press Release Summary:



Suited for the eServer BladeCenter(TM), the Topspin IB Switch Module delivers connectivity between InfiniBand connected internal BladeCenter server blades, additional BladeCenter chassis, stand-alone servers, and external gateways. Each switch module contains one 10 Gbps 4X port and one 30 Gbps 12X port. By providing connectivity to Ethernet LANs and Fibre Channel SANs, the product enables users to centrally manage I/O and storage connectivity for one BladeCenter or an entire data center.



Original Press Release:



Topspin IB Switch Module for IBM eServer BladeCenter for High-Performance and on Demand Computing



Overview

The Topspin IB Switch Module for IBM eServer BladeCenter(TM) delivers low-latency, high-bandwidth connectivity (up to 75 Gbps1 full duplex) between InfiniBand connected internal BladeCenter server blades, additional BladeCenter chassis, stand-alone servers, and external gateways for connectivity to Ethernet LANs and Fibre Channel SANs.
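
Based on the port counts given in the "At a glance" section below, the 75 Gbps aggregate figure appears to be the sum of the external and internal link rates:

   4 external 4X ports x 10 Gbps   = 40 Gbps
   14 internal 1X ports x 2.5 Gbps = 35 Gbps
   Total aggregate bandwidth       = 75 Gbps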

For high-performance computing (HPC) customers, InfiniBand is recognized as an industry-standard, high-performance cluster fabric. Together, the Topspin IB Switch Module for IBM eServer® BladeCenter and the Topspin IB Host Channel Adapter Expansion Card for IBM eServer BladeCenter provide a complete low-latency, high-bandwidth interconnect solution for your scientific, technical, or financial BladeCenter cluster applications.

Clusters of one to five BladeCenter chassis can be built simply by interconnecting their InfiniBand Switch Modules directly. To scale to even larger clusters while minimizing switch hops and maximizing bisection bandwidth, external Topspin InfiniBand switches can be used to create larger tree, mesh, or even full bisection bandwidth Clos network topologies.
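
As a rough illustration (not taken from the announcement), the following C sketch shows one plausible way to read the one-to-five figure: with up to four external 4X ports per switch module, five chassis can be wired as a full mesh, one link between every pair, before external switches become necessary. The full-mesh wiring is an assumption for illustration; the four-port count comes from the "At a glance" section.

#include <stdio.h>

/*
 * Illustrative sketch only: assumes each chassis is linked directly to every
 * other chassis (full mesh) through the external 4X ports of its switch module.
 */
int main(void)
{
    const int external_ports = 4;              /* external 4X ports per switch module */

    for (int chassis = 2; chassis <= 6; chassis++) {
        int ports_used = chassis - 1;          /* full mesh: one link to every peer */
        printf("%d chassis: %d external ports used per module -> %s\n",
               chassis, ports_used,
               ports_used <= external_ports ? "direct interconnect fits"
                                            : "external switches needed");
    }
    return 0;
}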

Enterprise data centers often address the growing demands for processing power from bandwidth-hungry and latency-sensitive applications by "scaling out" with clusters of smaller, less expensive servers or blades as an alternative to the classic "scale-up" model with bigger servers. The scale-out model offers economic and flexibility benefits, but historically has been hindered by:

· The limited performance of legacy server interconnects
· The "I/O sprawl" of adapters, cables, and switch ports required to connect every server directly to its LANs and SANs

The BladeCenter, along with the Topspin IB Switch Module and the Topspin IB Host Channel Adapter Expansion Card, allows enterprise administrators to gain the benefits of scale-out while reducing these classic hindrances. Using standards-based InfiniBand interconnect technology, IBM delivers a solution that offers low-latency, high-bandwidth performance: clustered blades can perform as well as a single monolithic server at a fraction of the cost.

You can centrally manage I/O and storage connectivity for a single BladeCenter or an entire data center without touching a cable. The Topspin solution allows one or more external chassis to provide scalable LAN and SAN connectivity for an entire data center. Data center administrators can:

· Reduce the number of adapters, cables, and switch ports required
· Manage the addition, removal, or allocation of I/O or storage bandwidth centrally
· Adjust I/O connectivity on demand without downtime

With the major obstacles to scale-out alleviated, data center administrators gain an elegant, cost-effective solution for HPC, scale-out distributed databases, and on demand computing.

Key prerequisites

eServer BladeCenter chassis and processor blade

Planned availability date

December 10, 2004

At a glance

Topspin IB Switch Module features:

· Up to four external 4X (10 Gbps1) InfiniBand ports
· 14 internal 1X (2.5 Gbps) InfiniBand ports
· Up to 75 Gbps of aggregate bandwidth available per switch
· Dual switch configurations that provide additional bandwidth and redundancy

Topspin IB Host Channel Adapter Expansion Card (HCA) features:

· Dual 2.5 Gbps ports
· Server blade connection to one or two installed InfiniBand Switch Modules
· Through external gateways, provides Ethernet and Fibre Channel connectivity for each InfiniBand connected server blade

Other highlights:

· Provides an ideal clustering solution for:
- High-performance computing (HPC)
- Scale-out distributed databases
- On demand computing
· Leverages the low-latency, high-bandwidth characteristics and the Remote Direct Memory Access (RDMA) capabilities of the InfiniBand standard (an illustrative RDMA sketch follows this list)
· Scales out on demand by interconnecting many blades and many BladeCenters with InfiniBand into a high-performance data center
· Consolidates, virtualizes, and shares I/O and storage across an entire BladeCenter or a collection of BladeCenters for cost savings and high availability
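
As a rough illustration of the RDMA capability referenced above (not part of the original announcement), the C sketch below uses the libibverbs API commonly used to program InfiniBand HCAs. It registers a memory buffer with the adapter so that a remote peer can read or write it directly, which is the mechanism behind InfiniBand's low-latency, CPU-bypass data movement. The device selection and buffer size are arbitrary placeholders, and error checking is trimmed for brevity.

#include <stdio.h>
#include <stdlib.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num_devices = 0;
    struct ibv_device **devices = ibv_get_device_list(&num_devices);
    if (!devices || num_devices == 0) {
        fprintf(stderr, "no InfiniBand devices found\n");
        return 1;
    }

    /* Open the first HCA and allocate a protection domain for it. */
    struct ibv_context *ctx = ibv_open_device(devices[0]);
    struct ibv_pd *pd = ibv_alloc_pd(ctx);

    /* Register a 4 KB buffer so the HCA can access it directly (RDMA). */
    size_t length = 4096;
    void *buffer = malloc(length);
    struct ibv_mr *mr = ibv_reg_mr(pd, buffer, length,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_READ |
                                   IBV_ACCESS_REMOTE_WRITE);

    /* The rkey is the token a remote peer uses to target this buffer. */
    printf("registered %zu bytes, lkey=0x%x rkey=0x%x\n",
           length, mr->lkey, mr->rkey);

    ibv_dereg_mr(mr);
    free(buffer);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devices);
    return 0;
}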

Reference information

1 GHz and MHz denote the internal and/or external clock speed of the device only, not application performance. Many factors affect application performance.
