Linux Cluster Interconnect delivers up to 3x lower latency.

Press Release Summary:

PathScale InfiniPath(TM) leverages three industry standards: HyperTransport, InfiniBand, and the AMD64 architecture. The product increases cluster performance, scalability, and throughput, enabling users to apply the flexibility of Linux clusters to parallel applications and to applications previously run on SMP computers. With MPI latency of 1.5 µsec and a bi-directional data rate of 1.8 gigabytes per second, the product allows HPC applications to scale to thousands of computing nodes.

Original Press Release:

SC2004: PathScale Introduces World's Lowest-Latency Cluster Interconnect; Standards-Based Technology Breakthrough Delivers Up to 3x Lower Latency for Linux Clusters

- PathScale InfiniPath Interconnect Offers the Industry's Lowest Latency for Commodity AMD Opteron Processor-Based Cluster Nodes -

PITTSBURGH, SC2004 SuperComputing Conference, Nov. 9 -- PathScale, developer of innovative software and hardware solutions that accelerate the performance and efficiency of Linux clusters, announces the PathScale InfiniPath Interconnect. Building on the success of the PathScale EKOPath(TM) compilers, PathScale InfiniPath(TM) is the industry's lowest-latency Linux cluster interconnect, delivering SMP-class performance to commodity-priced clustered computing. InfiniPath leverages three important industry standards -- HyperTransport, InfiniBand and the AMD64 architecture -- to maximize performance and make low-latency interconnects more affordable to a broader range of high-performance computing (HPC) users. This announcement is being made at the SC2004 SuperComputing Conference in Pittsburgh.

Both a data sheet and a white paper on InfiniPath are available at http://pathscale.com/infinipath.html

The PathScale InfiniPath Interconnect dramatically increases cluster performance, scalability and throughput, empowering HPC users to leverage the flexibility and cost effectiveness of Linux clusters both for parallel applications and for applications that previously ran on large, expensive, proprietary symmetric multiprocessing (SMP) computers. Migrating applications from these systems to commodity clusters requires an interconnect with comparable latency, and PathScale delivers the only low-latency Linux cluster interconnect that can support these application requirements.

With MPI latency of 1.5 microseconds and a bi-directional data rate of 1.8 gigabytes per second, PathScale's InfiniPath Interconnect offers the industry's lowest latency with extremely high bandwidth, delivering unmatched application scalability for Linux clusters. These characteristics combine to improve cluster efficiency and allow HPC applications to scale effectively to thousands of computing nodes. The performance of an InfiniPath-enabled cluster will continue to improve with the speed of the processor -- the faster the processor, the lower the latency of the InfiniPath Adapter and the greater the efficiency of the cluster.
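
For context, vendor latency figures of this kind are conventionally reported from a small-message ping-pong test between two MPI ranks, taking one-way latency as half the averaged round-trip time (bandwidth is measured the same way with large messages). The C sketch below illustrates that generic method under assumed defaults (two ranks, 1-byte messages); it is not PathScale's benchmark code.

    /* Minimal MPI ping-pong latency sketch (illustrative only; not
     * PathScale's benchmark). One-way latency is estimated as half the
     * average round-trip time for a 1-byte message between two ranks.
     * Build: mpicc -O2 pingpong.c -o pingpong
     * Run:   mpirun -np 2 ./pingpong */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        const int warmup = 100, iters = 10000;
        char byte = 0;        /* 1-byte payload keeps the test latency-bound */
        int rank;
        double t0 = 0.0;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        for (int i = 0; i < warmup + iters; i++) {
            if (i == warmup) {                  /* discard warm-up rounds */
                MPI_Barrier(MPI_COMM_WORLD);
                t0 = MPI_Wtime();
            }
            if (rank == 0) {                    /* rank 0: send, await echo */
                MPI_Send(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
            } else if (rank == 1) {             /* rank 1: echo it back */
                MPI_Recv(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                MPI_Send(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }

        if (rank == 0) {
            /* one-way latency = half the average round trip, in microseconds */
            double usec = (MPI_Wtime() - t0) / iters / 2.0 * 1e6;
            printf("estimated one-way MPI latency: %.2f microseconds\n", usec);
        }
        MPI_Finalize();
        return 0;
    }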

"Sandia's Advanced Simulation and Computing Program applications are very demanding on system interconnect performance and require extremely low latency for scalability," said Douglas Doerfler, Principal Member of Technical Staff at Sandia National Laboratories. "The PathScale EKOPath compilers on our AMD Opteron systems meet these high demands and we look forward to testing the PathScale InfiniPath Interconnect."

Greater efficiency also allows users to move complex computational jobs into larger cluster environments and obtain faster results. Examples of HPC applications that can benefit from increased cluster efficiency include computational fluid dynamics, reservoir simulation, weather forecasting, crash analysis, weapons simulation and molecular modeling. Key enterprise applications such as business intelligence and financial modeling can also realize substantial benefits from PathScale InfiniPath.

Adherence to Industry Standards

The PathScale InfiniPath Interconnect is built on widely adopted standards for cluster system components: the InfiniBand switched-fabric architecture, Linux, the Message Passing Interface (MPI), HyperTransport and the HTX connector, and the AMD64 processor architecture. Because MPI is a standard interface, existing MPI applications require no source changes to run over InfiniPath, as the sketch below illustrates.
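
As a deliberately generic illustration of that portability (a hypothetical example using only standard MPI calls, not PathScale code), note that nothing in the program below names an interconnect; the fabric is selected by the MPI library and runtime the program is linked against.

    /* Illustrative sketch: a program written to the MPI standard contains
     * no interconnect-specific code, so the same source can run over
     * InfiniPath, plain InfiniBand or Ethernet, depending on the MPI
     * library it is linked against. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's id */
        MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total process count */
        printf("rank %d of %d: the fabric is the MPI library's concern\n",
               rank, size);
        MPI_Finalize();
        return 0;
    }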

PathScale InfiniPath connects the nodes of AMD Opteron(TM) processor-based clusters via external InfiniBand 4X switches. InfiniPath has been tested with the leading InfiniBand switch suppliers, including Topspin, Voltaire, Mellanox and Infinicon; this ensures full interoperability and allows InfiniPath to be fully managed by these vendors' subnet management offerings.

The InfiniPath Interconnect attaches directly to a HyperTransport port on the Opteron processor. The InfiniPath ASIC can be placed directly onto a processor motherboard or implemented as an adapter card that plugs into an industry-standard HyperTransport HTX slot. Compared to existing high-speed interconnects, InfiniPath offers significantly lower latency and higher bandwidth at lower, commodity-like prices.

"The acceptance of AMD Opteron processor-based servers in HPC markets has been phenomenal, and the standards-based, high-performance InfiniPath interconnect will help that trend continue," said Ben Williams, vice president of enterprise and server/workstation business for AMD's Microprocessor Business Unit. "InfiniPath takes full advantage of the AMD Opteron processor's Direct Connect Architecture and HyperTransport technology."

Iwill, a leading manufacturer of motherboards for the AMD Opteron market, recently announced the Iwill DK8-HTX(TM) dual-processor AMD Opteron motherboard. Iwill is the industry's first motherboard manufacturer to support the new industry-standard HyperTransport HTX slot, and the DK8-HTX is the first motherboard to support the PathScale InfiniPath HTX Adapter.

Initial Partners Already Engaged

PathScale InfiniPath offers significant opportunities for resellers, integrators and OEMs, enabling them to further differentiate their products. Many of the leading AMD system OEMs, including Linux Networx, Microway, Angstrom, Appro, GridCore, Dalco, Hard Data and TeamHPC, have committed to resell InfiniPath to their HPC customers who require ultra-low latency. InfiniPath can be deployed as either an adapter card or on the motherboard, offering PathScale partners even greater flexibility and potential differentiation. PathScale has worked with AMD system OEMs over the last six months, bidding PathScale InfiniPath in many large-scale cluster proposals that are planned for deployment in 2005.

"Latency has been the last great barrier to real application scalability on commodity Linux clusters," said Scott Metcalf, CEO of PathScale. "InfiniPath removes that barrier and accelerates the migration away from expensive, large-scale SMP solutions for HPC applications."

Availability

PathScale InfiniPath is being demonstrated at the SC2004 SuperComputing Conference in Pittsburgh, PA, this week. InfiniPath can be seen in PathScale booth #1849 running on Microway servers, and in AMD booth #1841 with Iwill DK8-HTX motherboards. InfiniPath will be generally available in the second quarter of 2005, with engineering samples available earlier to select OEMs.

About PathScale

Based in Sunnyvale, California, PathScale develops innovative software and hardware technologies that substantially increase the performance and efficiency of Linux clusters, the next significant wave in high-end computing. Applications that benefit from PathScale's technologies include seismic processing, complex physical modeling, EDA simulation, molecular modeling, biosciences, econometric modeling, computational chemistry, computational fluid dynamics, finite element analysis, weather modeling, resource optimization, decision support and data mining. PathScale's investors include Adams Street Partners, Charles River Ventures, Enterprise Partners Venture Capital, CMEA Ventures, ChevronTexaco Technology Ventures and the Dow Employees Pension Plan. For more details, visit pathscale.com; send email to sales@pathscale.com or telephone 1-408-746-9100.
