Software extends cluster capabilities on IBM eServer xSeries.

Press Release Summary:



Suited for parallel and serial data-access applications on IBM eServer xSeries® clusters of servers, General Parallel File System (GPFS) for Linux delivers a scalable file system on an open, Intel(TM)-based server platform. It supports an interoperable environment with GPFS for AIX, an expanded set of hardware and Linux distributions, and standard UNIX file system interfaces. It also supports concurrent reads and writes from multiple nodes and offers failover support to minimize single points of failure.



Original Press Release:



IBM General Parallel File System for Linux on xSeries Extends Your Cluster Capabilities in Tandem with GPFS for AIX



At a glance

Ideal for parallel and serial applications that need to be highly available and require fast, scalable access to large amounts of file data, GPFS for Linux delivers a robust, scalable file system on an open, Intel(TM)-based server platform.

It is designed to provide:

o Support for an interoperable environment with GPFS for AIX

o Support for an expanded set of hardware and Linux distributions

o Various scalability and performance enhancements

o Support for standard UNIX file system interfaces

o Concurrent reads and writes from multiple nodes

o Failover support to minimize single points of failure

For ordering, contact:

Your IBM representative, an IBM Business Partner, or IBM Americas Call Centers at 800-IBM-CALL (Reference: YE001).

Overview

A new version of IBM General Parallel File System (GPFS) for Linux offers several usability enhancements when running on IBM xSeries® clusters of servers, including the Cluster 1350.

GPFS is designed to allow users shared access to files that may span multiple disk drives on multiple nodes. It furnishes many of the standard UNIX® file system interfaces, allowing most applications to execute without modification or recompilation. It is designed to offer access to files from any node in the system and to provide high-performance file I/O to parallel jobs running on multiple nodes, or to serial applications that are scheduled based on processor availability.
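
Because GPFS presents standard UNIX file system interfaces, ordinary application I/O needs no changes to run against a GPFS mount. The sketch below illustrates this with plain POSIX-style calls; the /tmp path merely stands in for a GPFS path such as /gpfs/fs1 (an illustrative name, not from this announcement).

```python
# A minimal sketch: the same open/write/seek/read/close sequence an
# unmodified application issues works identically on a local disk or a
# GPFS file system, because GPFS exposes the standard interfaces.
import os

def roundtrip(path: str, payload: bytes) -> bytes:
    # Plain POSIX-style calls via the os module.
    fd = os.open(path, os.O_CREAT | os.O_RDWR | os.O_TRUNC, 0o644)
    try:
        os.write(fd, payload)
        os.lseek(fd, 0, os.SEEK_SET)
        return os.read(fd, len(payload))
    finally:
        os.close(fd)
        os.unlink(path)

if __name__ == "__main__":
    # On a GPFS cluster, path could be e.g. /gpfs/fs1/demo.dat (assumed name).
    assert roundtrip("/tmp/gpfs_posix_demo.dat", b"unchanged application") \
        == b"unchanged application"
```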

GPFS for Linux on xSeries, in concert with the GPFS for AIX® 5L licensed product, may form an interoperable GPFS cluster. Additional enhancements to GPFS include:

o Extended access control list entries for a file or directory

o Creation of a single-node nodeset

o Support for the Data Management Application Programming Interface (DMAPI)

o Ability to create a logical copy, or "snapshot," of a GPFS file system

o Ability to designate a subset of nodes to be used for the calculation of the node quorum

o New buffer pool management code designed to enhance overall performance

o Supported scaling limit of 512 nodes

Note: GPFS for Linux, V1 will be withdrawn from marketing effective April 16, 2004.

Key prerequisites

IBM xSeries servers

One of these Linux distributions:

SuSE Linux Enterprise Server 8

Red Hat Linux 8

Red Hat Linux 9

Red Hat Enterprise Linux 3 (AS, ES, or WS)

Planned availability date

December 19, 2003

Description

GPFS for Linux is designed to provide file system services to parallel and serial applications running in the Red Hat or SuSE Linux operating environment. Serial applications can be dynamically assigned to processors based on utilization and may achieve high-performance access to their data from wherever they run.

Using GPFS to store and retrieve files allows multiple processes or applications on all nodes in the nodeset simultaneous access to the same file using standard file system calls, while maintaining a high level of control over all file system operations. GPFS can increase the aggregate bandwidth of the file system by spreading reads and writes across multiple disks. Allowing concurrent reads and writes from multiple nodes is key to parallel processing.
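
A sketch of the access pattern this enables: several writers updating disjoint regions of one shared file through standard system calls. On a GPFS cluster the writers could be processes on different nodes; here, as a stand-in, they are local processes and /tmp takes the place of a GPFS mount (both are illustrative assumptions, not details from this announcement).

```python
# Concurrent writers, each owning a disjoint byte range of one shared file.
import os
from multiprocessing import Process

PATH = "/tmp/gpfs_stripe_demo.dat"  # stand-in for a GPFS path
BLOCK = 16  # bytes each writer owns

def writer(rank: int) -> None:
    # Each worker opens the shared file independently and pwrite()s only
    # its own region, so no write overlaps another writer's range.
    fd = os.open(PATH, os.O_WRONLY)
    os.pwrite(fd, bytes([rank]) * BLOCK, rank * BLOCK)
    os.close(fd)

def run(nworkers: int = 4) -> bytes:
    # Pre-size the file, then let all workers write concurrently.
    with open(PATH, "wb") as f:
        f.truncate(nworkers * BLOCK)
    procs = [Process(target=writer, args=(r,)) for r in range(nworkers)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    with open(PATH, "rb") as f:
        return f.read()

if __name__ == "__main__":
    data = run(4)
    # Every region holds exactly the bytes its writer produced.
    assert data == b"".join(bytes([r]) * BLOCK for r in range(4))
```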

The traditional file access modes of read (r), write (w), and execute (x) have been extended to include a fourth access mode, control (c). This mode can be used to specify who may manage the access control list for a file, beyond the file owner and the root user.

GPFS provides the ability to create a logical copy, or "snapshot," of an entire GPFS file system at a single point in time. This snapshot is designed to allow a backup or mirror application to run concurrently with user updates and still obtain a consistent copy of the file system as of the time it was created.

GPFS for Linux now supports the Data Management API. A DMAPI client application, such as Tivoli® Storage Manager, can exploit DMAPI support in GPFS to provide file system backup and management features such as hierarchical storage management.

GPFS supports very large file systems. The maximum tested file system size is 75 TB. The maximum theoretical file system size possible for GPFS based on architectural parameters is 1 petabyte. File systems up to the maximum tested size are supported, but support for larger file systems may be provided through a special bid process. Multilevel indirect block support allows file sizes up to the largest tested GPFS file system size (minus space required for metadata). System control of these values allows effective caching of i-nodes and may improve the performance of some applications.

GPFS supports an interoperable environment with GPFS for AIX. It is now possible to mix AIX and Linux nodes within the same cluster and provide access to shared file systems. This can enable existing GPFS for AIX customers to leverage their investment by attaching Linux nodes to the existing AIX clusters, such as SP(TM) systems.

Trademarks

SP is a trademark of International Business Machines Corporation in the United States, other countries, or both.

The e-business logo, xSeries, AIX, and Tivoli are registered trademarks of International Business Machines Corporation in the United States, other countries, or both.

Intel is a trademark of Intel Corporation.

UNIX is a registered trademark of The Open Group in the United States and other countries.

Other company, product, and service names may be trademarks or service marks of others.
