Platform Computing Announces Support for MapReduce


Company Brings History of Enterprise-Class Distributed Computing to "Big" Data Analytics with Support for Apache Hadoop MapReduce Programming Model

SAN JOSE, Calif., March 29, 2011 - Platform Computing, the leader in cluster, grid and cloud management software, today announced the company is now bringing enterprise-class distributed computing to business analytics applications that process "big" data using the Apache Hadoop MapReduce programming model. Building on more than 18 years of industry leadership in workload management for high performance computing (HPC) applications, Platform Computing's analytics solutions are a natural extension of the company's distributed computing experience and are built on its core technologies, Platform LSF and Platform Symphony.

By extending enterprise-class capabilities to MapReduce distributed workloads, customers gain the ability to scale shared applications to thousands of commodity server cores. The results include very high execution rates; IT manageability and monitoring, with workload policy controls for multiple lines of business, users and applications; and built-in high-availability services that ensure quality of service.

Supporting Quotes:

o "Platform Computing has been providing solutions for distributed
computing infrastructures that align well to the MapReduce paradigm,"
said Carl Olofson, Research Vice President, IDC. "Analysis of
unstructured data provides a competitive advantage to companies looking
to understand behaviors and trends. Dynamically defined data can require
very rapid analysis in bulk, and sensor data has volumes that swamp
conventional data centers. Customers need a robust solution to manage
and process their dynamically defined data, their sensor data, and their
unstructured data. MapReduce has proven to be a leading tool for
analyzing this data, but customers need enterprise-class solutions to
ensure manageability and scalability for these environments. Platform is
well positioned to provide distributed workload and enterprise-class
middleware to address these challenges."

o "MapReduce is an important technique for handling big data problems,
said Paul Kent, Vice President Platform R&D at Cary, NC-based SAS. "SAS
is looking forward to continuing our enterprise-class partnership with
Platform Computing as we integrate this technique into our Data
Management and Business Analytics software."

o "Many of Platform's customers already use our products to run complex
analytics and other distributed workload services," said Ken Hertzler,
Vice President, Product Management, Platform Computing. "Platform is
perfectly positioned to run enterprise-class distributed workloads for
MapReduce applications. Our products are architected from the outset to
service large-scale parallel processing on commodity infrastructures.
The solutions are also designed to work specifically with multiple
distributed file systems, avoiding customer lock-in and offering a
single, compatible, distributed computing workload solution throughout
the enterprise."

Key Points:

o As "big" data has increased, the need for analytics platforms that can
support distributed environments at high reliability, availability,
scale and manageability to perform business analytics in a timely manner
has increased. Today, companies need analytics that can perform at the
speed of business in order to make the best business decisions possible.

o For more than 18 years, Platform Computing has provided distributed
computing and workload management solutions to leading enterprises and
organizations. Platform's core distributed workload engines, found in
Platform LSF and Platform Symphony, are well suited to handling "big"
data because they provide the support needed to access, process and
analyze multiple data types quickly and efficiently, at large volume and
to enterprise-class standards.

o Platform Computing offers a distributed analytics platform that is fully
compatible with the Apache Hadoop MapReduce programming model (a brief
illustration of the programming model follows this list). This allows
current MapReduce applications to move easily to Platform's distributed
computing workload platform while also supporting multiple distributed
file systems.

o Platform Computing's solution also provides enterprise-class
capabilities to deliver scaled-out MapReduce workload distribution.
Designed to support more than 1,000 simultaneous applications, the
solution lets organizations dramatically increase server utilization
across up to 40,000 cores, resulting in a high return on investment.
Unlike less sophisticated solutions that lack support for multiple
analytic applications and scalable distributed workload engines,
Platform's distributed workload services are designed for high
scalability, fast performance and broad application compatibility
through a low-latency distributed architecture. MapReduce application
workloads can now run under powerful central management, meeting IT
SLAs with high reliability and consistency.
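
For context, the sketch below shows what a job written to the Apache Hadoop MapReduce programming model looks like: a minimal word-count job in Java using the standard org.apache.hadoop.mapreduce API. It is an illustrative example only, not Platform Computing code; the WordCount class name and the input/output paths are hypothetical, and the point is simply that such jobs target the MapReduce API itself rather than any particular scheduler or file system.

    // Minimal word-count job against the Apache Hadoop MapReduce API.
    // Illustrative sketch only; class name and paths are hypothetical.
    import java.io.IOException;
    import java.util.StringTokenizer;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class WordCount {

      // Map phase: emit (word, 1) for every token in the input split.
      public static class TokenizerMapper
          extends Mapper<Object, Text, Text, IntWritable> {
        private final static IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        public void map(Object key, Text value, Context context)
            throws IOException, InterruptedException {
          StringTokenizer itr = new StringTokenizer(value.toString());
          while (itr.hasMoreTokens()) {
            word.set(itr.nextToken());
            context.write(word, ONE);
          }
        }
      }

      // Reduce phase: sum the counts emitted for each word.
      public static class IntSumReducer
          extends Reducer<Text, IntWritable, Text, IntWritable> {
        private final IntWritable result = new IntWritable();

        @Override
        public void reduce(Text key, Iterable<IntWritable> values,
            Context context) throws IOException, InterruptedException {
          int sum = 0;
          for (IntWritable val : values) {
            sum += val.get();
          }
          result.set(sum);
          context.write(key, result);
        }
      }

      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class); // optional local aggregation
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));   // input dir
        FileOutputFormat.setOutputPath(job, new Path(args[1])); // output dir
        System.exit(job.waitForCompletion(true) ? 0 : 1);
      }
    }

A job like this is typically packaged as a jar and submitted with the standard hadoop jar command; compatibility at this API level is what allows such an application to run unchanged on a different workload manager or distributed file system.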

Resources:

o For more information on Platform Computing's MapReduce solutions, visit:
www.platform.com/mapreduce

About Platform Computing

Platform Computing is the leader in cluster, grid and cloud management software - serving more than 2,000 of the world's most demanding organizations. For 18 years, our workload and resource management solutions have delivered IT responsiveness and lower costs for enterprise and HPC applications. Platform has strategic relationships with Cray, Dell, Fujitsu, HP, IBM, Intel, Microsoft, Red Hat, and SAS. Visit www.platform.com.

SOURCE Platform Computing

CONTACT: North America, Lisa Melsted of Bateman Group, +1-415-503-1818 ext. 15, platform@bateman-group.com; or Europe, Amy Gooch of Hotwire, +44 (0) 20 7608 8354, platform@hotwirepr.com; or Asia Pacific, Lorraine Sutton of Platform Computing, +1-905-948-4247, lorraine@platform.com

Web Site: www.platform.com
