Press Release Summary:
- Provides a paradigm for achieving both compute flexibility and massive parallelism without the synchronization overhead of traditional GPU and CPU architectures
- Supports both traditional and new machine learning models, and is currently in operation on both x86 and non-x86 systems
- Designed for the performance requirements of computer vision, machine learning and other AI-related workloads
Original Press Release:
Groq Announces World's First Architecture Capable of 1,000,000,000,000,000 Operations per Second on a Single Chip
- Signals the shift from a transistor game to an architecture game
MOUNTAIN VIEW, California, Nov. 14, 2019 /PRNewswire/ -- Groq, the fast-growing start-up and inventor of the Tensor Streaming Processor (TSP) architecture and a new class of compute, today announced that its TSP architecture is capable of 1 PetaOp/s performance in a single-chip implementation. The Groq architecture is the first in the world to achieve this level of performance, which is equivalent to one quadrillion operations per second, or 1e15 ops/s. Groq's architecture is also capable of up to 250 trillion floating-point operations per second (FLOPS).
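As a quick sanity check on the figures quoted above (a minimal sketch; the constants come directly from this release, and the comparison in the last line is simple arithmetic, not a claim from Groq):

```python
# 1 PetaOp/s = one quadrillion operations per second = 1e15 ops/s,
# the "1,000,000,000,000,000 Operations per Second" of the headline.
PETAOP_PER_S = 10**15
assert PETAOP_PER_S == 1_000_000_000_000_000

# The separately stated floating-point peak: 250 trillion FLOPS.
FLOAT_PEAK = 250 * 10**12

# The floating-point figure is one quarter of the 1 PetaOp/s figure.
assert FLOAT_PEAK / PETAOP_PER_S == 0.25
```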
"We are excited for the industry and our customers," said Jonathan Ross, Groq's co-founder and CEO. "Top GPU companies have been telling customers that they'd hoped to be able to deliver one PetaOp/s performance within the next few years; Groq is announcing it today, and in doing so setting a new performance standard. The Groq architecture is many multiples faster than anything else available for inference, in terms of both low latency and inferences per second. Our customer interactions confirm that. We had first silicon back, first-day power-on, programs running in the first week, sampled to partners and customers in under six weeks, with A0 silicon going into production."
Inspired by a software-first mindset, Groq's TSP architecture provides a new paradigm for achieving both compute flexibility and massive parallelism without the synchronization overhead of traditional GPU and CPU architectures. Groq's architecture can support both traditional and new machine learning models, and is currently in operation at customer sites in both x86 and non-x86 systems.
Groq's new, simpler processing architecture is designed specifically for the performance requirements of computer vision, machine learning and other AI-related workloads. Execution planning happens in software, freeing up valuable silicon real estate otherwise dedicated to dynamic instruction execution. The tight control provided by this architecture provides deterministic processing that is especially valuable for applications where safety and accuracy are paramount. Compared to complex traditional architectures based on CPUs, GPUs and FPGAs, Groq's chip also streamlines qualification and deployment, enabling customers to simply and quickly implement scalable, high performance-per-watt systems.
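The idea of moving execution planning into software can be illustrated with a small, purely hypothetical sketch (the function names and toy program below are illustrative and do not reflect Groq's actual toolchain): the compiler fixes the full instruction order ahead of time, so the runtime simply steps through a precomputed schedule with no dynamic dispatch logic, which is what makes execution deterministic.

```python
# Hypothetical illustration of software-scheduled (static) execution:
# the instruction order is decided entirely at compile time, so the
# runtime needs no dynamic dispatch, reordering, or synchronization.

def compile_schedule(program):
    """Flatten a program into a fixed, ordered list of (op, args) steps."""
    return list(program)  # ordering is frozen here, before execution

def run(schedule, state):
    """Execute a precomputed schedule deterministically, step by step."""
    for op, args in schedule:
        state = op(state, *args)
    return state

# Toy "program": scale an accumulator, then add a bias.
program = [
    (lambda s, k: s * k, (2,)),
    (lambda s, b: s + b, (3,)),
]

schedule = compile_schedule(program)
result = run(schedule, 5)  # (5 * 2) + 3 = 13
```

Because the schedule is identical on every run, the output (and the timing of each step) is reproducible, which is the property the release highlights for safety-critical applications.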
"Groq's solution is ideal for deep learning inference processing for a wide range of applications," said Dennis Abts, Chief Architect at Groq, "but even beyond that massive opportunity, the Groq solution is designed for a broad class of workloads. Its performance, coupled with its simplicity, makes it an ideal platform for any high-performance, data- or compute-intensive workload."
For more information about the new Groq architecture, download the white paper, "Tensor Streaming Architecture Delivers Unmatched Performance for Compute-Intensive Workloads."
Headquartered in Mountain View, CA, Groq delivers industry-leading performance, accuracy and sub-millisecond latency with efficient, software-driven solutions for compute-intensive applications. Groq redefines compute by focusing on key technology innovations: software-defined compute, silicon innovation and developer velocity. For more information, visit: https://groq.com/
CONTACT: Sander Arts: firstname.lastname@example.org, +1-408-839-9780, Rowland Harding: email@example.com, +1-347-200-8675