Scaling Linux to the Extreme

Superior performance and stability in all environments

Previous notions of limited scalability of Linux were abruptly changed last year by the introduction of the SGI Altix server, which scaled up to 64 processors within a single system image (SSI). Today, large-scale Linux servers with hundreds of processors are being deployed by a variety of businesses, universities, research centers, and governments around the world. NASA Ames Research Center, for example, continues to push the limits even further with their 512-processor system running a single instance of the Linux kernel.

This article examines the challenges in enabling large numbers of processors to work efficiently together to better support Linux system configurations for High-Performance Computing (HPC) environments. We will explain what scaling is, the importance of good hardware design, and the kernel changes that make scaling Linux on systems up to 256 processors and beyond possible. Finally, we will show examples of how these highly scalable Linux systems are being used to solve complex real-world problems more efficiently.

Scaling Within HPC Environments

First, let's examine the issues behind system scalability. The term scaling refers to the ability to add hardware resources, such as processors or memory, to improve the capacity and performance of a system. Different scaling strategies are used depending on the workload. Enterprise business server workloads, for example, often consist of many individual, unrelated tasks and are typically deployed on smaller systems networked together. HPC workloads, on the other hand, are composed of scientific programs that require a high degree of complex processing, operate on large amounts of data, and have widely fluctuating resource requirements. Because of these demanding resource requirements, HPC programs are written and parallelized to break complex problems into pieces that can leverage system resources in parallel.

One approach used to solve HPC problems is horizontal scaling. With this approach, a program's threads run across a "cluster" of separate systems and communicate and exchange data over the network. This strategy works well for workloads that are embarrassingly parallel, where little communication is required between program threads as they perform their computations. When program threads need to interact while working on a common set of data, however, vertical scaling provides a more efficient approach. With vertical scaling, threads run on a large number of CPUs within one system, enabling processors to communicate more efficiently and to operate on and exchange data through global shared memory. Adding processors to the system enables more threads to run simultaneously, so more resources can be applied and shared to solve a problem. Vertical scaling also provides an ideal environment for using an HPC system as a central server that dynamically runs several different HPC programs at the same time, when no single program needs all of the system's processors or when individual programs have their own scaling limitations. Whether the goal is greater processing capability for a single HPC program or higher throughput for several HPC programs running at once, a properly designed vertically scaled system provides a flexible and superior environment for both the most demanding and the widest range of HPC applications.
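To make the shared-memory model behind vertical scaling concrete, here is a minimal sketch (not from the article; the thread count, array size, and names are illustrative) in which several POSIX threads cooperate on one array held in global shared memory, each writing only its own partial result:

/* Illustrative sketch: threads within one system cooperate on a shared
 * array through global memory, the way vertically scaled HPC codes do.
 * Build: cc -O2 -pthread shared_sum.c -o shared_sum
 */
#include <pthread.h>
#include <stdio.h>

#define NTHREADS 8               /* illustrative thread count */
#define N        (1 << 22)       /* illustrative problem size */

static double data[N];           /* shared by all threads: no copies, no messages */
static double partial[NTHREADS]; /* one slot per thread, so no locking is needed */

static void *worker(void *arg)
{
    long id = (long)arg;
    long chunk = N / NTHREADS;
    long lo = id * chunk;
    long hi = (id == NTHREADS - 1) ? N : lo + chunk;
    double sum = 0.0;

    for (long i = lo; i < hi; i++)
        sum += data[i];
    partial[id] = sum;           /* each thread writes only its own slot */
    return NULL;
}

int main(void)
{
    pthread_t tid[NTHREADS];
    double total = 0.0;

    for (long i = 0; i < N; i++)
        data[i] = 1.0;

    for (long t = 0; t < NTHREADS; t++)
        pthread_create(&tid[t], NULL, worker, (void *)t);
    for (long t = 0; t < NTHREADS; t++)
        pthread_join(&tid[t], NULL);

    for (long t = 0; t < NTHREADS; t++)
        total += partial[t];     /* combine per-thread results once at the end */

    printf("sum = %.0f\n", total);
    return 0;
}

In a horizontally scaled cluster, the same exchange would require explicit messages over the network; here the data never leaves shared memory.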

Hardware Design and Scalability

Perfect scaling occurs when adding processors improves workload throughput by the same factor: a four-processor system should theoretically deliver four times the processing power of a single-processor system. In a multiprocessor system, it is critical to minimize the overhead involved in coordinating among multiple processors and utilizing shared resources. We say "the system is scaling linearly at 90 percent up to 4 processors" if adding a second processor improves system performance by 1.8X, adding a third processor yields a 2.7X improvement, and adding a fourth processor yields an improvement of 3.6X over a single CPU. As more processors are added to a system, a point is often reached where performance no longer improves, or even decreases, due to hardware, kernel, or application software limitations. The goal is to enable multiple CPUs to scale as close to perfectly as possible, and to the highest possible number of CPUs.
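The arithmetic behind such statements is simple: scaling efficiency is the measured speedup divided by the processor count. This small, purely illustrative C snippet reproduces the 90 percent figure from the numbers quoted above:

/* Illustrative only: scaling efficiency = speedup / CPU count
 * for the 2-, 3-, and 4-processor measurements quoted in the text.
 */
#include <stdio.h>

int main(void)
{
    int    cpus[]    = { 2, 3, 4 };
    double speedup[] = { 1.8, 2.7, 3.6 };   /* relative to one CPU */

    for (int i = 0; i < 3; i++)
        printf("%d CPUs: %.1fX speedup -> %.0f%% efficiency\n",
               cpus[i], speedup[i], 100.0 * speedup[i] / cpus[i]);
    return 0;
}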

One of the keys to obtaining maximum performance is a fast system bus with high bandwidth. The extreme processing power provided by hundreds of high-performance CPUs requires multiple fast paths for handling data between CPUs, caches, memory, and I/O. The system bus found on symmetric multiprocessing systems can quickly become a bottleneck since all traffic from the CPUs uses a single, common bus to access and transfer data. Much higher system performance is available using a non-uniform memory access (NUMA) architecture since CPU accesses to memory within the same node will distribute and reduce the load on the system interconnect (see Figure 1).
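Applications can cooperate with a NUMA design by keeping data on the node whose CPUs use it. The following sketch relies on the user-space libnuma library, which the article does not mention and which is assumed to be installed; it simply allocates a buffer on the node where the calling thread is currently running:

/* Hedged sketch: allocate memory on the local NUMA node via libnuma.
 * Assumes libnuma is installed; build with: cc numa_local.c -lnuma
 */
#define _GNU_SOURCE              /* for sched_getcpu() */
#include <numa.h>
#include <sched.h>
#include <stdio.h>

int main(void)
{
    if (numa_available() < 0) {
        fprintf(stderr, "NUMA is not available on this system\n");
        return 1;
    }

    int cpu  = sched_getcpu();          /* CPU this thread is running on */
    int node = numa_node_of_cpu(cpu);   /* NUMA node that owns this CPU */
    size_t len = 64 * 1024 * 1024;

    /* Local allocation keeps most memory traffic off the system
     * interconnect, which is the point of the NUMA design. */
    void *buf = numa_alloc_onnode(len, node);
    if (buf == NULL) {
        perror("numa_alloc_onnode");
        return 1;
    }

    printf("CPU %d on node %d: allocated %zu bytes locally\n", cpu, node, len);
    numa_free(buf, len);
    return 0;
}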

A well-designed NUMA system will carefully account for CPU bus transfer speeds, the number of CPUs on any given bus, memory transfer speeds, multiple paths, and other factors to ensure that maximum overall bandwidth can be delivered throughout the system. Bisectional bandwidth is a system's maximum capacity for transferring data between its two halves, as if an imaginary line were drawn through the middle of the machine. Figure 2 shows the system bus interconnect for an SGI Altix system designed for maximum overall bisectional bandwidth and performance. In this diagram, each C-brick is a rack-mountable module containing four CPUs, and each R-brick is an SGI NUMAlink module that connects the C-bricks together into a 128-processor SGI Altix system.

A computer architecture that is well balanced and built for maximum performance is essential to achieving good system scalability. If the hardware doesn't scale, neither will the Linux kernel or the user's application.

Linux Kernel Scalability

Linux was originally designed for smaller systems. Extending Linux to scale well on large systems involves extending the various sizes, counters, and tables managed by the kernel, and then optimizing performance for high-end technical computing. Thanks to its solid design and wide community support, Linux has adapted well to large systems.

SGI kernel engineers found that, while they were clearly the first to run Linux on large system configurations of this kind, the Linux community had already done an excellent job reworking and addressing many of the issues related to Linux scalability. The types of changes made by SGI and others within the community include extending resource counter sizes, extending bit-mask sizes, and fixing commands and tools to support more than double-digit CPU numbers. Other changes included adding NUMA tool commands to help manage larger memory sizes more efficiently, increasing the limits on open file descriptors and file sizes, and reducing the boot-time console messages generated by each processor, since administering and troubleshooting would otherwise be unmanageable on systems with large CPU counts.
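The bit-mask sizing problem is visible from user space as well. As a rough analogue of the kernel-side changes (not the kernel code itself, and using glibc macros that postdate the work described here), the sketch below builds a CPU affinity mask sized for however many processors the system actually has, rather than assuming a small fixed count:

/* Illustrative user-space analogue: build a CPU mask large enough for
 * a machine with hundreds of processors using glibc's dynamically
 * sized CPU_*_S macros instead of a fixed-size cpu_set_t.
 */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    long ncpus = sysconf(_SC_NPROCESSORS_CONF);  /* e.g., 256 or 512 */
    cpu_set_t *mask = CPU_ALLOC(ncpus);
    size_t setsize = CPU_ALLOC_SIZE(ncpus);

    if (mask == NULL) {
        perror("CPU_ALLOC");
        return 1;
    }

    CPU_ZERO_S(setsize, mask);
    for (long cpu = 0; cpu < ncpus; cpu++)
        CPU_SET_S(cpu, setsize, mask);           /* mark every CPU usable */

    printf("%ld CPUs configured, %d set in a %zu-byte mask\n",
           ncpus, CPU_COUNT_S(setsize, mask), setsize);

    CPU_FREE(mask);
    return 0;
}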

Once the kernel was modified to accommodate the resources of a larger system, SGI engineers focused on getting Linux to scale and perform well. One way to find scaling problems for a 256-processor system is to turn up the stress knobs on a much larger configuration, such as a 512-processor system; problems that would otherwise be difficult to pinpoint become obvious. Developing and testing on these larger configurations enabled the SGI engineering team to find and fix many problems that affect multiprocessor systems of all sizes. SGI kernel engineers used several large configurations in this manner to run a variety of HPC applications, benchmarks, and custom tests to identify and diagnose Linux scaling problems. Figure 3 shows an early 512-processor SGI Altix system, ascender, which SGI kernel engineers used to find and fix scaling problems.

Such testing uncovered a number of areas to change for improving scalability. For example, some system-wide kernel variables were converted to per-processor variables. This reduces memory contention on shared data such as global kernel performance statistics, since the data can be maintained separately per processor and combined only when needed for reporting. Other scaling improvements included finding and eliminating high-contention spinlocks, reducing spinlock contention in timer routines, optimizing process scheduling algorithms, changing the buffer cache to use per-node data structures, improving translation lookaside buffer algorithms, improving the parallelism of page fault and out-of-memory handling, and identifying and removing hot cache lines caused by false sharing.
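The per-processor-variable idea is easy to illustrate outside the kernel. The sketch below is a user-space analogue, not the kernel's implementation: each thread updates its own cache-line-aligned counter, so updates never contend for a lock or falsely share a cache line, and the counters are summed only when a report is needed:

/* User-space analogue of per-CPU kernel counters: each thread updates
 * only its own cache-line-aligned slot, and the slots are combined
 * only when a report is requested.
 * Build: cc -O2 -pthread percpu_counter.c -o percpu_counter
 */
#include <pthread.h>
#include <stdio.h>

#define NTHREADS   8
#define CACHELINE  64            /* assumed cache line size */
#define ITERATIONS 1000000

struct counter {
    unsigned long count;
    char pad[CACHELINE - sizeof(unsigned long)]; /* keep slots on separate lines */
};

static struct counter stats[NTHREADS] __attribute__((aligned(CACHELINE)));

static void *worker(void *arg)
{
    long id = (long)arg;

    for (long i = 0; i < ITERATIONS; i++)
        stats[id].count++;       /* private slot: no lock, no shared cache line */
    return NULL;
}

int main(void)
{
    pthread_t tid[NTHREADS];
    unsigned long total = 0;

    for (long t = 0; t < NTHREADS; t++)
        pthread_create(&tid[t], NULL, worker, (void *)t);
    for (long t = 0; t < NTHREADS; t++)
        pthread_join(&tid[t], NULL);

    for (long t = 0; t < NTHREADS; t++)
        total += stats[t].count; /* combine only for reporting */

    printf("total events: %lu\n", total);
    return 0;
}

The kernel applies the same principle to statistics that every CPU updates constantly but that are read only occasionally.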

Bringing It All Together

A well-designed hardware system combined with the Linux optimizations described here enables hundreds of processors within a system to access, use, and manipulate shared resources in the most efficient manner possible, enabling users' HPC programs to fully exploit the available system resources to do real work. The following three examples demonstrate the dramatic scaling and performance improvements being achieved with Linux on systems with processor counts of 128, 256, and larger.

The first example (see Figure 4) shows how adding processors to a system can dramatically reduce the elapsed time for the bioinformatics HPC application HTC-BLAST (High Throughput Computing - Basic Local Alignment Search Tool) to process 10,000 queries with 4,111,677 total letters against a human genome database with 545 sequences and 2,866,452,029 total letters. In particular, notice that a system with 128 processors ran 1.77X faster than a system with 64 processors, which corresponds to roughly 89 percent scaling efficiency across that doubling of processor count.

The next example (see Figure 5) shows the scaling and performance improvements achieved using a computational fluid dynamics application on an automobile external-flow problem with a model size of 100 million cells. In this case the total elapsed time continues to decrease as the system configuration is extended from 64 to 256 processors.

Finally, the third example (see Figure 6) shows scaling results for an OpenMP code called Cart3D, developed and used extensively by the NASA Ames Research Center to study flows for the space shuttle. NASA Ames Research Center, known for pushing the limits of computing in pursuit of fundamental science, achieved almost 90% scaling efficiency while running this HPC code on a 512-processor SGI Altix system. SGI and NASA engineers collaborated to identify and fix many Linux scaling issues to achieve a dramatic new breakthrough on system scalability with Linux. The NASA Ames Research Center's system used for this work is shown in Figure 7.
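Cart3D itself is not reproduced here, but the programming model it uses is easy to show. This toy OpenMP loop (illustrative only, with made-up data) uses the same mechanism an OpenMP application relies on to spread its work across all the processors of a single-system-image machine:

/* Minimal OpenMP sketch (illustrative, not Cart3D): a parallel loop
 * with a reduction splits the work across however many threads OpenMP
 * is given, e.g. OMP_NUM_THREADS=512 on a large SSI system.
 * Build: cc -O2 -fopenmp omp_sketch.c -o omp_sketch
 */
#include <omp.h>
#include <stdio.h>

#define N (1 << 22)

int main(void)
{
    static double cell[N];
    double total = 0.0;

    for (long i = 0; i < N; i++)
        cell[i] = 1.0e-3 * (i % 100);    /* fabricated stand-in data */

    /* Threads split the iterations; the reduction clause combines
     * each thread's private partial sum at the end of the loop. */
    #pragma omp parallel for reduction(+:total)
    for (long i = 0; i < N; i++)
        total += cell[i];

    printf("threads=%d  total=%f\n", omp_get_max_threads(), total);
    return 0;
}

The same binary can run on a handful of processors or on hundreds simply by changing the thread count, which is part of what makes large single-system-image machines attractive for such codes.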

Summary

The performance and capabilities of Linux for server environments have improved dramatically in just the last year. Scientists and others are now routinely using single-system Linux configurations with hundreds of processors to solve complex problems faster and with greater ease than had been thought possible. Testing and developing on these large configurations have proven invaluable for improving the reliability and performance of Linux on configurations of all sizes. The synergy of these scaling improvements combined with the open development model has enabled the continued advancement of Linux to become the superior operating system choice for delivering performance and stability in all environments.

More Stories By Steve Neuner

Steve Neuner is the engineering director for Linux at SGI and has been working on Linux for the past 5 years. He's been developing operating system software for system hardware manufacturers for the past 20 years.

More Stories By Dan Higgins

Dan Higgins has worked in the computer industry for 26 years in a variety of technical roles. Dan has been with SGI for the past 17 years and currently manages the Linux kernel scalability and RAS (Reliability, Availability and Serviceability) engineering team.
