By Steve Neuner, Dan Higgins
May 18, 2004 12:00 AM EDT
Previous notions of limited scalability of Linux were abruptly changed last year by the introduction of the SGI Altix server, which scaled up to 64 processors within a single system image (SSI). Today, large-scale Linux servers with hundreds of processors are being deployed by a variety of businesses, universities, research centers, and governments around the world. NASA Ames Research Center, for example, continues to push the limits even further with their 512-processor system running a single instance of the Linux kernel.
This article examines the challenges in enabling large numbers of processors to work efficiently together to better support Linux system configurations for High-Performance Computing (HPC) environments. We will explain what scaling is, why good hardware design matters, and the kernel changes that make scaling Linux to 256 processors and beyond possible. Finally, we will show examples of how these highly scalable Linux systems are being used to solve complex real-world problems more efficiently.
Scaling Within HPC Environments
First, let's examine the issues behind system scalability. The term scaling refers to the ability to add hardware resources, such as processors or memory, to improve the capacity and performance of a system. Different scaling strategies suit different workloads. Enterprise business server workloads, for example, often consist of many individual, unrelated tasks and are typically deployed on smaller systems networked together. HPC workloads, on the other hand, are composed of scientific programs that require a high degree of complex processing, operate on large amounts of data, and have widely fluctuating resource requirements. Because of these demanding resource requirements, HPC programs are written and parallelized to break complex problems into pieces that can leverage system resources in parallel.
One approach used to solve HPC problems is horizontal scaling. With this approach, a program's threads run across a "cluster" of separate systems and communicate and exchange data over the network. This strategy works well for workloads that are embarrassingly parallel, where little communication is required between program threads as they perform their computations. However, when program threads need to interact while working on a common set of data, vertical scaling provides a more efficient approach. With vertical scaling, threads run on a large number of CPUs within one system, enabling processors to communicate more efficiently and to operate on and exchange data through global shared memory. Adding processors lets more threads run simultaneously, so more resources can be applied and shared to solve a problem. Vertical scaling also makes an HPC system an ideal central server for dynamically running several different HPC programs at once, whenever no single program needs all of the system's processors or a program has its own scaling limits. Whether the goal is greater processing capability for a single HPC program or higher throughput for several programs running at once, a properly designed vertically scaled system serves both the most demanding and the widest range of HPC applications.
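To make the shared-memory model concrete, here is a minimal sketch in C using OpenMP, the same threading model as the Cart3D code discussed later in this article. The array size and the computation are illustrative, not drawn from any application described here; the point is that every thread works on one array in a single address space, with no network messages.

/* Vertical-scaling sketch: OpenMP threads cooperate on one shared array
 * in a single address space. Compile with: gcc -fopenmp sum.c
 * N and the computation are illustrative. */
#include <omp.h>
#include <stdio.h>

#define N 10000000

static double data[N];

int main(void)
{
    double sum = 0.0;
    long i;

    /* Each thread fills and sums its share of the shared array; no
     * message passing is needed because all threads see the same memory. */
    #pragma omp parallel for reduction(+:sum)
    for (i = 0; i < N; i++) {
        data[i] = (double)i;
        sum += data[i];
    }

    printf("threads=%d sum=%g\n", omp_get_max_threads(), sum);
    return 0;
}

Adding processors to the system simply gives the OpenMP runtime more threads to spread the loop across; no changes to the program are required.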
Hardware Design and Scalability
Perfect scaling occurs when adding processors improves workload throughput by the same factor. For instance, a four-processor system should theoretically deliver four times the processing power of a single-processor system. In a multiprocessor system, it is critical to minimize the overhead of coordinating among multiple processors and sharing resources. We say "the system is scaling linearly at 90 percent up to 4 processors" if adding a second processor improves system performance by 1.8X, a third yields a 2.7X improvement, and a fourth yields 3.6X over a single CPU. As more processors are added, a point is often reached where performance no longer improves, or even decreases, due to hardware, kernel, or application software limitations. The goal is to enable multiple CPUs to scale as close to perfectly as possible, and to the highest possible CPU counts.
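The arithmetic behind such statements is simple: speedup is the single-CPU time divided by the N-CPU time, and scaling efficiency is speedup divided by the CPU count. The short C program below reproduces the 90 percent example with illustrative timings (a hypothetical 100-second single-CPU run), not measured results.

/* Scaling efficiency: speedup = t(1) / t(n); efficiency = speedup / n.
 * Timings are illustrative, chosen to match the 90% example in the text. */
#include <stdio.h>

int main(void)
{
    const double t1 = 100.0;                         /* 1-CPU time, seconds    */
    const double t[] = { 100.0,       100.0 / 1.8,   /* times for 1 and 2 CPUs */
                         100.0 / 2.7, 100.0 / 3.6 }; /* times for 3 and 4 CPUs */
    int n;

    for (n = 1; n <= 4; n++) {
        double speedup = t1 / t[n - 1];
        double efficiency = speedup / n;
        printf("%d CPUs: speedup %.2fX, efficiency %.0f%%\n",
               n, speedup, efficiency * 100.0);
    }
    return 0;
}

Run on these numbers, it reports 1.80X, 2.70X, and 3.60X speedups, that is, 90 percent efficiency at each step.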
One of the keys to obtaining maximum performance is a fast system bus with high bandwidth. The extreme processing power of hundreds of high-performance CPUs requires multiple fast paths for moving data between CPUs, caches, memory, and I/O. The single system bus found on symmetric multiprocessing systems can quickly become a bottleneck, since all CPU traffic crosses one common bus to access and transfer data. Much higher system performance is available with a non-uniform memory access (NUMA) architecture, since CPU accesses to memory within the same node stay local, distributing and reducing the load on the system interconnect (see Figure 1).
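Applications on Linux can exploit this locality explicitly. The sketch below uses libnuma (link with -lnuma), assuming the library is installed: numa_alloc_local() places a buffer in memory on the node where the calling thread is running, keeping that buffer's traffic off the global interconnect. The buffer size is illustrative.

/* NUMA-locality sketch using libnuma; compile with: gcc numa_local.c -lnuma
 * The 64 MB buffer size is illustrative. */
#include <numa.h>
#include <stdio.h>

int main(void)
{
    size_t size = 64UL * 1024 * 1024;   /* 64 MB, illustrative */
    double *buf;

    if (numa_available() < 0) {
        fprintf(stderr, "NUMA is not available on this system\n");
        return 1;
    }

    /* Allocate from the node this thread runs on, so reads and writes
     * stay on the local memory bus instead of crossing the interconnect. */
    buf = numa_alloc_local(size);
    if (buf == NULL) {
        fprintf(stderr, "numa_alloc_local failed\n");
        return 1;
    }

    printf("allocated %lu bytes locally; %d nodes available\n",
           (unsigned long)size, numa_max_node() + 1);

    numa_free(buf, size);
    return 0;
}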
A well-designed NUMA system will carefully account for CPU bus transfer speeds, the number of CPUs on any given bus, memory transfer speeds, multiple paths, and other factors to ensure that maximum overall bandwidth can be delivered throughout the system. Bisection bandwidth is a system's maximum capacity for transferring data between its two halves, as if an imaginary line were drawn through its middle. Figure 2 shows the system bus interconnect for an SGI Altix system designed for maximum overall bisection bandwidth and performance. In this diagram, each C-brick is a rack-mountable module containing four CPUs, and each R-brick is an SGI NUMAlink module that connects the C-bricks together to form a 128-processor SGI Altix system.
A computer architecture that is well balanced and built for maximum performance is essential to achieving good system scalability. If the hardware doesn't scale, neither will the Linux kernel or the user's application.
Linux Kernel Scalability
Linux was originally designed for smaller systems. Extending Linux to scale well on large systems means enlarging various counters, bit masks, and tables managed by the kernel, and then optimizing performance for high-end technical computing. Thanks to its solid design and wide community support, Linux has adapted well to large systems.
SGI kernel engineers found that, while they were clearly the first to run Linux on large system configurations of this kind, the Linux community had already done an excellent job addressing many of the issues related to Linux scalability. The types of changes made by SGI and others within the community include extending resource counter sizes, extending bit-mask sizes, and fixing commands and tools to support more than double-digit CPU numbers. Other changes included adding NUMA tool commands to help manage larger memory sizes more efficiently, increasing the limits on open file descriptors and file sizes, and reducing the boot-time console messages generated by each processor, since administering and troubleshooting would otherwise be unmanageable on systems with large CPU counts.
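As an illustration of the "more than double-digit CPU numbers" class of fix, consider CPU masks. A tool that hard-codes a small, fixed-size bitmask breaks once CPU counts reach the hundreds. The sketch below instead sizes the mask from the system's actual CPU count using glibc's dynamically sized CPU-set macros; note that this particular glibc interface postdates this article and is shown only to illustrate the idea.

/* Sizing a CPU affinity mask for systems with hundreds of CPUs,
 * using glibc's dynamic CPU_* macros (requires _GNU_SOURCE). */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    long ncpus = sysconf(_SC_NPROCESSORS_CONF);

    /* Allocate a mask sized for the real CPU count, e.g. 512. */
    cpu_set_t *mask = CPU_ALLOC(ncpus);
    size_t masksize = CPU_ALLOC_SIZE(ncpus);

    CPU_ZERO_S(masksize, mask);
    if (sched_getaffinity(0, masksize, mask) != 0) {
        perror("sched_getaffinity");
        CPU_FREE(mask);
        return 1;
    }

    printf("process may run on %d of %ld CPUs\n",
           CPU_COUNT_S(masksize, mask), ncpus);

    CPU_FREE(mask);
    return 0;
}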
Once the kernel was modified to accommodate the resources of a larger system, SGI engineers focused on getting Linux to scale and perform well. One way to find scaling problems for a 256-processor system is to turn up the stress knobs on a much larger configuration, such as a 512-processor system: problems that would otherwise be difficult to pinpoint become obvious. Developing and testing on these larger configurations enabled the SGI engineering team to find and fix many problems that affect multiprocessor systems of all sizes. SGI kernel engineers used several large configurations in this manner to run a variety of HPC applications, benchmarks, and custom tests to identify and diagnose Linux scaling problems. Figure 3 shows an early 512-processor SGI Altix system, ascender, which SGI kernel engineers used to find and fix scaling problems.
Such testing uncovered a number of areas to change for improving scalability. For example, some system-wide kernel variables were converted to per-processor variables. This reduces memory contention on shared data such as global kernel performance statistics, since the data can be maintained separately per CPU and combined only when needed for reporting. Other scaling improvements included finding and eliminating high-contention spinlocks, reducing spinlock contention in timer routines, optimizing process scheduling algorithms, converting the buffer cache to per-node data structures, improving translation lookaside buffer algorithms, improving the parallelism of page-fault and out-of-memory handling, and identifying and removing hot cache lines caused by false sharing.
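The per-processor variable technique is easy to see in miniature. The sketch below is ordinary user-space C, not actual kernel code: each CPU's counter is padded to its own cache line so concurrent updates never contend for a line, and the slots are summed only when a report is requested. NCPUS, the 64-byte line size, and the counter name are illustrative.

/* Per-processor counter sketch: one padded slot per CPU, combined only
 * for reporting. In the kernel, stat_inc() would run concurrently on
 * each CPU; the loop in main() merely simulates that. */
#include <stdio.h>

#define NCPUS     4    /* illustrative */
#define CACHELINE 64   /* illustrative line size, bytes */

/* Pad each counter to a full cache line so updates by different CPUs
 * never touch the same line (avoiding false sharing). */
struct percpu_counter {
    unsigned long count;
    char pad[CACHELINE - sizeof(unsigned long)];
};

static struct percpu_counter stats[NCPUS];

/* Fast path: each CPU increments only its own slot; no lock needed. */
static void stat_inc(int cpu)
{
    stats[cpu].count++;
}

/* Slow path: combine all slots when a report is requested. */
static unsigned long stat_read(void)
{
    unsigned long total = 0;
    int i;

    for (i = 0; i < NCPUS; i++)
        total += stats[i].count;
    return total;
}

int main(void)
{
    int cpu, i;

    for (cpu = 0; cpu < NCPUS; cpu++)
        for (i = 0; i < 1000; i++)
            stat_inc(cpu);

    printf("total events: %lu\n", stat_read());
    return 0;
}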
Bringing It All Together
A well-designed hardware system combined with the Linux optimizations described here lets hundreds of processors within a system access, use, and manipulate shared resources as efficiently as possible, so users' HPC programs can fully exploit the available system resources to do real work. The following three examples demonstrate the dramatic scaling and performance improvements being achieved with Linux on systems with processor counts of 128, 256, and beyond.
The first example (see Figure 4) shows how adding processors to a system can dramatically reduce the elapsed time for the bioinformatics HPC application HTC-BLAST (High Throughput Computing - Basic Local Alignment Search Tool) to process 10,000 queries with 4,111,677 total letters against a human genome database with 545 sequences and 2,866,452,029 total letters. In particular, notice that a system with 128 processors ran 1.77X faster than a system with 64 processors.
The next example (see Figure 5) shows the scaling and performance improvements achieved by a computational fluid dynamics application on an automobile external-flow problem with a model size of 100 million cells. In this case, the total elapsed time continues to decrease as the system configuration is extended from 64 to 256 processors.
Finally, the third example (see Figure 6) shows scaling results for an OpenMP code called Cart3D, developed and used extensively by NASA Ames Research Center to study flows around the space shuttle. NASA Ames, known for pushing the limits of computing in pursuit of fundamental science, achieved almost 90% scaling efficiency while running this HPC code on a 512-processor SGI Altix system. SGI and NASA engineers collaborated to identify and fix many Linux scaling issues, achieving a dramatic breakthrough in Linux system scalability. The NASA Ames system used for this work is shown in Figure 7.
Summary
The performance and capabilities of Linux for server environments have improved dramatically in just the last year. Scientists and others now routinely use single-system Linux configurations with hundreds of processors to solve complex problems faster and with greater ease than had been thought possible. Testing and developing on these large configurations has proven invaluable for improving the reliability and performance of Linux on configurations of all sizes. The synergy of these scaling improvements, combined with the open development model, has enabled Linux's continued advancement as a superior operating system choice for delivering performance and stability in all environments.