Is the Facebook DC Architecture right for you?

A few weeks ago Facebook announced its new datacenter architecture in a post on its network engineering blog. Facebook is one of the few large web-scale companies that is fairly open about its network architecture and designs, which gives the rest of us an opportunity to see how a network can be scaled, even though that scale is well beyond what most will need in the foreseeable future, if not forever.

In the post, Alexey walks through some of the thought process behind the architecture, which is ultimately the most important part of any architecture and design. Too often we simply build whatever seems to be popular or common, or whatever is mandated or pushed by a specific vendor. The network, however, is a product, a deliverable, and it has requirements like just about anything else we produce.

Facebook and the other large web properties operate at a different order of magnitude than almost everyone else, but their requirements should sound pretty familiar to many:

  • Intra DC traffic is significantly higher than inter DC or DC to Internet traffic
    • “machine to machine traffic – is several orders of magnitude larger than what goes out to the Internet”
  • Build for growth, the network is not a static entity
    • “ability to move fast and support rapid growth is at the core of our infrastructure design philosophy”
  • Simple Design, easy to operate and maintain
    • “keep our networking infrastructure simple enough that small, highly efficient teams of engineers can manage it”
    • “Our goal is to make deploying and operating our networks easier and faster over time”

Anyone with a decent-sized datacenter infrastructure should recognize these same basic requirements in their own network needs.

With the requirements in hand (and a few more, I am sure), Facebook created clusters of racks with servers and supporting networking equipment and then built a hierarchy of network equipment on top. Each rack in a cluster contains a regular ToR switch with 4x40GbE uplinks to the first spine layer. While not explicitly stated, these ToRs likely support 48 to 56 server-side 10GbE ports (this could be as high as 80 when using 96-port switches). That makes a rack somewhere between 3:1 and 5:1 oversubscribed to the fabric.
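As a rough sanity check on that oversubscription math, here is a quick sketch in Python; the server-facing port counts are my assumptions, not figures from Facebook's post:

    # Rough oversubscription estimate for a ToR with 4 x 40GbE uplinks.
    # Server-facing port counts are assumptions for illustration only.
    UPLINK_GBPS = 4 * 40  # 160 Gbps toward the fabric per ToR

    for server_ports in (48, 56, 80):      # assumed 10GbE server-facing port counts
        downlink_gbps = server_ports * 10
        ratio = downlink_gbps / UPLINK_GBPS
        print(f"{server_ports} x 10GbE down -> {ratio:.1f}:1 oversubscribed")

    # 48 ports -> 3.0:1, 56 ports -> 3.5:1, 80 ports -> 5.0:1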

From each ToR switch, each of these 40GbE uplinks connects to a fabric switch. With 48 ToR switches in a cluster or pod, these fabric switches support 48x40GbE towards the ToR layer. As stated, these switches can support the same amount of bandwidth up to the next spine layer (I guess Facebook differentiates them in name by calling them fabric switches vs spine switches, even though the fabric switches act as the spine for the ToR switches).

This means that each of these pod spine switches needs to support up to 96x40GbE, which makes these mid-sized modular switches with an internal fabric. You cannot make a switch of that size without some form of internal fabric to connect multiple Ethernet ASICs to each other. With simplicity and ease of maintenance in mind, I am sure Facebook picked systems that have an internal Clos fabric built out of the same Ethernet ASICs used for the ToR switches. This also means there is not a very large amount of buffer memory available in the fabric and spine layers, contrary to what many believe is required (we are not among them). Similarly for latency, this is not a low-latency fabric by today's standards, which may be fine for Facebook's requirements. Server-to-server traffic between different server pods may take up to 11 Ethernet ASIC hops, some of which are not cut-through switched. This may add up to close to 10 microseconds.
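To put that latency remark in perspective, here is a back-of-the-envelope estimate; the per-hop numbers below are assumed values typical of merchant-silicon switches, not measurements from Facebook's fabric:

    # Back-of-the-envelope fabric latency for pod-to-pod traffic.
    # Per-hop latencies are assumptions, not measurements from Facebook's network.
    HOPS = 11                  # worst-case Ethernet ASIC hops between pods, as noted above
    CUT_THROUGH_US = 0.4       # assumed per-hop latency (cut-through), microseconds
    STORE_FORWARD_US = 1.2     # assumed per-hop latency (store-and-forward), microseconds

    low = HOPS * CUT_THROUGH_US
    high = HOPS * STORE_FORWARD_US
    print(f"pod-to-pod fabric latency: roughly {low:.1f} to {high:.1f} microseconds")
    # Lands in the same ballpark as the "close to 10 microseconds" estimate above.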

The spine plane that connects each of the clusters together is created using the same switch as the cluster spine. It has the ability to scale to essentially a few hundred pods. And that’s big. Bigger than 99% of the rest of the world will need.
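For a sense of what a few hundred pods means in server terms, a quick sketch; the servers-per-rack figure and the pod counts are assumptions for illustration:

    # Rough scale estimate for the full fabric.
    # Servers per rack and pod counts are assumptions for illustration.
    RACKS_PER_POD = 48       # ToR switches per pod, per the description above
    SERVERS_PER_RACK = 48    # assumed 10GbE-attached servers per rack

    for pods in (16, 64, 256):
        servers = pods * RACKS_PER_POD * SERVERS_PER_RACK
        print(f"{pods:4d} pods -> roughly {servers:,} servers")

    #   16 pods -> roughly 36,864 servers
    #   64 pods -> roughly 147,456 servers
    #  256 pods -> roughly 589,824 servers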

This design is very modular and can grow inside a pod and by attaching more pods together through the fabric switches. The challenge, however, is that the cabling is not trivial unless you get to start fresh and lay out enough fiber for the maximum configuration. Facebook has the luxury of regularly building new datacenters; most enterprises are adding to existing infrastructures, in existing buildings, where recabling is neither easy nor cheap. Growing as you go with this design only works if the cabling is provided for the maximum configuration. So while the network is designed for easy expansion and growth, the foundational physical infrastructure has to be planned and executed at maximum size.
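The cabling point is easy to underestimate. Here is a rough count of the inter-switch fiber runs at full build-out, using the pod dimensions described above and an assumed pod count:

    # Rough fiber-run count at maximum build-out.
    # Fabric-switch count per pod is implied by the 4 uplinks per ToR; the pod count is assumed.
    RACKS_PER_POD = 48                # ToR switches per pod
    UPLINKS_PER_TOR = 4               # one 40GbE uplink to each fabric switch in the pod
    FABRIC_SW_PER_POD = 4             # implied by the 4 uplinks per ToR
    SPINE_UPLINKS_PER_FABRIC_SW = 48  # "same amount of bandwidth" up as the 48 ToR-facing ports
    PODS = 128                        # assumed maximum build-out

    tor_to_fabric = PODS * RACKS_PER_POD * UPLINKS_PER_TOR                    # stays inside each pod
    fabric_to_spine = PODS * FABRIC_SW_PER_POD * SPINE_UPLINKS_PER_FABRIC_SW  # crosses the building
    print(f"ToR-to-fabric runs (local):      {tor_to_fabric:,}")
    print(f"fabric-to-spine runs (building): {fabric_to_spine:,}")
    # 24,576 of each at this assumed size; the long fabric-to-spine runs are the
    # ones that have to be pulled for the maximum configuration up front.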

Ultimately the Facebook design is a 3-tier hierarchical network, but the top 2 tiers act as a fabric for the ToR switches. Facebook decided to implement that fabric as its own spine-and-leaf network. Our solution to a similar set of requirements would build a Plexxi fabric connecting the ToR switches: the ToR switches would connect to only a few Plexxi switches (for redundancy), and the Plexxi switches connect to each other to provide a fully programmable fabric. A Plexxi fabric extends by simply adding more switches with only local cabling.

By using switches that are all built on the same underlying ASIC technology, there is a common set of limitations to worry about. It is known exactly how large each of the required tables is, and those can be carefully engineered. The BGP engineering portion of the Facebook design is not insignificant: the ASICs used are limited in some of their table sizes, which means the IP addressing schemes need to be carefully designed, again with the maximum size in mind.
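As a toy illustration of why the addressing plan has to be engineered against the table limits up front, consider a ToR's route table; the prefix counts and the FIB limit here are hypothetical, not the actual hardware figures:

    # Toy check of route-table usage against a hypothetical ASIC limit.
    # All numbers here are illustrative assumptions, not the actual hardware figures.
    FIB_LIMIT = 8_192     # hypothetical longest-prefix-match table size

    def fib_entries(pods, racks_per_pod, aggregate_per_pod):
        """Routes a ToR must carry: its local pod's rack prefixes plus either one
        aggregate per remote pod or every individual remote rack prefix."""
        remote = (pods - 1) * (1 if aggregate_per_pod else racks_per_pod)
        return racks_per_pod + remote

    for aggregate in (True, False):
        entries = fib_entries(pods=256, racks_per_pod=48, aggregate_per_pod=aggregate)
        verdict = "fits" if entries <= FIB_LIMIT else "does NOT fit"
        print(f"per-pod aggregation={aggregate}: {entries:,} routes -> {verdict} in a {FIB_LIMIT:,}-entry FIB")

    # With per-pod aggregates: 303 routes. Without them: 12,288 routes, already past
    # this (hypothetical) limit -- before adding host routes, VIPs or external prefixes.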

The network is engineered as a full L3 network; there is no L2 connectivity outside of a rack. For Facebook this works because they own every piece of their application suite. Like it or not, there are many (legacy) enterprise applications and services that either require L2 connectivity or are simpler to operate in an L2 environment.

I have not touched on a key aspect of the Facebook design: "distributed control with centralized override". This Facebook variation of SDN shares extremely similar foundational thinking with how we at Plexxi approach the programmability of the network. That will be a blog post in and of itself.

I am sure many will take the Facebook design as the new way to design datacenter networks. But please apply your own scaling, extensibility, and physical-limitation requirements. There are some rather large luxuries a company like Facebook can afford that most others cannot.


More Stories By Michael Bushong

The best marketing efforts leverage deep technology understanding with a highly approachable means of communicating. Plexxi's Vice President of Marketing Michael Bushong has acquired these skills having spent 12 years at Juniper Networks, where he led product management, product strategy and product marketing organizations for Juniper's flagship operating system, Junos. Michael spent the last several years at Juniper leading their SDN efforts across both service provider and enterprise markets. Prior to Juniper, Michael spent time at database supplier Sybase, and ASIC design tool companies Synopsys and Magma Design Automation. Michael's undergraduate work at the University of California Berkeley in advanced fluid mechanics and heat transfer lends new meaning to the marketing phrase "This isn't rocket science."
