
100 Years in the Movies: One Evening’s Web Performance

Why one company performed better during this year’s Super Bowl

Both Paramount and Universal celebrated their 100th anniversary last year, which is a long time to be in the movie business. Arguably, both have made some good, some great, and some bad movies. But, during this year's Super Bowl, Paramount showed Universal how to design a 'fast and furious' web site that stood up to the flood of visitors during and after the game.

This article discusses how Paramount was able to do it, and compares Universal's and Paramount's Super Bowl web site results, which shine a light on the key factors for successful web performance: fewer connections to fewer hosts requesting fewer, smaller objects produce a smaller page size, which has a positive impact on page response time.

To begin, Universal and Paramount are near equals when it comes to their web age. Jumping over to http://web.archive.org, I found Paramount launched its first site 16 years ago in 1997 and Universal's first site came online 15 years ago in 1998. With the same amount of experience on the Web, it's interesting to explore why one company performed better during this year's Super Bowl.

After analyzing web page performance for nearly six years, my experience tells me that the reasons some sites succeed and others don't fall into three general categories - corporate culture, resources, and experience and knowledge. But even today, with so much information available on web site performance fundamentals, I often see companies forgetting the basics.

Why Universal Was Not 'Fast and Furious'
Looking at Paramount and Universal's site performance for the period from 5 p.m. EST until 11 p.m. EST on Sunday, February 3 (Super Bowl Sunday), I noticed some big performance differences between the two sites. For starters, Paramount's homepage average response time was 966 milliseconds while Universal's was 11.727 seconds.

Comparison of the two sites clearly shows that the differences come down to web performance basics and the fundamental construction of a web page: connections, object count and type, page size, and hosts. The figures break down as follows:

Metric                   Paramount    Universal
Connections              9            41
Objects                  41†          121*
Page size (compressed)   1625KB       4995KB
Hosts                    3            27

†3 objects greater than 200KB

*11 objects greater than 200KB

The Fewer Connections the Better
Paramount designed its Super Bowl site using only nine connections, while Universal used 41.

The number of connections was a significant factor in Universal's poor response time, because more connections equated to more overhead and more bytes transferred. The tradeoffs can be significant, as Universal's response time shows.

After a DNS lookup resolves the IP address, the number of connections generally sets the pace for page loading. Even though modern browsers are capable of making between six and eight simultaneous connections to the same host, that doesn't mean you have to use them all. Opening TCP/IP connections takes time and resources, and the milliseconds of overhead that each one requires can quickly add up to seconds, especially on a big game night when massive web traffic is expected.
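To get a feel for that overhead, here is a minimal Python sketch (using placeholder host names, not either studio's actual domains) that times the DNS lookup and TCP handshake for a few hosts; every extra connection a page opens pays a cost like this at least once.

```python
import socket
import time

# Placeholder host names for illustration only; substitute the hosts
# a real page actually pulls resources from.
hosts = ["www.example.com", "example.org", "example.net"]

for host in hosts:
    start = time.perf_counter()
    ip = socket.gethostbyname(host)                      # DNS lookup
    dns_ms = (time.perf_counter() - start) * 1000

    start = time.perf_counter()
    with socket.create_connection((ip, 80), timeout=5):  # TCP handshake
        tcp_ms = (time.perf_counter() - start) * 1000

    print(f"{host}: DNS {dns_ms:.0f} ms, TCP connect {tcp_ms:.0f} ms")
```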

Identify Flying Objects
The amount of time spent making HTTP requests for all the objects can have a marked impact on page response time. Paramount's homepage contained only 41 objects, while Universal's homepage contained 121.

Once a connection is established, the objects will, presumably, just fly into your visitors' browsers, right? Not always.

What if those objects are large files and the servers are straining under increased load (as during a special event like the Super Bowl)? In Universal's case, 11 files tipped the scale at well over 200KB each (two were over 750KB each). Paramount, on the other hand, had only three files exceeding 200KB.
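One quick way to catch these heavyweights before game day is to scan a HAR export of the page load. The following Python sketch assumes you have saved such an export as page.har from the browser's developer tools; it simply flags every object whose content is larger than 200KB.

```python
import json

# Assumes a HAR export of the page load saved as "page.har"
# (for example, from the browser's developer tools Network panel).
with open("page.har") as f:
    entries = json.load(f)["log"]["entries"]

# Flag every object whose content is larger than 200KB.
heavy = [
    (e["request"]["url"], e["response"]["content"].get("size", 0))
    for e in entries
    if e["response"]["content"].get("size", 0) > 200 * 1024
]

print(f"{len(entries)} objects total, {len(heavy)} over 200KB")
for url, size in sorted(heavy, key=lambda pair: -pair[1]):
    print(f"  {size / 1024:.0f}KB  {url}")
```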

Size Matters
You've probably heard this once or twice: page size matters. Page size is calculated by totaling all the files that make up a web page (typically compressed and measured in KB).

Universal's homepage size came in at a whopping 4995KB, while Paramount's homepage came in at only 1625KB.
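Continuing with the same hypothetical HAR export, totaling the bytes actually received for each response gives the page weight; a rough sketch:

```python
import json

# Same assumed "page.har" export as in the earlier sketch.
with open("page.har") as f:
    entries = json.load(f)["log"]["entries"]

# bodySize is the number of response bytes actually received
# (compressed, if the server compressed them); it is -1 when
# unknown, so clamp those entries to zero.
total_kb = sum(max(e["response"].get("bodySize", 0), 0) for e in entries) / 1024
print(f"Page weight: {total_kb:.0f}KB across {len(entries)} objects")
```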

Typically, file size isn't too much of an issue during normal surfing, but Super Bowl Sunday is not a normal traffic day. Five pounds is heavier than two pounds, and it takes more effort to lift. The same concept is true for web sites: some are heavy in KB and others are comparatively light.

In this case, Universal's page was not as 'fast and furious' as Paramount's because it was 3370KB heavier. The Newtonian Law of the Internet states that heavy pages take longer to download than lighter ones, so long as the access lines are equal.
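A back-of-the-envelope calculation makes the point. Assuming, purely for illustration, a 10 Mbps access line and ignoring latency, connection setup, and parallel downloads, the raw transfer time alone already separates the two pages:

```python
# Back-of-the-envelope transfer time: page size divided by line speed.
# The 10 Mbps line is an assumption for illustration; real results also
# depend on latency, connection setup, and parallelism.
line_speed_mbps = 10
kb_per_second = line_speed_mbps * 1000 / 8  # 1250 KB/s

for name, size_kb in [("Paramount", 1625), ("Universal", 4995)]:
    seconds = size_kb / kb_per_second
    print(f"{name}: {size_kb}KB -> roughly {seconds:.1f}s of raw transfer")
```

At that assumed speed, Paramount's page needs roughly 1.3 seconds of raw transfer while Universal's needs about 4 seconds, before any of the connection overhead discussed above.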

Host Counts Count
Using the HTTP Archive Trends site (http://httparchive.org/trends.php#numDomains&maxDomainReqs), you can find information on many web site design trends. Two such trends that I find interesting are the average number of domains accessed across all websites and the maximum number of requests (Max Reqs) on the most used domain.

Comparing Paramount and Universal against the average for all websites, the difference between the two studio sites is very clear. Paramount designed a site that was well under the average for both the number of domains and Max Reqs on one domain, at three and 39, respectively, while Universal was well above the average, at 27 and 75.

Further, looking at the host count alone, Universal used nine times as many hosts across its homepage as Paramount did. A fast response time can be challenging to design for, and some compromises will be made, but a low host count seems to be an obvious tactic to follow.
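Tallying hosts is easy to automate. The sketch below uses placeholder URLs standing in for the resource list a HAR export would give you, and simply counts the unique hosts a page touches:

```python
from collections import Counter
from urllib.parse import urlparse

# Placeholder resource URLs for illustration; in practice, pull these
# from the HAR entries used in the earlier sketches.
resource_urls = [
    "https://www.example.com/index.html",
    "https://cdn1.example.net/hero.jpg",
    "https://cdn1.example.net/app.js",
    "https://ads.example.org/pixel.gif",
]

hosts = Counter(urlparse(url).netloc for url in resource_urls)
print(f"{len(hosts)} unique hosts")
for host, count in hosts.most_common():
    print(f"  {count} requests -> {host}")
```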

Back to the Basics
Comparing the Universal and Paramount Super Bowl web site results highlights some of the key truths of web performance. Primarily, fewer connections to fewer hosts requesting fewer, smaller objects produce a smaller page, and a smaller page has a positive impact on response time. Placing these areas on high alert before going live - connections, object count and type, page size, and hosts - may be one of the best ways to ensure a successful Super Bowl any day of the year.

More Stories By Gregory Speckhart

Gregory Speckhart is a Senior APM Solutions Consultant at Compuware.
