
The FTC Is More Responsive than NASA

A naively relaunched site might go down on the first cron run in a flood of scheduled posts and emails

The United States Congress managed to avoid a default with a last-minute agreement. But the reboot's still in progress, and many federal government servers and services remain shut down. The cause of the online blackout is this set of guidelines released by the Office of Management and Budget, and in particular their answer to this question:

Q5: What if the cost of shutting down a website exceeds the cost of maintaining services?
A5: The determination of which services continue during an appropriations lapse is not affected by whether the costs of shutdown exceed the costs of maintaining services.

This might seem ridiculous at first glance, but anyone who builds websites shouldn't be surprised that OMB included this directive. Keeping a site running isn't just a matter of paying hosting bills, and even the most well-crafted architectures never stop needing a hand at the wheel (especially where security is concerned). This is an unusual time, and it comes with unusual traffic patterns: the role of government is being questioned, and nothing brings in pageviews like national political scrutiny. Having a .gov domain is a major liability when you don't have staff waiting to perform disaster recovery.

So things look bad now - sites down, no timeline for return. The shutdown guidelines didn't instruct agencies to create plans for relaunching their web properties, but they'll need a plan in place if they want relaunch to go smoothly. A naively relaunched site might go down on the first cron run in a flood of scheduled posts and emails. 'Open data' government sites will get slammed by scrapers trying to make up for lost time. Exciting new bugs will pop up on data-driven government sites that never made plans for backfilling missing data or coping with null values. And of course, every site will have to deal with traffic from rubberneckers - as soon as the news breaks that a given agency's site is up, people who would otherwise never consider looking at it will scope it out.
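
That first failure mode is cheap to mitigate. Here's a minimal sketch in Python of throttling a cron-driven backlog after downtime; the job-list shape and the cap are assumptions, not any particular CMS's interface:

```python
import time

MAX_JOBS_PER_RUN = 50  # assumed cap; tune to what the relaunched site can absorb

def drain_backlog(jobs):
    """Process at most MAX_JOBS_PER_RUN queued jobs per cron run.

    'jobs' is any list of zero-argument callables (scheduled posts,
    notification emails, ...); the remainder waits for the next run.
    """
    batch, rest = jobs[:MAX_JOBS_PER_RUN], jobs[MAX_JOBS_PER_RUN:]
    for job in batch:
        job()
        time.sleep(0.1)  # brief pause to smooth out the burst
    return rest          # requeue the leftovers for the next cron run
```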

This has led to an interesting question around the office: which agencies are best prepared to turn the lights back on? We do synthetic monitoring, so we're prepared to figure it out.

AppView Web synthetic user experience data for a set of government websites affected by the shutdown.

The first step in setting up synthetic monitoring is to choose a source. I went with our web monitor in Ashburn, Virginia: its proximity to Washington, D.C. makes it a solid proxy for the congressional staffers who drive a disproportionate share of traffic to breaking stories.

Next up, I chose several monitoring targets based on stories from VentureBeat, the Washington Post, Computerworld, and Politico. Many government sites could've made the list, but I narrowed it down to five with a high impact on researchers, scientists, and the open government movement: FTC.gov, Census.gov, Data.gov, NASA.gov, and NIST.gov.

Finally, I defined each site's scripted transaction with our Selenium-based Firefox script recorder plugin. Since each site's layout is unique, I went with simple transactions across the board. That meant focusing on common user actions (browsing for recent news) instead of more complicated workflows (registering for accounts or logging in).
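
For the curious, a recorded transaction boils down to something like the following. This is a hand-written sketch rather than actual recorder output, and the URL and selectors are illustrative:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

# A simple "browse for recent news" transaction. Each site's recorded
# script keys on its own layout; these selectors are placeholders.
driver = webdriver.Firefox()
try:
    driver.get("https://www.ftc.gov/")                 # land on the homepage
    driver.find_element(By.LINK_TEXT, "News").click()  # follow the news link
    headlines = driver.find_elements(By.CSS_SELECTOR, "h2 a")
    print(f"Found {len(headlines)} headlines")         # trivial sanity check
finally:
    driver.quit()                                      # always release the browser
```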

Pretty straightforward, right? There's just one caveat to be aware of: I opted to record these scripts on the Internet Archive's cached, pre-shutdown versions of the sites. That means that they'll intentionally fail when they're run against a shutdown splash page. By setting it up like this, I'll only see a green light when everything is back to business as usual.
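
In practice, the pass/fail logic amounts to something like the sketch below; the marker selector is an assumption standing in for whatever element the pre-shutdown recording keyed on:

```python
from selenium.webdriver.common.by import By
from selenium.common.exceptions import NoSuchElementException

def site_is_back(driver, url, marker_css="#recent-news"):
    """Return True only when the pre-shutdown page structure is served again.

    marker_css is a placeholder for an element present in the Internet
    Archive copy; a shutdown splash page won't have it, so the check fails.
    """
    driver.get(url)
    try:
        driver.find_element(By.CSS_SELECTOR, marker_css)
        return True   # green light: business as usual
    except NoSuchElementException:
        return False  # still the splash page (or down entirely)
```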

And, well, it didn't take long to get there! My guess had been that NASA would be back first, but in the end, it was actually the FTC website that crossed the finish line less than two hours into the 17th. Someone, somewhere stayed up all night to press that button and turn the site back on again - and even though I wasn't watching, I know right when it happened and what impact it had on response time.

The FTC site has sub-one-second response time.
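
Catching that moment doesn't require anyone to watch a dashboard. A scheduled check like the sketch below, reusing the hypothetical site_is_back() helper from earlier, is enough to timestamp the flip; the five-minute interval is an assumption:

```python
import time
from datetime import datetime, timezone

def wait_for_relaunch(driver, url, interval_s=300):
    """Poll until the transaction goes green, then record when it happened."""
    while not site_is_back(driver, url):  # helper sketched above
        time.sleep(interval_s)            # check every five minutes
    return datetime.now(timezone.utc)     # the moment the green light came on
```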

On the other hand, it looks like Census.gov and Data.gov were scheduled to open for business at 9 AM this morning:

Census.gov saw a brief latency spike a few transactions after it first started serving pages.

Data.gov response time data.

It seems that being a tech-savvy agency doesn't have much impact on responsiveness, because NASA and NIST are still down as of this blog post:

NASA's site won't get past the homepage as of this writing.

NIST.gov response time data.

NIST's site may actually be running from some locations: our web monitor in Virginia can access part of it, even though I still see a no-pay wall. However, it took over a minute to load a largely static page.
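
If you want to sanity-check a number like that yourself, a single sample is easy to take. This rough sketch, using Python's requests library, measures only network and server time; a real browser spends additional time fetching assets and rendering:

```python
import time
import requests

# One crude sample of fetch time; synthetic monitoring averages many such
# samples from fixed locations instead of trusting a single data point.
start = time.perf_counter()
resp = requests.get("https://www.nist.gov/", timeout=120)
elapsed = time.perf_counter() - start
print(f"{resp.status_code} in {elapsed:.1f}s ({len(resp.content)} bytes)")
```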

All told, I think the FTC deserves some real credit for their user experience. It takes less than one second to get information on making a FOIA request, and they were ready for business almost eight hours before some other government sites. On the other hand, I expected a much better showing from NASA and NIST. I've already guessed wrong once, but what's your take on when they'll be back up and running?

More Stories By James Meickle

James started as a hobbyist web developer, even though his academic background is in social psychology and political science. Lately, his interests as a professional Drupal developer have migrated towards performance, security, and automation. His favorite language is Python, his favorite editor is Sublime, and his favorite game is Dwarf Fortress.
