
The Costs and Implications of EHR System Downtime on Physician Practices

One hour of software downtime can cost a practice almost $488.00 per physician

Executive Summary
Government funding incentives (the ARRA HITECH Act) to implement electronic health record (EHR) systems are driving most physicians toward the selection and implementation of EHR applications appropriate to their practice. However, even though the average practice takes more than 120 days to select its EHR solution, 87% of practices spend no time evaluating the service levels and uptime associated with these installations, instead leaving this important criterion in the hands of the software provider. Even when asked, some vendors avoid this growing need and offer no solution at all, leaving it as a point of exposure for the practice. Neglecting the amount of system downtime a practice might experience could cost the average 5-physician practice nearly $25,000 if the product is down just ten hours during the course of a year. Therefore, before selecting an EHR product, the practice should consider not only price, functionality, usability, support, and training, but also its exposure to the potential effects of system downtime, which bear on overall practice efficiency, staff and client satisfaction, and the ability to provide care.

Why Controlling Downtime Costs Is Important
As seen in the AC Group 2010 Healthcare Technology Survey, four of the top five healthcare applications deemed most important over the next four years relate to mission critical clinical applications. These applications include Electronic Health Records (EHR), patient portals, Clinical Information Systems, Clinical Data Repository, and Point-of-care clinical decision support.

Why are clinical applications so important now? A few years ago, a group of leading Fortune 500 companies and other large healthcare group purchasers worked with the federal Office of the National Coordinator (ONC) to establish three standards that healthcare organizations must meet to get group members’ business; number one on the list is implementing an EHR system. Since then, numerous organizations have pressured physicians to adopt EHR applications. These organizations estimate that clinical applications can improve the quality of patient health and can reduce serious prescribing errors by more than 50 percent. They believe that better healthcare monitoring and clinical reasoning can improve patient safety, which in turn translates into improved financial value - not just for employers, but also for providers, consumers, and payers of care.

The downtime issue escalates when a healthcare organization deploys applications that are used primarily by physicians. For example, at this year’s MGMA conference, a panel of physicians agreed that system speed and availability were critical in their decision to use an ambulatory EHR application. The panel agreed that, if the system was NOT available at least 99% of the time, they would not consider the application reliable enough to use in the future.

What this panel failed to consider or realize is that 99% uptime translates into an average of more than 87 hours of solution downtime annually. The cost associated with that amount of downtime is tens of thousands to hundreds of thousands of dollars, depending on practice size.
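The arithmetic behind that figure is straightforward. The sketch below (in Python) converts an uptime percentage into expected annual downtime, assuming a 24x7 year of 8,760 hours, which is the basis the article's 99% -> ~87 hours figure implies:

```python
# Convert an uptime percentage into expected annual downtime hours.
# Assumes round-the-clock operation: 24 hours x 365 days = 8,760 hours.

HOURS_PER_YEAR = 24 * 365  # 8,760

def annual_downtime_hours(uptime_pct: float) -> float:
    """Expected hours of downtime per year at a given uptime percentage."""
    return HOURS_PER_YEAR * (1 - uptime_pct / 100)

for pct in (96.0, 99.0, 99.9, 99.99, 99.999):
    print(f"{pct:>7}% uptime -> {annual_downtime_hours(pct):8.2f} h/yr")
```

At 99% uptime this yields 87.6 hours of downtime per year, matching the "more than 87 hours" figure above; at the 96% contractual baseline discussed later, it is over 350 hours.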

Cost of Downtime
To help understand the cost of downtime associated with Electronic Health Record (EHR) systems, AC Group conducted a study during 2010 designed to shed more light on this important issue. It was determined that, in the rush to adopt Electronic Health Record (EHR) applications, system availability requirements associated with the underlying IT systems are often overlooked. This can result in significant costs – both financial and operational – based on the probable downtime per year associated with different systems and configurations. As the practice becomes increasingly reliant on electronic records, the software application should have little-to-no downtime, as any downtime can adversely affect the care of a patient and increase operating costs.

To effectively implement critical applications, whether EHR or PMS, the clinical community must be assured that the application will be available and reliable when they need it. To accomplish this, physicians and administrators MUST ensure that their EHR vendor’s software and the hardware platform it runs on will operate at a committed level of uptime acceptable to the practice. Software providers may or may not recommend or provide a high-availability platform solution (either hardware or software) for their applications. But that does not mean practices and clinicians should not make this a requirement for the critical applications they depend on to run their practices and care for patients. Having said that, physicians should expect that the responsibility to require service levels and obtain availability SLAs will fall to them.

Choosing an EHR vendor who cannot provide an availability solution that meets today’s industry definition of high availability - less than one hour of unplanned downtime per year - can easily cost the practice more than $488 per hour of downtime for each physician in the practice. For the average server deployed in most practices to support an EHR application, the expected downtime averages 87 hours per year. Even traditional server cluster options, which promise high availability, statistically average over four hours of downtime per year, with complex operating requirements that add significant additional costs in equipment, maintenance, and administration. On a per-physician basis these costs, as well as the disruption to operations and patient care, can be staggering.

Healthcare executives’ concept of acceptable downtime, and conversely uptime, for critical applications has historically lagged virtually all other industries. A review of 37 vendor contracts indicated that vendors are only providing an uptime guarantee of 96%. What does 96% availability mean, and is that sufficient? Is 99.99% uptime even attainable at reasonable cost, and what cost is reasonable? What amount of downtime are you willing to accept? What does a vendor’s uptime guarantee really mean to a healthcare organization and patient care? The differences between uptime levels, in terms of financial impact and practice disruption, will amaze even the most experienced healthcare provider and healthcare IT executive alike.

Value of Uptime
Time is money. A practice committed to an EHR solution will suffer both financial and care-giving consequences if that system is unavailable to them. How much impact depends on these factors:

  • Level of technology in use by a physician’s practice: Practice Management (PM) only, PM and EHR, Clinical Outcomes and Decision Support
  • Size of the physician’s practice
  • The level of uptime (i.e. ability to use the system) delivered by the total solution

Obviously, the greater a practice’s reliance on technology, the greater the impact when the application goes down, in terms of the average cost of the outage itself. That is, a practice that still relies on manual data entry and practice management will not suffer the same pain as a full-blown EHR-based practice, because it relies less on technology to conduct its routine business. (Conversely, the paper-based practice does not realize the many benefits of a smooth-running EHR solution.) It is not only the time required to manually conduct business during the outage that contributes to cost; it is also the time required to bring the automated systems up to date post-recovery. We refer to this as the multiplier effect - the average cost per employee for a minute of downtime, plus the cost of the time needed to return to normal operation after system recovery.

The 2010 Downtime Study
AC Group conducted time/motion studies of practices of various sizes, in varying stages of EMR deployment, to determine the average cost per minute of downtime for small, mid-size, and large physician practices. The three-month study was completed on November 1, 2010 and was based on actual healthcare organization man-hours, salaries, and workload numbers by individual enterprise department. The study evaluated the amount of time each practice spends (1) collecting, (2) reporting, (3) organizing, and (4) disseminating information. Note: throughout this paper we use the overall term “information” to describe these four functional areas. Some of the detailed findings included:

  • Nursing spends 57.4% of its annual man-hours on automated “information”, (collecting, reporting, organizing, and disseminating information).
  • Non-nursing departments (registration, scheduling, billing, etc.) spend 87.4% of their man-hours on automated “information”.
  • The typical EHR-enabled practice spends 71.45% of all man-hours on automated “information”. This is compared to the average non-EHR enabled practice that only spends 28% of its annual man-hours on automated “information”, but spends an additional 48% of man-hours on manual “information”.

Using this information, the AC Group was able to identify and validate that for every minute an EHR application is down, the average physician practice spends 2.15 minutes performing the required tasks manually, plus the time required to update the computer systems once the system is back up and operating. Using the average practice’s actual financial, man-hour, and workload statistics, the AC Group determined that the average cost of downtime was $8.13 per minute per provider, which equates to a median across all practice sizes and specialties of almost $488 per hour.
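The study's headline numbers can be reproduced from the per-minute figure. In the sketch below, the $8.13/minute/provider rate is the study's own; the practice size and downtime hours are the illustrative values from the executive summary:

```python
# Reproduce the per-hour and per-practice downtime costs from the
# study's per-minute figure of $8.13 per minute per provider.

COST_PER_MIN_PER_PROVIDER = 8.13  # USD, from the AC Group 2010 study

def downtime_cost(providers: int, downtime_hours: float) -> float:
    """Total annual cost of downtime for a practice of a given size."""
    return COST_PER_MIN_PER_PROVIDER * 60 * downtime_hours * providers

# $8.13/min x 60 min gives the per-hour figure cited in the text.
print(round(COST_PER_MIN_PER_PROVIDER * 60, 2))  # ≈ 487.8

# The executive summary's example: a 5-physician practice down
# ten hours in a year comes to roughly $24,400 - "nearly $25,000".
print(round(downtime_cost(5, 10), 2))
```

This confirms the internal consistency of the figures: $8.13/minute is $487.80/hour per provider, and ten hours of downtime across five physicians is $24,390.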

Calculating the Cost of Downtime

Based on the findings from this study, compromising on software and/or hardware uptime assurance can be financially punishing in the long-run, not to mention operationally disruptive with increased susceptibility to data-entry errors during recovery. This discovery renders even more apparent the extreme importance of evaluating system uptime specifications when measuring and rating the performance of a vendor’s product.

Again, it is important to note that the overwhelming majority of EHR software vendors will not include uptime SLAs in their contracts without specifically being required to do so. When required by the healthcare organization, almost every vendor indicated that the cost of the system would increase from 5% to 20% for each 1% increase in uptime guarantee over and above the standard 96% uptime level. Considering the uptime-assurance solutions on the market today, there is little justification, if any, for such a 5-20% premium for ensuring that mission-critical applications remain available without exception. The best availability products will be industry-standard (e.g. Windows, Linux, x86 processors), require no special skills to operate and manage, will monitor, self-manage, and automatically remediate system issues, reduce the opportunity for human-induced system failures, and deliver an excellent return on investment when evaluated on a Total Cost of Ownership (TCO) basis.
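A rough break-even check puts that premium in perspective. The sketch below uses the study's ~$488/hour/provider downtime cost; the $50,000 system price and the 5%-per-point premium rate are hypothetical assumptions chosen for illustration, not figures from the study:

```python
# Compare the annual downtime cost avoided by a higher uptime guarantee
# against the vendor premium charged for it. The hourly cost comes from
# the study; system price and premium rate are illustrative assumptions.

HOURS_PER_YEAR = 8760
COST_PER_HOUR_PER_PROVIDER = 8.13 * 60  # ≈ $487.80, from the study

def avoided_downtime_cost(providers: int,
                          base_uptime_pct: float,
                          improved_uptime_pct: float) -> float:
    """Annual downtime cost avoided by moving between two uptime levels."""
    saved_hours = HOURS_PER_YEAR * (improved_uptime_pct - base_uptime_pct) / 100
    return saved_hours * COST_PER_HOUR_PER_PROVIDER * providers

# Moving a 5-physician practice from the 96% baseline to 99% avoids
# 262.8 hours/year of downtime - worth far more than even a 15%
# premium on a hypothetical $50,000 system.
savings = avoided_downtime_cost(5, 96.0, 99.0)
premium = 0.05 * 50_000 * 3  # assumed 5% per 1% of uptime, 3 points
print(round(savings), round(premium))
```

Under these assumptions the avoided downtime cost exceeds the premium by well over an order of magnitude, which is the article's point: the premium is hard to justify as a gatekeeping charge, but paying for genuine uptime assurance is easy to justify economically.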

The healthcare industry must continue to align its IT expenditures with business initiatives by adopting a comprehensive system for determining IT strategies, expenditures, and staffing requirements based on best practices. Healthcare organizations must drive enterprise-wide systems that sustain a constant innovation cycle in the new competitive environment. To accomplish this, they must learn to match their healthcare organization skills and requirements with their current business environment, or face extinction.

Although software vendors rarely provide uptime commitments, a physician’s practice should require written documentation that the proposed EHR application meets today’s generally accepted standards for high availability (i.e. less than one hour of unplanned downtime per year on average) in actual installations, or that its software is certified to run on high-availability products from other vendors with no performance impact.

The challenge ahead for healthcare organizations is enormous. We all know that operational and cost barriers exist when selecting a system; nevertheless, the potential benefits of clinical applications and EHR, in terms of patient safety and quality of care, operational efficiency and cost reduction, competitive advantage, and market share gain, are tremendous. We believe that every healthcare organization is in a great position to help enhance the recording, reporting, and dissemination of clinical data. To accomplish this, the healthcare organization MUST ensure that the vendor can guarantee adequate system uptime, especially with EHR applications. Those who succeed will receive industry-wide recognition and, ultimately, rewards for their organizations. Key to evaluating clinical systems is the recognition that downtime costs are an important factor for every healthcare organization. Healthcare organizations must ensure that downtime is minimized through close system management and by working closely with each technology provider.

Availability Options
Regardless of whether the physicians’ office maintains its own IT staff, has consultants come onsite to manage IT operations, or relies entirely on a third party to oversee its EHR applications offsite, system downtime levies the same cost on the practice. Any choice made to provide uptime involves some level of complexity in operations and ongoing management, and requires a base level of professional skill to operate reliably. It is vitally important, as we have demonstrated, that those who make the purchasing decisions and/or take responsibility for IT system health know the level of uptime they should expect from offerings provided by their software provider, value-added reseller, or managed service provider.

Following is a description of common platform, software and hardware offerings deployed in conjunction with EMR applications. Each has its own characteristics and ability to provide uptime assurance.

Robust standalone server: The current generation of x86 servers includes features like redundant fans and power supplies, hot-plug PCI cards, and mirrored memory, offering improved reliability over unadorned commodity servers. They can be expected to run at 99.0% uptime, which corresponds to an average downtime of more than 7 hours per month. The issue with standalone servers is not so much the reliability of the individual server, but rather the time it takes to effect repair or replacement and return EMR applications to full production.

Cold standby: Keeping a second server on hand to provide back-up is an option, but an undesirable one for mission-critical applications such as EMR. Connecting a replacement server to a shared disk array, or moving disks from the primary server to the back-up, generally requires a skilled administrator. It also scarcely improves protection against downtime, although it could provide some benefit in getting back online a bit more quickly (assuming the back-up server works when called upon).

Data replication: This off-the-shelf software option replicates data files synchronously or asynchronously from one or more servers to a target server. Should a source server fail, the target server takes over either automatically or through manual intervention. Depending on the product chosen and overall system configuration complexity, this option may push uptime reliability to as high as 99.9%, or three-quarters of an hour of downtime per month.

High availability clusters: Two or more ordinary servers connected by software into a single network are the basis of an HA cluster. The cluster configuration is difficult to build, operate, and manage. Although Microsoft has made recent improvements to Windows Cluster Service, this approach to availability remains fundamentally complex. Clustering is a failure-recovery technology: when one node in a cluster fails, the application fails over to a surviving node - not as easy or as quick as it sounds. Invariably, there will be a failover delay, and data that had not yet been committed to disk (in-flight data) will be lost. This custom-built option can achieve 99.9% uptime and, if meticulously designed, configured, administered, and maintained by skilled staff, may achieve 99.95%. Clusters are frequently cited as offering the highest availability, but only because that is the best uptime major server vendors can offer.

Virtualization software: Many view virtualization software as an availability solution unto itself. It is not. It does have availability attributes, but they come at a cost in licensing, additional equipment, stringent configuration requirements, and management tools, not to mention the complexity of, and skills required to use, virtualization software in the first place. Availability will be no better than a cluster, but virtualization does provide other benefits, such as reducing the number of physical servers needed to support a greater number of applications, along with all the cost savings associated with reducing server count.

High availability software: Software products have come to market in the past few years that provide better availability on commodity servers than clusters, are significantly less expensive, and are simpler to use and maintain. The software essentially creates a high-availability computing platform upon which to run EMR applications. The best of these products proactively manage and monitor their own operation, prevent downtime and data loss from occurring (unlike clusters), support multiple operating systems, and have virtualization software built in. Regardless of whether the physician installs the software or the value-added reseller delivers it as part of a total solution, installation is easily accomplished and produces a high-availability computing platform exceeding 99.99% uptime, i.e. less than five minutes of downtime per month on average.

Continuous availability servers: Designed specifically to prevent downtime and data loss from occurring in the first place, “fault-tolerant” servers include complete component redundancy and error-detection circuitry. Automatic fault detection and correction is engineered into the design so that most errors are resolved without the user or the application being impacted at all. You can expect better than 99.999% uptime from this platform, or just seconds of downtime per month. While the cost may be higher at the outset, the uptime performance and operational simplicity of fault-tolerant servers make them very cost-effective from the perspective of total cost of ownership (i.e. advantages in software licensing, maintenance, staffing, software patching and updating, etc.).
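The options above can be summarized by translating each quoted uptime level into expected downtime per month. The sketch below assumes an average month of 730 hours (8,760 hours / 12) and uses the uptime figures stated in this section:

```python
# Translate each availability option's quoted uptime percentage into
# expected downtime per month, assuming an average 730-hour month.

HOURS_PER_MONTH = 8760 / 12  # 730

TIERS = {
    "standalone server":      99.0,
    "data replication":       99.9,
    "HA cluster (well-run)":  99.95,
    "HA software":            99.99,
    "fault-tolerant server":  99.999,
}

def monthly_downtime_minutes(uptime_pct: float) -> float:
    """Expected minutes of downtime per month at a given uptime level."""
    return HOURS_PER_MONTH * 60 * (1 - uptime_pct / 100)

for name, pct in TIERS.items():
    print(f"{name:24s} {pct:>7}% -> {monthly_downtime_minutes(pct):8.2f} min/month")
```

The results line up with the figures in the text: 99.0% is about 7.3 hours per month, 99.9% is roughly three-quarters of an hour, 99.99% is under five minutes, and 99.999% is about half a minute.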

More Stories By Mark Anderson

Mark Anderson, CEO of AC Group, Inc., is one of the nation's premier IT research futurists dedicated to health care. He is one of the leading national speakers on health care and physician practices and has spoken at more than 850 conferences and meetings since 2000. He has spent the last 37+ years focusing on health care – not just technology questions, but strategic, policy, and organizational considerations.

For the past eight years, Mr. Anderson has spent the majority of time in the evaluation, selection, and ranking of vendors in the PM/EHR health care marketplace and during those years has published a semi-annual report on the Digital Medical Office of the Future. His EHR evaluation decision tool has been used by more than 25,000 physicians since 2002.
