From Scala Unified Logging to Full System Observability, Part 3 of 3: Metrics and the Bigger Picture

Jonathan is a platform engineer at VictorOps, responsible for system scalability and performance. This is Part 3 in a series on system visibility, the Detection and Analysis phase of the Incident Management Lifecycle. If you missed them, read Part 1 and Part 2 first.

OK, logging improvements: great for our team and codebase (!!), but there’s a bigger world out there for monitoring and instrumentation. We’re talking about observability here. It’s a world where we not only have the necessary information for troubleshooting and debugging, but also a detailed understanding of how the running system is performing, with alerts for when things go wrong.

Enter metrics. Metrics help form a well-rounded approach to the monitoring and instrumentation of your systems. They focus on quantitative measurements of success, responsiveness, demand, saturation, etc. If log statements are for debugging your running system, then metrics are for picturing how the system is running and indicating when the system needs attention. Metrics are the system’s heartbeat.
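To make this concrete, here’s a rough sketch of what that kind of instrumentation can look like on the JVM, using the Dropwizard Metrics library as one common option (not necessarily what we run in production). The request handling here is a hypothetical stand-in:

```scala
import com.codahale.metrics.MetricRegistry

object RequestMetrics {
  private val registry = new MetricRegistry()

  private val requests = registry.meter("http.requests")  // demand
  private val errors   = registry.counter("http.errors")  // success vs. failure
  private val latency  = registry.timer("http.latency")   // responsiveness

  def handle(request: String): String = {
    requests.mark()                      // one more unit of demand
    val ctx = latency.time()             // start the latency clock
    try serve(request)
    catch { case e: Exception => errors.inc(); throw e }
    finally ctx.stop()                   // record elapsed time
  }

  // Hypothetical handler standing in for real business logic.
  private def serve(request: String): String = s"ok: $request"
}
```

Three small measurements, and the system already reports its own demand, error rate, and latency without anyone reading a log line.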

An analogy

When I think of instrumentation, I think of a person (the “system”) with advanced hypertension who needs angioplasty to open several clogged blood vessels. After a stressful, harrowing stay in critical care, the team performs the surgery. Fortunately it’s successful, but the patient is weak and has a few weeks of recovery ahead.

But what if preventative health visits and monitoring (system metrics) had detected his high blood pressure earlier? And what if those earlier results were accompanied by a new diet, exercise, and medication plan? Maybe this patient wouldn’t have needed risky, reactive surgery at all.

Obviously this isn’t an exact parallel, but it’s a useful analogy for thinking about the visibility we have into our own running systems. Do we know the request latency and error rate? The demand for and saturation of given components?

The case for metrics

Now that we’re completing the big picture of monitoring and instrumentation using multiple methods, let’s examine how metrics complement a good log portfolio.

Logs are strings. Strings need to be interpreted. Interpretation typically requires humans. Humans are slow to respond.

Metrics are numbers. Numbers are immediately actionable. Actions can even be taken automatically.

Metrics have metadata. Metadata describes multiple dimensions. Dimensionality exposes additional trends, outliers, and correlations.
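To make that last point concrete, here’s a toy, in-memory sketch of a dimensional counter, invented purely for illustration (it’s not a real metrics client, and not how we store metrics). Because each increment carries metadata tags, the same metric can later be sliced along any dimension:

```scala
import scala.collection.concurrent.TrieMap

// Toy dimensional counter: every increment carries metadata tags.
final class TaggedCounter(val name: String) {
  private val counts = TrieMap.empty[Map[String, String], Long]

  def increment(tags: Map[String, String]): Unit =
    counts.updateWith(tags)(c => Some(c.getOrElse(0L) + 1))

  // Sum every series whose tags match the given dimension.
  def total(dimension: (String, String)): Long =
    counts.collect { case (tags, n) if tags.get(dimension._1).contains(dimension._2) => n }.sum
}

object TaggedCounterDemo extends App {
  val httpErrors = new TaggedCounter("http.errors")
  httpErrors.increment(Map("endpoint" -> "/alerts", "status" -> "500"))
  httpErrors.increment(Map("endpoint" -> "/alerts", "status" -> "503"))
  httpErrors.increment(Map("endpoint" -> "/login",  "status" -> "500"))

  println(httpErrors.total("endpoint" -> "/alerts")) // 2: all errors on one endpoint
  println(httpErrors.total("status" -> "500"))       // 2: one failure mode, all endpoints
}
```

The same three increments answer two different questions, which is exactly the trend, outlier, and correlation value that flat, untagged counters can’t provide.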

How does VictorOps approach metrics?

We are still iterating on our metrics infrastructure and portfolio, and through that process we are identifying more KPIs (Key Performance Indicators) for our systems. For quite a while we’ve employed black-box style metrics, a.k.a. “show me what the customer is experiencing.”

Now we’re trying to answer deeper questions, like: “Now that I know the error rate is too high, which Service Level Indicators (SLIs) would help paint a more complete picture of my problem?” Our next step is to explore the white-box SLIs that answer these types of internal system questions. Overall, we want a mixture of black-box and white-box metrics, where the former alert on customer-experience issues and the latter point us to a subset of the system for troubleshooting and auto-remediation efforts.
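As a sketch of that mixture (every name and number below is invented for illustration, not one of our actual indicators), a black-box SLI can fire the alert while white-box SLIs narrow the search:

```scala
// Illustrative SLI check: black-box fires the alert, white-box explains it.
final case class Sli(name: String, value: Double, threshold: Double) {
  def breached: Boolean = value > threshold
}

object SliCheckDemo extends App {
  // Black-box: measured from the customer's point of view.
  val errorRate = Sli("http.error_rate", value = 0.034, threshold = 0.01)

  // White-box: internal indicators that suggest *why* errors are elevated.
  val internals = List(
    Sli("db.connection_pool.saturation", 0.97, 0.80),
    Sli("queue.depth.alert_ingest", 120.0, 500.0),
    Sli("jvm.gc.pause_p99_seconds", 0.2, 1.0)
  )

  if (errorRate.breached) {
    println(s"ALERT: ${errorRate.name} = ${errorRate.value}")
    internals.filter(_.breached)
      .foreach(s => println(s"  suspect: ${s.name} = ${s.value}"))
  }
}
```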

How do we use metrics?

The goal is to auto-remediate issues that can be safely addressed without human involvement, and to alert on everything else. So, when instrumentation points to a known problem, that problem is auto-remediated by some script or application and people don’t need to get involved in resolving it. This gives us response times well within seconds; no human could both respond to and remediate the issue in a comparable timespan. For the remaining scenarios, where human intervention is truly necessary, we want to alert on the problem as early as possible and, ideally, even before it becomes a problem.
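As an illustration of that split (not our actual tooling; the problem keys and fixes below are made up), the decision reduces to a lookup: known problems get a scripted fix, unknown ones page a human:

```scala
// Illustrative remediation loop: known-safe problems get a scripted fix,
// everything else pages the on-call human.
object Remediator {
  sealed trait Outcome
  case object AutoRemediated extends Outcome
  case object PagedHuman extends Outcome

  // Playbook of known problems and their safe, scripted fixes (made-up names).
  private val playbook: Map[String, () => Unit] = Map(
    "disk.log_partition_full" -> (() => println("rotating and compressing old logs")),
    "worker.heartbeat_missed" -> (() => println("restarting stalled worker"))
  )

  def handle(problem: String): Outcome =
    playbook.get(problem) match {
      case Some(fix) => fix(); AutoRemediated                  // seconds, no human needed
      case None => println(s"paging on-call for $problem"); PagedHuman
    }
}

object RemediatorDemo extends App {
  Remediator.handle("disk.log_partition_full") // auto-remediated
  Remediator.handle("latency.p99.regression")  // alerts a human
}
```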

Where do dashboards fit?

Alerts and auto-remediation are great, but it’s also helpful to have a visualization of your running system. Well-built, intuitive dashboards are great tools for a first responder trying to troubleshoot and pinpoint a problem by scanning for anomalies. Specifically, we’re talking about dashboards that make anomalies easy to identify without a full set of tribal knowledge or a PhD. So, please, be careful what you do with all of that valuable insight into your system. Not every metric needs an associated alert (or you’ll soon end up with alert fatigue from too much noise). Aim for intelligent alert thresholds and include them in your dashboards.

Be careful, however: dashboards require a human’s attention in order to be useful, just like logs (at least those without alerts). It’s both a waste of your time and highly error-prone to assume that each person or shift staring at a dashboard will know how to accurately interpret each panel and every time series displayed. This means that care must be taken to not only measure your metrics but also to clearly visualize and communicate their value, via both dashboards and alerts.

Don’t just measure ALL the things

Ideally, we want to build out preventative health care that isn’t just packed full of instrumentation, but is full of useful indicators of system health: the KPIs. If a metric isn’t used by the team through alerts or dashboards, it’s likely not useful, and it will probably become something you have to clean up later. These metrics should be removed, and we should be cognizant (but not fearful) of new metrics falling into the same bucket.

Craving more?

Want to kick it up a few notches? Look into systems that can perform anomaly detection for your teams. This allows you to focus on more complex problems, build auto-remediation, fix newly identified issues, etc.

Want to move into uncharted territory? Give sonification a try, or at least check out this cool podcast from Science Friday: Listening in on scientific data. “When it comes to analyzing scientific data, there are the old standbys, plots and graphs. But what if instead of poring over visuals, scientists could listen to their data—and make new discoveries with their ears?”

Although we won’t dive into distributed tracing and eventing, it’s worth mentioning how they add additional viewports into your system. Tracing breaks down the request path through your system, describing both the path and the timing at each stage. Eventing allows you to perform offline analysis of events that occurred in your system, with full detail; it’s similar to logging, but intended for computational rather than human processing.
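For a flavor of what tracing records, here’s a minimal, hypothetical span model (real tracers propagate context across services, use span ids rather than names, and nest spans automatically):

```scala
// Minimal span model: a named stage with timing and a parent link, enough to
// reconstruct the request path and the latency of each stage afterwards.
final case class Span(traceId: String, name: String, parent: Option[String],
                      startNanos: Long, endNanos: Long) {
  def durationMillis: Double = (endNanos - startNanos) / 1e6
}

object TraceDemo extends App {
  // Run a block of code and record a span around it.
  def traced[A](traceId: String, name: String, parent: Option[String])(body: => A): (A, Span) = {
    val start = System.nanoTime()
    val result = body
    (result, Span(traceId, name, parent, start, System.nanoTime()))
  }

  val (_, root)  = traced("t-1", "handle-alert", None) { Thread.sleep(5) }
  val (_, child) = traced("t-1", "route-notification", Some(root.name)) { Thread.sleep(2) }

  List(root, child).foreach(s => println(f"${s.name}%-20s ${s.durationMillis}%6.1f ms"))
}
```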

Wrap it up!

I’ve covered how logs and metrics play huge parts in adding observability to your systems, with tracing and eventing also fitting into the big picture. Logs provide detailed diagnostic information and sometimes a clear enough signal that you can alert off them. Metrics provide quantitative insights into the running system and are also an ideal source of alerts. Remember: don’t take on the hard work of adding observability, logs, dashboards, alerts, and anomaly detection to your platform without making these things truly useful and actionable.

Finally, don’t forget to pay attention to those annoying and often below-the-radar gaps in your observability portfolio, like the logging issue we had in our backend systems at VictorOps. You might find that alleviating these annoyances leads to happier, more enthusiastic teams with a renewed stake in building better systems.


