
From Scala Unified Logging to Full System Observability, Part 1 of 3: Our Original State of Logging

Jonathan is a platform engineer at VictorOps, responsible for system scalability and performance. This is Part 1 in a 3-part series on system visibility, the detection part of incident management.

These days, with infrastructures spanning tens, hundreds, even thousands of running instances, piping a log file into less is no longer an acceptable means of log research and debugging. Instead, it’s commonplace to send logs to an aggregation service such as Sumo, Elastic, or Splunk, because searchability is king.

Unfortunately, the pursuit of searchability can lead to undesirable side effects: unreadable, inconsistent, and just plain ugly log statements. It invades our codebases with even more custom formatting (on top of string interpolation and the like) that’s not only distracting but also hard to do well with anything less than superhuman string-formatting skills. In short, the side effects on log statements can be detrimental.

Logging is just the starting point

First off, let’s bring a little context to this pursuit. At VictorOps, we use logging as a research and debugging tool. However, logging isn’t and shouldn’t be the primary heartbeat of your systems. That’s where metrics come into the picture, which we’ll discuss in Part 3 of this series. Before going there, let’s talk about where we started with logging at VictorOps. We needed some major improvements in this first line of troubleshooting for when things go wrong.

As Dave Hahn, a senior SRE from Netflix, recently shared with us, “Be willing to have a problem before you solve it.” In line with that advice, we recently identified multiple problems to solve relating to the research and debugging done through our logs. To top it all off, when I noticed that our logging interfaces were not unified, it became clear that it was time to make both our logging interface and log output great again.

I hope that our experience at VictorOps will give you ideas on how to improve logging at your organization.

The current state: how we use logs at VictorOps

Sumo Logic is our logging platform and we use it heavily throughout the development lifecycle.

There are four primary ways we use logs:

  1. To get visibility into what’s going on during releases. Through logs, we can see if errors persist after a release. If so, there is probably a hole in our alerting – some problem that we aren’t yet monitoring.
  2. To create VictorOps incidents for relevant alerts. When we know that a particular log statement indicates a problem where someone needs to get involved, we hook a scheduled Sumo search up to the VictorOps platform to create an incident from it. The goal for most of these alerts is to migrate them to a metric-based alert instead of basing them on a log statement – more on that in our metrics discussion in Part 3 of this series.
  3. To see how something is working in production. We might want to see how some new feature is working in production, so we’ll review the log statements. The production environment is always the most valuable for feedback because that’s where real customers have real accounts, alerts, users, and escalation policies. It may look fine in staging, but if there is a use case we didn’t test for that shows up in production, you can see the details in a log.
  4. To investigate high-dimensionality information. Organization, user, and API key (and for that matter, any sort of UUID) are all great examples of metadata that typically won’t be available in a metric and thus logging (or eventing) is where we’ll find that data.

We had three main players in our logging

We have a Scala backend that used three different logging frameworks. Some code used the SLF4J logging framework, which is widely used and provides a rich feature set. Other code, inside Akka actors, used Akka’s actor logging, which has a scaled-down interface and feature set and is configured to use SLF4J. Some of our Play code used Play’s own logging, which is extremely simplistic and is also configured to use SLF4J. All of these were backed by Logback, SLF4J’s native implementation. Here are some details:

SLF4J

SLF4J is likely the most widely used Java logging facade, with multiple implementations and a massive user base. Performance depends entirely on how you configure the appender you use. By default, Logback uses a synchronous appender, but you can easily configure an asynchronous one. A synchronous appender uses the calling thread to actually write the log statement to file or network, whereas an asynchronous appender lightens the load on the calling thread by simply handing the log statement over to be written to file or network at some point in the future.
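For illustration, here’s a minimal logback.xml along these lines (a sketch, not our actual configuration) that wraps a synchronous file appender in Logback’s AsyncAppender:

    <configuration>
      <!-- Synchronous appender: the thread that logs does the file I/O -->
      <appender name="FILE" class="ch.qos.logback.core.FileAppender">
        <file>app.log</file>
        <encoder>
          <pattern>%d{ISO8601} %-5level %logger{36} - %msg%n</pattern>
        </encoder>
      </appender>

      <!-- Async wrapper: the calling thread only enqueues the event;
           a background worker drains the queue into FILE -->
      <appender name="ASYNC" class="ch.qos.logback.classic.AsyncAppender">
        <queueSize>512</queueSize>
        <appender-ref ref="FILE"/>
      </appender>

      <root level="INFO">
        <appender-ref ref="ASYNC"/>
      </root>
    </configuration>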

Akka logging

Akka’s actor-based logging is event driven and is easily configured to use SLF4J. In the actor itself, you call log.info("this message"), and behind the scenes it publishes an event to the system’s event stream and is done. Creating the log statement adds almost no overhead at the call site because the actual writing happens somewhere else.
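In code, that looks roughly like this (a minimal sketch using Akka’s ActorLogging trait; the actor itself is illustrative):

    import akka.actor.{Actor, ActorLogging}

    class Worker extends Actor with ActorLogging {
      def receive = {
        case msg =>
          // log.info publishes a LogEvent to the actor system's event stream;
          // with the SLF4J backend configured, a dedicated logger actor
          // writes it out, so this call returns almost immediately.
          log.info("received message: {}", msg)
      }
    }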

Play logging

Play has its own simplistic logger that’s much more stripped down than the Akka logger and by default uses SLF4J. Play’s logging methods accept up to two arguments: the string you’re logging and an optional exception. The most recent version (2.6.x) adds support for SLF4J markers.
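Typical usage looks like this (a small sketch; the controller and method are hypothetical):

    import play.api.Logger

    class BillingController {
      private val logger = Logger(getClass)

      def charge(): Unit =
        try {
          // ... perform the charge ...
        } catch {
          case e: Exception =>
            // Play's two arguments: a by-name message plus an optional Throwable
            logger.error("charge failed", e)
        }
    }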

Why change how we do logging?

Strategy concerns

These concerns have to do with the various strategies taken by these different logging interfaces.

  • Call-site performance: All SLF4J interfaces rely on the caller to provide pre-computed strings and arguments before checking whether that log level (info, debug, trace, etc.) is enabled. There are simple ways around this, like Play’s interface, which uses a by-name argument for the string. This essentially creates an anonymous function that is executed only after the log level has been checked (see the sketch after this list). For example, without by-name arguments, the statement below requires the mkString method to run on a potentially large collection before the info method checks whether info-level log statements are enabled: log.info(s"Team $team has users: ${users.mkString(separator)}")
  • Conflicting interfaces: The biggest effect of conflicting interfaces is developer confusion and frustration. The next problem is that they lead to incorrect log statements. If logs are to save you when things go awry, then an incorrect log statement is like a carabiner with a broken arm: it looks like a useful thing but is completely useless for the intended user. For example, below are the error methods from these three interfaces. Notice how the location of the Throwable argument changes? Now, imagine working in a codebase where all three of these interfaces are in use. A little scary.
    • SLF4J: void error(String msg, Throwable t)
    • Akka: def error(cause: Throwable, message: String): Unit
    • Play: def error(message: ⇒ String, error: ⇒ Throwable)
  • Appender performance: All three of these have configurable backends and appenders, but any interface you use will need its configuration examined. Most default appenders are synchronous and therefore write the log statement to file (or whatever destination) at the call site. This is easily changed by configuring an asynchronous appender, which clearly improves call-site performance: only the string needs to be built before it is handed off to the appender, which writes the statement to file in its own time.
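To make the call-site point concrete, here’s a minimal by-name wrapper (a sketch, not our actual interface) in which the message is built only after the level check passes:

    import org.slf4j.{Logger, LoggerFactory}

    class LazyLogger(underlying: Logger) {
      // msg is by-name: the interpolation (and any mkString over a large
      // collection) runs only if the info level is actually enabled.
      def info(msg: => String): Unit =
        if (underlying.isInfoEnabled) underlying.info(msg)
    }

    object Example {
      private val log = new LazyLogger(LoggerFactory.getLogger("example"))

      def report(team: String, users: Seq[String], separator: String): Unit =
        log.info(s"Team $team has users: ${users.mkString(separator)}")
    }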

Developer concerns

How did using multiple logging libraries affect the developers?

  • Too many decisions: Choosing between three different loggers for any given class.
  • Conflicting interfaces: From a developer perspective, this causes confusion and requires you to pay more attention to your logger than you really should.
  • Inconsistency: Having more than one logger in a single class, which is clearly unnecessary, and naming logger fields inconsistently, e.g. log in one class and logger in another.

Functionality needs

What functionality do the developers need for a maintainable codebase and effective log portfolio?

  • Unified interface: A single interface lets you add new features in one place and makes large-scale refactoring of logging practical.
  • Support for log variables: Extracting specific information from a log statement is easier if it’s been given special formatting. Once standardized, this can be utilized in our Sumo queries.
  • Implicit loggers for utility classes: Utility classes lack their own identity in terms of data flow. Implicitly passing in a logger, which has identifying information from the caller (its class and log variables), provides rich log statements within utility code.
  • Further consistency: This is icing on the cake. Think of a very simple Logging trait that standardizes the log field name, the logger name (used when writing the log statements), and the logger identity (based on log variables). A sketch of these ideas follows this list.
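To give a flavor of how these requirements might fit together, here’s a hypothetical sketch (the names are illustrative; Part 2 covers what we actually built):

    import org.slf4j.LoggerFactory

    // A unified logger carrying "log variables" (key=value pairs) that are
    // appended to every statement in a standard, searchable format.
    final class VarLogger(name: String, vars: Map[String, String]) {
      private val underlying = LoggerFactory.getLogger(name)

      private def fmt(msg: String): String =
        if (vars.isEmpty) msg
        else msg + vars.map { case (k, v) => s"$k=$v" }.mkString(" [", " ", "]")

      def info(msg: => String): Unit =
        if (underlying.isInfoEnabled) underlying.info(fmt(msg))

      def withVars(more: (String, String)*): VarLogger =
        new VarLogger(name, vars ++ more)
    }

    // A very simple Logging trait standardizing the log field name, the
    // logger name, and the logger identity.
    trait Logging {
      protected implicit val log: VarLogger =
        new VarLogger(getClass.getName, Map.empty)
    }

    // A utility object has no identity of its own, so it takes the caller's
    // logger implicitly and inherits its class name and log variables.
    object RetryUtil {
      def retry[A](attempts: Int)(f: => A)(implicit log: VarLogger): A =
        try f
        catch {
          case e: Exception if attempts > 1 =>
            log.info(s"retrying after: ${e.getMessage}")
            retry(attempts - 1)(f)
        }
    }

A service would mix in Logging, perhaps add context with log.withVars("org" -> orgId), and any RetryUtil.retry call inside it would then log with the service’s identity attached.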

Up next

Now that we’ve set the stage, in Part 2, we’ll explore how we addressed these concerns in order to make logging great again.


