Detecting Anomalies that Matter!

Like needles in a haystack

As Netuitive's Chief Data Scientist, I am fortunate to work closely with some of the world's largest banks, telcos, and eCommerce companies. Increasingly, the executives I speak with at these companies are no longer focused on just detecting application performance anomalies - they want to understand the impact those anomalies have on the business. For example: "Is the current slowdown in the payment service impacting sales?"

You can think of it as detecting IT operations anomalies that really matter - but this is easier said than done.

Like Needles in a Haystack
When it comes to IT analytics, there is a general notion that the more monitoring data you are able to consume, analyze, and correlate, the more accurate your results will be. Just pile all that infrastructure, application performance, and business metric data together and good things are bound to happen, right?

Larger organizations typically have access to voluminous data generated by dozens of monitoring tools tracking thousands of infrastructure and application components. At the same time, these companies often track hundreds of business metrics using an entirely different set of tools.

The problem is that, collectively, these monitoring tools do not communicate with each other. Not only is it hard to get holistic visibility into the performance and health of a particular business service; it is even harder to discover complex anomalies that have business impact.

Anomalies are Like Snowflakes
Compounding the challenge is the fact that no two anomalies are alike. Anomalies that matter have multiple facets. They reflect the composite behavior of many layers of interacting and interdependent components. Additionally, they can be cleverly disguised or hidden in a haze of visible but insignificant noise. No matter how many graphs and charts you display on the largest LCD monitor you can find, the kind of scalable, real-time analysis required to find and expose what matters is beyond human capability.

Enter IT Operations Analytics
Analytics such as statistical machine learning allow us to understand the "normal" behavior of each resource we are tracking - be it a single IT component, web service, application, or business process. Additional algorithms help us find patterns and correlations between the thousands of IT and business metrics that matter in a critical service.
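
To make this concrete, here is a minimal sketch of statistical baselining - not Netuitive's actual algorithms, and with metric names, window sizes, and thresholds chosen purely for illustration. Each metric learns its own "normal" range from recent history, observations that fall far outside that range are flagged, and correlating flags across an IT metric and a business metric is what turns a blip into an anomaly that matters.

import statistics
from collections import deque

class BaselineDetector:
    """Learn a rolling 'normal' range for one metric and flag large deviations."""

    def __init__(self, window: int = 288, threshold: float = 3.0):
        self.history = deque(maxlen=window)   # e.g. one day of 5-minute samples
        self.threshold = threshold            # allowed deviation, in standard deviations

    def observe(self, value: float) -> bool:
        """Record a new sample; return True if it deviates from learned behavior."""
        anomalous = False
        if len(self.history) >= 30:           # wait for enough history to baseline
            mean = statistics.fmean(self.history)
            stdev = statistics.pstdev(self.history) or 1e-9
            anomalous = abs(value - mean) / stdev > self.threshold
        self.history.append(value)
        return anomalous

# Warm up each baseline with historical samples, then evaluate live points.
latency = BaselineDetector()
orders = BaselineDetector()
for i in range(60):
    latency.observe(120 + i % 5)     # payment-service latency in ms (illustrative)
    orders.observe(840 - i % 7)      # completed orders per minute (illustrative)

if latency.observe(900) and orders.observe(310):
    print("Slowdown coincides with a drop in orders - an anomaly that matters")

Production systems obviously use far richer models than a rolling z-score, but the principle is the same: behavior is learned per resource rather than imposed as a static threshold.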

The Shift Towards IT Operations Analytics is Already Happening
This is not about the future.  It's about what companies are doing today.

Several years ago, thought-leading enterprises (primarily large banks with critical revenue-driving services) began experimenting with a new breed of IT analytics platform. These companies' electronic and web-facing businesses had so much revenue (and reputation) at stake that they needed to find the anomalies that matter - the ones that were truly indicative of current or impending problems.

Starting with an almost "blank slate", these forward-thinking companies began developing open IT analytics platforms that easily integrated any type of data source in real time to provide a comprehensive view of patterns and relationships between IT infrastructure and business service performance. This was only possible with technologies that leveraged sophisticated data integration, knowledge modeling, and analytics to discover and capture the unique behavior of complex business services.  Anything less would fail, because, like snowflakes, no two anomalies are alike.

The Continuous Need for Algorithm Research
The online banking system at one bank is different from the online system at the next bank. And the transaction slowdown that occurred last week may have a totally different root cause than the one two months ago. Even more interesting are external factors such as seasonality and its effects on demand. For example, payment companies see increased workload around holidays such as Thanksgiving and Mother's Day, whereas gaming/betting companies' demand is driven more by events such as the NFL Playoffs or the World Series.
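
As a hedged illustration of how seasonality can be folded into baselining (again, a sketch rather than Netuitive's production approach), one simple option is to compare each observation against history from the same hour of the same weekday, so that an expected Thanksgiving or game-day surge is not flagged merely for being busier than an ordinary Tuesday afternoon:

import statistics
from collections import defaultdict
from datetime import datetime

class SeasonalBaseline:
    """Keep a separate history bucket per (weekday, hour) slot: 7 x 24 = 168 slots."""

    def __init__(self, threshold: float = 3.0):
        self.buckets = defaultdict(list)
        self.threshold = threshold

    def observe(self, ts: datetime, value: float) -> bool:
        """Compare the value only against prior observations from the same weekly slot."""
        history = self.buckets[(ts.weekday(), ts.hour)]
        anomalous = False
        if len(history) >= 8:                 # e.g. at least eight prior weeks for this slot
            mean = statistics.fmean(history)
            stdev = statistics.pstdev(history) or 1e-9
            anomalous = abs(value - mean) / stdev > self.threshold
        history.append(value)
        return anomalous

Real workloads also carry yearly effects such as holidays and one-off events, which is exactly why the algorithm portfolio has to keep evolving rather than being fixed once.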

For this reason, analytics research is an ongoing endeavor at Netuitive, driven in part by customer needs and in part by advances in technology. Once Netuitive technology is installed in an enterprise and is integrating data collected across multiple layers of the service stack, behavior learning begins immediately. As time passes, the statistical algorithms have more observations to draw on, which leads to increasing confidence in both the anomalies detected and the proactive forecasts. Additionally, customer domain knowledge can be layered into Netuitive's real-time analysis in the form of knowledge bases and supervised learning algorithms. The Research Group at Netuitive works closely with our Professional Services Group, as well as directly with customers, to regularly review the quality of delivered alarms, tune our existing algorithms, and identify new algorithms that would deliver greater value in an actionable timeframe.

Since Netuitive's software architecture allows for "pluggable" algorithms, we can introduce new analytics capabilities incrementally: first in an experimental or laboratory setting and ultimately, once verified, in production.
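
As an illustration of what such a plug-in seam can look like (the interface and registry names below are hypothetical, not Netuitive's actual API), a small detector contract plus a registry is enough to let a new algorithm be trialed in a lab configuration and later promoted to production without touching the rest of the pipeline:

from abc import ABC, abstractmethod
from typing import Dict, Type

class AnomalyDetector(ABC):
    """Contract that every pluggable detector must satisfy."""

    @abstractmethod
    def observe(self, metric: str, value: float) -> bool:
        """Return True if this observation is anomalous for the given metric."""

DETECTOR_REGISTRY: Dict[str, Type[AnomalyDetector]] = {}

def register(name: str):
    """Class decorator that makes a detector selectable by name, e.g. from config."""
    def wrap(cls: Type[AnomalyDetector]) -> Type[AnomalyDetector]:
        DETECTOR_REGISTRY[name] = cls
        return cls
    return wrap

@register("static-threshold")
class StaticThreshold(AnomalyDetector):
    """The simplest possible detector: flag anything above a fixed limit."""

    def __init__(self, limit: float = 500.0):
        self.limit = limit

    def observe(self, metric: str, value: float) -> bool:
        return value > self.limit

# Swapping in a newer algorithm is a matter of registering it under a new name
# and referencing that name in configuration.
detector = DETECTOR_REGISTRY["static-threshold"]()
print(detector.observe("payment.latency_ms", 720.0))   # True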

The IT operations management market has matured over the past two decades to the point that most critical components are well instrumented. The data is there, and mainstream IT organizations (not just visionary early adopters) realize that analytics deliver measurable and tangible value. My vision and challenge is to get our platform to the point where customers can easily customize the algorithms on their own as their needs and IT infrastructure evolve over time. This is where platforms need to go, given the endless variety of ways in which enterprises must discover and remediate "anomalies that matter".

Stay tuned. In an upcoming blog post I will drill down into specific industry examples of algorithms we developed as part of large enterprise IT analytics platform solutions.

More Stories By Elizabeth A. Nichols, Ph.D.

As Chief Data Scientist for Netuitive, Elizabeth A. Nichols, Ph.D. leads development of algorithms, models, and analytics. This includes both enriching the company’s current portfolio as well as developing new analytics to support current and emerging technologies and IT-dependent business services across multiple industry sectors.

Previously, Dr. Nichols co-founded PlexLogic, a provider of open analytics services for quantitative data analysis, risk modeling and data visualization. In her role as CTO and Chief Data Scientist, she developed a cloud platform for collecting, cleansing and correlating data from heterogeneous sources, computing metrics, applying algorithms and models, and visualizing results. Prior to PlexLogic, Dr. Nichols co-founded and served as CTO for ClearPoint Metrics, a security metrics software platform that was eventually sold to nCircle. Prior to ClearPoint Metrics, Dr. Nichols served in technical advisory and leadership positions at CA, Legent Corp, BladeLogic, and Digital Analysis Corp. At CA, she was VP of Research and Development and Lead Architect for agent instrumentation and analytics for CA Unicenter. After receiving a Ph.D. in Mathematics from Duke University, she began her career as an operations research analyst developing war gaming models for the US Army.
