By Larry Dragich
July 7, 2014 09:00 AM EDT
Monitoring application performance at the surface and in the currents below is a great way to build a performance baseline and develop application fluency. Ironically, even the deep-dive tool sets in place today may not provide all the insight you need to quickly resolve anomalous behavior.
Standing back on the shore waiting for an event to float by is not the best approach to proactive monitoring. Synthetic monitoring (active monitoring) is needed to reduce the blind spots around critical business applications.
For example, we recently experienced a production issue on a fully instrumented critical business application that at first appeared nebulous.
During peak volume, the Service Desk was taking calls from users at random locations who couldn't log in; however, users already on the system were fine. Even when those users logged out, they could log back in and continue working. Other facts that came in made the issue even more perplexing.
So what did we use to find the issue? Our synthetic monitoring tool, which popped an alert on two externally facing applications.
Root cause? Our Internet provider's DNS resolution was not working properly. Any machine that needed name resolution and didn't already have the record cached for the day couldn't get a login page. For further insight, see the full article.
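A lightweight synthetic check for exactly this failure mode can be sketched in a few lines. This is a minimal illustration, not the monitoring product described above, and the hostnames shown in the usage note are placeholders:

```python
import socket

def dns_resolves(hostname: str) -> bool:
    """Synthetic DNS probe: True if `hostname` resolves to at least one address.

    A probe like this, scheduled to run from outside the data center,
    catches upstream resolver failures before the Service Desk phone rings.
    """
    try:
        # getaddrinfo asks the system resolver; an empty result or an
        # exception both mean the name is not currently resolvable.
        return len(socket.getaddrinfo(hostname, None)) > 0
    except socket.gaierror:  # resolution failed (e.g., the ISP's DNS is down)
        return False
```

In practice you would pair a probe like this (against a placeholder such as `login.example.com`) with an HTTP fetch of the login page itself, so an alert can distinguish "name resolution failed" from "page failed to load" — precisely the distinction that made this incident look nebulous.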
Image: Travis Miller/Flickr (Top)