Evolution and Old School APM - Executive Viewpoint 2013 Prediction: Nastel Technologies

By Charley Rich
Wednesday, January 23rd 2013

Much has been made of the cloud as the great enabler of universal access and improved economics. But not everything will change in this brave new world. Applications will still need to be monitored. Compliance regulations will still need to be met. And users will still be unsatisfied; that may never change. Will application performance monitoring be affected by this move? Yes. Old school APM need not apply, as they say, because the situation will be different in 2013.

The amount of data generated by the plethora of cloud applications will expose a flaw in many application performance monitoring (APM) solutions: they are much better at collecting data than at analyzing it or making sense of it. I have spoken with firms that are already receiving more events per day from their monitoring systems than they can process and understand. They are falling behind and finding themselves at risk.

In 2013 we will begin to focus on the analytics portion of APM. I predict that this will become a two-tiered approach. The first tier will rationalize the data: in this phase, the “big data” received from monitoring events will be reduced by orders of magnitude to a manageable amount.

How will this be done? Pattern recognition. Tools such as complex event processing (CEP) will be used to parse the data streams, searching for early warning indicators of IT problems. One of the key values of this first phase will be the reduction in false positives: indicators that there is a serious problem when there really isn’t. Think of this as noise reduction. If you’ve ever used the Bose headphones that cancel ambient noise on a plane, you know what I mean. Events from multiple sources will also be correlated; event data may come from several monitoring tools, third-party sources, even market data feeds. And this approach will need to scale to handle millions of events per second.
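To make this concrete, here is a minimal sketch in Python of the kind of noise reduction and cross-source correlation this first phase implies: escalate only when a warning repeats within a time window and is corroborated by more than one source. The event fields, thresholds and window size are illustrative assumptions, not a description of any particular product.

```python
# A minimal sketch of first-tier "noise reduction": suppress one-off spikes and
# only escalate when a warning pattern persists across a sliding time window
# and is reported by more than one source. Field names, thresholds, and the
# window size are illustrative assumptions.
from collections import defaultdict, deque
from dataclasses import dataclass

@dataclass
class Event:
    source: str       # e.g., "app-monitor", "mq-monitor", "market-feed"
    component: str    # the resource the event refers to
    severity: int     # 0 = info ... 3 = critical
    timestamp: float  # seconds since epoch

class NoiseReducingFilter:
    def __init__(self, window_seconds=60, min_occurrences=3, min_sources=2):
        self.window = window_seconds
        self.min_occurrences = min_occurrences
        self.min_sources = min_sources
        self.recent = defaultdict(deque)  # component -> deque of (timestamp, source)

    def ingest(self, event: Event):
        """Return an escalated 'situation' dict if the pattern persists, else None."""
        if event.severity < 2:
            return None  # drop informational noise outright
        q = self.recent[event.component]
        q.append((event.timestamp, event.source))
        # discard events that have slid out of the time window
        while q and event.timestamp - q[0][0] > self.window:
            q.popleft()
        sources = {src for _, src in q}
        # escalate only if the warning repeats AND is corroborated by multiple sources
        if len(q) >= self.min_occurrences and len(sources) >= self.min_sources:
            return {"component": event.component,
                    "occurrences": len(q),
                    "sources": sorted(sources)}
        return None  # treated as a likely false positive for now
```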

The first analysis tier will create “facts”: metrics derived from the events received. These metrics encapsulate trends, and by applying various statistical analysis algorithms the tier determines in real time what is “business normal” and what is not. At this point “Big Data” has been turned into “small meaningful data.”
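As one way to picture how such a “fact” might be derived, the sketch below keeps a rolling baseline for a metric and flags values that fall outside “business normal.” The metric, history size and threshold are hypothetical choices for illustration only.

```python
# A sketch of deriving a "fact" from the event stream: maintain a rolling
# baseline of a metric (here, trades settled per minute, as an example) and
# flag deviation from "business normal" using a simple z-score test.
import math
from collections import deque

class BusinessNormalBaseline:
    def __init__(self, history_size=1440, z_threshold=3.0):
        self.samples = deque(maxlen=history_size)  # e.g., last 24h of per-minute values
        self.z_threshold = z_threshold

    def observe(self, value: float) -> dict:
        """Record one sample and return a 'fact' describing how normal it is."""
        if len(self.samples) >= 30:  # need some history before judging
            mean = sum(self.samples) / len(self.samples)
            var = sum((s - mean) ** 2 for s in self.samples) / len(self.samples)
            std = math.sqrt(var)
            z = (value - mean) / std if std > 0 else 0.0
            normal = abs(z) <= self.z_threshold
        else:
            z, normal = 0.0, True
        self.samples.append(value)
        return {"value": value, "z_score": round(z, 2), "business_normal": normal}

baseline = BusinessNormalBaseline()
fact = baseline.observe(412.0)  # e.g., trades settled this minute
```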

This reduced set of “small meaningful data” is then processed in the second tier, using case-based reasoning to look for business impact. In this phase we map the current situation against past cases to identify how close it is, as a percentage of similarity, to a past case. We might be monitoring bond trading and attempting to ensure we are compliant with regulations such as Dodd-Frank. And, yes, I predict that will still be with us in 2013. We might compare, for example, Trade ID, MQ queue status, report time and economic activity, and then look back historically and match this to a past case. From this we can take a collection of facts, describe it as a business situation, and know its true criticality. Not old school APM, but analytics-driven APM.
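To close, a rough sketch of what that matching step could look like: score the current situation against a small library of past cases and report the closest match as a percentage similarity, along with the criticality recorded for that case. The attributes, weights and cases here are invented purely for illustration and are not a real case library.

```python
# A sketch of second-tier case-based reasoning: score the current situation
# against a small case base and report the closest match as a percentage
# similarity. Attributes, weights, and cases are illustrative assumptions.
CASE_BASE = [
    {"name": "Stuck settlement queue (past incident)",
     "attributes": {"queue_depth_high": 1, "trade_confirm_late": 1,
                    "market_volatile": 0, "report_deadline_near": 1},
     "criticality": "high"},
    {"name": "Routine end-of-day backlog",
     "attributes": {"queue_depth_high": 1, "trade_confirm_late": 0,
                    "market_volatile": 0, "report_deadline_near": 0},
     "criticality": "low"},
]

WEIGHTS = {"queue_depth_high": 1.0, "trade_confirm_late": 2.0,
           "market_volatile": 1.0, "report_deadline_near": 2.0}

def best_match(situation: dict) -> dict:
    """Return the closest past case and its weighted percentage similarity."""
    best = None
    total = sum(WEIGHTS.values())
    for case in CASE_BASE:
        # weighted count of attributes on which the situation agrees with the case
        agree = sum(w for attr, w in WEIGHTS.items()
                    if situation.get(attr, 0) == case["attributes"][attr])
        similarity = 100.0 * agree / total
        if best is None or similarity > best["similarity"]:
            best = {"case": case["name"], "similarity": round(similarity, 1),
                    "criticality": case["criticality"]}
    return best

# Current facts distilled by the first tier, e.g. from MQ depth and trade events
current = {"queue_depth_high": 1, "trade_confirm_late": 1,
           "market_volatile": 1, "report_deadline_near": 1}
print(best_match(current))  # -> closest past case, percent similarity, criticality
```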