Taking the Server Out of Application Management
Just as application managers are coming to terms with managing the availability and performance of tiered and composite applications, they face a new challenge - managing those applications in dynamic environments composed of both virtual and physical infrastructure.
With today's economic conditions, cost- and capacity-conscious CIOs are pushing their production datacenters onto the virtualization bandwagon with an even greater sense of urgency. Enterprises initially adopted virtualization for tactical projects, proving out the new concepts and technologies in production environments. Now, many enterprises use virtualized platforms as the default for all (or most) of their new datacenter server needs. For these companies, all that was really needed to raise virtualization from a tactical buy to a strategic focus was either an executive mandate or a compelling external event.
The current economic storms (aka the external event) and the resulting corporate belt-tightening (aka the executive mandate) have virtualization soaring. Virtualization solutions (particularly with the improved capacity planning and power management available in server virtualization packages) are on the top of most CIOs' to-do lists. However, the laws of cause and effect still apply: rapid virtualization of a production environment will impact application and service management.
Why? Virtualization removes the notion of an application as a predetermined set of features delivered by a predetermined set of software running on a predetermined set of infrastructure - virtualization changes how we need to think about applications.
An application is now a changeable transaction that traverses changeable software services and is deployed as migrating software stacks. Application performance managers must track these changeable transactions and services while meeting demanding SLAs for application availability and performance - and it is precisely those 'migrating software stacks' that have application managers worried about their ability to do so.
Why? Virtualization takes away the final anchor which stabilized traditional application management solutions - the physical location of application software.
Consider how deeply this assumption is embedded in how we think about application management. For example, how does an application manager, who knows nothing about a particular application, learn about its architecture or relationships? Historically, the starting point has been the physical application server. With the MAC address and the right access levels, a manager could start a discovery process to determine the structure and configuration of the software resident on that server. The configuration of the physical hardware immediately provides clues about the application's peak performance and capacity requirements.
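The host-anchored discovery described above can be sketched in a few lines. This is an illustrative model only - the inventory data, MAC address, and function names are assumptions for the sketch, not any vendor's discovery API - but it shows how the physical server acts as the starting point from which software structure and capacity clues are derived.

```python
# Hypothetical sketch of host-anchored discovery. The inventory below is
# illustrative stand-in data: in practice it would come from probing the
# server itself, given the right access levels.
HOST_INVENTORY = {
    "00:1a:2b:3c:4d:5e": {
        "hardware": {"cpus": 8, "ram_gb": 32},   # clues to peak capacity
        "processes": [
            {"name": "httpd", "port": 80, "talks_to": "10.0.0.12:8080"},
            {"name": "jboss", "port": 8080, "talks_to": "10.0.0.20:5432"},
        ],
    },
}

def discover(mac_address):
    """Starting from one physical server, map the software resident on it."""
    host = HOST_INVENTORY.get(mac_address)
    if host is None:
        return None
    return {
        "capacity_hint": host["hardware"],
        "tiers": [p["name"] for p in host["processes"]],
        "dependencies": [p["talks_to"] for p in host["processes"]],
    }

print(discover("00:1a:2b:3c:4d:5e"))
```

The key point is the anchor: everything the manager learns hangs off a fixed physical address. Remove that anchor, as virtualization does, and the walk has no place to start.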
With virtualization, this starting point completely changes. The starting question is not "what is the application?" Instead the starting question is "where is the application?"
Similarly, some of the techniques application managers use to reverse engineer transaction paths and infrastructure relationships also assume that communicating software entities are stationary in their physical location. Application managers often compare a physical topology map of their web, application, and database servers with the transaction paths mapped with real-user monitoring to determine whether a transaction is 'behaving normally.' In other words, they are using the 'fact' that web, application, and database server locations do not change in order to determine whether or not the relationships between those servers have changed.
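That comparison can be made concrete with a minimal sketch. The server names and static map below are assumptions for illustration; the logic simply treats any observed hop that is absent from the static topology as drift - which is exactly the inference that breaks once servers are free to move.

```python
# Illustrative sketch (not a vendor tool): flag topology drift by comparing
# a static map of server-to-server relationships against the hops observed
# in a real-user transaction trace.

# Assumed static topology: the 'fact' that these relationships don't change.
STATIC_TOPOLOGY = {("web01", "app01"), ("app01", "db01")}

def observed_edges(transaction_path):
    """Turn an ordered list of hops into a set of directed edges."""
    return set(zip(transaction_path, transaction_path[1:]))

def topology_drift(transaction_path):
    """Edges seen in the trace that the static map says should not exist."""
    return observed_edges(transaction_path) - STATIC_TOPOLOGY

# A transaction that unexpectedly reached a different database server:
print(topology_drift(["web01", "app01", "db02"]))  # {('app01', 'db02')}
```

When workloads migrate, `db02` may simply be `db01`'s new home, so the drift signal becomes noise - the technique reports a broken relationship when only the location changed.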