Just as application managers are coming to terms with managing the availability and performance of tiered and composite applications, they face a new challenge: managing those applications in dynamic environments composed of both virtual and physical infrastructure.
With today's economic conditions, cost- and capacity-conscious CIOs are pushing their production datacenters onto the virtualization bandwagon with an even greater sense of urgency. Enterprises initially adopted virtualization for tactical projects, in an effort to prove out the new concepts and technologies in production environments. Now, many enterprises are using virtualized platforms as the default for all (or most) of their new datacenter server needs. For these companies, all that was really needed to raise virtualization from a tactical buy to a strategic focus was either an executive mandate or a compelling external event.
The current economic storms (aka the external event) and the resulting corporate belt-tightening (aka the executive mandate) have virtualization soaring. Virtualization solutions (particularly with the improved capacity planning and power management available in server virtualization packages) are at the top of most CIOs' to-do lists. However, the laws of cause and effect still apply: rapid virtualization of a production environment will impact application and service management.
Why? Virtualization removes the notion of an application as a predetermined set of features delivered by a predetermined set of software running on a predetermined set of infrastructure - virtualization changes how we need to think about applications.
An application is now a changeable transaction that traverses changeable software services and is deployed as migrating software stacks. Application performance managers must handle these changeable transactions and services while meeting demanding SLAs for application availability and performance, and it is precisely those 'migrating software stacks' that have application managers worried about their ability to do so.
Why? Virtualization takes away the final anchor that stabilized traditional application management solutions - the physical location of application software.
Consider how deeply this assumption is embedded in how we think about application management. For example, how does an application manager, who knows nothing about a particular application, learn about its architecture or relationships? Historically, the starting point has been the physical application server. With the MAC address and the right access levels, a manager could start a discovery process to determine the structure and configuration of the software resident on that server. The configuration of the physical hardware immediately provides clues about the application's peak performance and capacity requirements.
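The discovery step described above can be sketched in code. The following is a hypothetical illustration, not any vendor's actual tool: host names, port lists, and role mappings are all illustrative assumptions. It shows the basic idea of starting from a known server and inferring application roles from what is listening there.

```python
# Hypothetical discovery sketch: starting from known servers, infer each
# machine's application-tier role from its listening ports. All host names,
# ports, and mappings below are illustrative assumptions.

# Common listening ports and the application tier they typically suggest.
PORT_ROLES = {
    80: "web server",
    443: "web server",
    8080: "application server",
    3306: "database server (MySQL)",
    1521: "database server (Oracle)",
}

def infer_roles(open_ports):
    """Map a server's open ports to likely application-tier roles."""
    return sorted({PORT_ROLES[p] for p in open_ports if p in PORT_ROLES})

# A real discovery agent would gather open ports per host (e.g., via an
# installed agent or management interface); here the results are stubbed in.
discovered = {
    "srv-web-01": [80, 443],
    "srv-app-01": [8080],
    "srv-db-01": [3306],
}

for host, ports in discovered.items():
    print(host, "->", infer_roles(ports))
```

The key assumption this sketch inherits from traditional tooling is the same one the article questions: that each host, once identified, stays where it is.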
With virtualization, this starting point completely changes. The starting question is not "what is the application?" Instead the starting question is "where is the application?"
Similarly, some of the techniques application managers use to reverse engineer transaction paths and infrastructure relationships also assume that communicating software entities are stationary in their physical location. Application managers often compare a physical topology map of their web, application, and database servers with the transaction paths mapped with real-user monitoring to determine whether a transaction is 'behaving normally.' In other words, they are using the 'fact' that web, application, and database server locations do not change in order to determine whether or not the relationships between those servers have changed.
This approach does not work in a virtual environment where system migration is a frequent occurrence. The situation will be complicated further as virtualization of networks and storage picks up steam. As network connections to enterprise data stores and other legacy physical systems are virtualized, the potential for transitioning entire business services without user-perceived disruptions increases, as does the potential for performance management teams to be unaware that the transition has occurred.
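The baseline-versus-observed comparison described above can be made concrete with a small sketch. This is an illustrative assumption of how such a check might work, with made-up host names and edges, not a description of any monitoring product.

```python
# Illustrative sketch of the comparison described above: a baseline topology
# (the assumed-static server relationships) checked against transaction paths
# observed by real-user monitoring. All edge data is hypothetical.

def topology_drift(baseline_edges, observed_edges):
    """Return relationships that appeared or disappeared since the baseline."""
    baseline, observed = set(baseline_edges), set(observed_edges)
    return {
        "new": sorted(observed - baseline),
        "missing": sorted(baseline - observed),
    }

baseline = [("web-01", "app-01"), ("app-01", "db-01")]
# After a live migration, the app tier now talks to a different database host.
observed = [("web-01", "app-01"), ("app-01", "db-02")]

print(topology_drift(baseline, observed))
```

In a static physical environment, any reported drift signals abnormal behavior; in a virtualized one, the same drift may just mean a VM moved, which is exactly why this technique stops being trustworthy.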
While virtualization bolsters the notion of the "dynamic data center," live migration via VMware vMotion further complicates locating the application to facilitate proper management. Likewise, SOA and IT Service Management (ITSM) vendors like BMC, CA, HP, and IBM promote the concept of "services," further separating the application from the server and breaking the application into smaller services that must be understood and managed across increasingly dynamic environments.
Easy migration of virtual servers loosens the coupling between application and server, and breaks much of the existing automation in application management solutions. In a nutshell, application performance management (including configuration, monitoring, and troubleshooting) has been tightly anchored to the physical server on which application software resides, and virtualization in production environments cuts that anchor loose. This is why early adopters are seeing rapidly shrinking mean-time-between-failures numbers for their critical applications, and why application managers are screaming about a lack of visibility into their applications' performance.
The solution needed for managing applications residing in virtualized datacenters must address that anchoring assumption. Application owners must be able to follow their application wherever it goes. They should be able to see, at a service level, how it is delivering against critical performance indicators or performance benchmarks. They need to see what their application depends on to understand what is impacting its performance.
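The host-independent, service-level view argued for above can be sketched as follows. This is a minimal illustration under stated assumptions - the application names, latency samples, and SLA threshold are all invented for the example - showing how performance can be aggregated per application rather than per server.

```python
# A minimal sketch of service-level evaluation that follows the application
# rather than the server: latency samples are aggregated per application,
# ignoring which (possibly migrating) host served them. The threshold and
# sample data below are illustrative assumptions.

def sla_report(samples, threshold_ms):
    """samples: list of (app, host, latency_ms). Returns per-app SLA compliance."""
    by_app = {}
    for app, _host, latency in samples:  # host is recorded but not keyed on
        by_app.setdefault(app, []).append(latency)
    return {
        app: sum(lat <= threshold_ms for lat in vals) / len(vals)
        for app, vals in by_app.items()
    }

samples = [
    ("orders", "vm-a", 120), ("orders", "vm-b", 450),  # app migrated mid-window
    ("orders", "vm-b", 180), ("billing", "vm-c", 90),
]
print(sla_report(samples, threshold_ms=200))
```

The design point is simply that the aggregation key is the application, not the machine, so a vMotion event in the middle of the measurement window does not fragment or lose the service-level picture.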
Additionally, we have to change our perceptions of what comprises an application. The longer we cling to the idea of an application server, a database server, or a web server, the harder it will be to come to terms with the fact that none of those things is tied to a physical server. The solution's approach and implementation must take the 'server' out of application management, and truly manage the application itself.
This is an issue that must be addressed sooner rather than later for two reasons. First, enterprise applications touch every aspect of doing business; therefore, application performance management is a business critical activity. Second, there is simply no stopping the virtualization wave; therefore, it is better to meet the application management challenges head-on.
Today's dynamic data center requires an end-to-end, application-centric approach - from application dependency mapping to service-level driven triage - found in emerging application service management solutions such as those from BlueStripe Software. This approach gives application and IT administrators visibility into the performance of specific applications running on physical and virtual machines, so they can successfully manage their business-critical applications.
Only time will tell if BlueStripe and application service-level performance insight can tackle the full extent of these issues. But there's no doubt this next phase of virtualization necessitates insight and intelligence at the application level, and even more importantly, at the business service level. In other words, it's all about manageability with the emphasis shifting from performance of the physical infrastructure to the management of the applications that impact the nature of today's business.