ABSTRACT

Virtualization technology imposes a modest latency overhead to support multitenancy, resource pooling, and other enablers of cloud computing. This overhead creates both tail application latency risk and typical application latency risk. Virtualization adds some processing overhead between application software and the physical hardware resources. This incremental overhead is normally very small, but resource contention from infrastructure software and from other applications sharing the underlying resources, along with other factors, often causes application latency to be somewhat greater than that of similar natively deployed applications. Causes of typical and tail latency risk for virtualized applications include: decomposed infrastructure architectures; resource placement; resource contention; infrastructure technologies and sharing policies; failure or deficiency in virtualized application configuration; and monitoring and metrics. The chapter explains how increased stochastic variation can manifest as increased latency. Virtualized application latency risks are captured as two distinct risks: tail application latency risk and typical application latency risk.
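
The link between increased stochastic variation and increased latency can be illustrated with a minimal single-server queueing sketch (not taken from the chapter). Two workloads with the same mean service time but different service-time variability are pushed through a FIFO queue using the Lindley waiting-time recursion; the utilization level, distributions, and percentiles below are illustrative assumptions only.

```python
import random
import statistics

def simulate_fifo_queue(service_times, mean_interarrival, seed=1):
    """Single-server FIFO queue via the Lindley recursion:
    wait[n+1] = max(0, wait[n] + service[n] - interarrival[n])."""
    rng = random.Random(seed)
    wait = 0.0
    response_times = []
    for s in service_times:
        response_times.append(wait + s)                # response = queueing wait + service
        a = rng.expovariate(1.0 / mean_interarrival)   # Poisson arrivals
        wait = max(0.0, wait + s - a)
    return response_times

def percentile(values, p):
    ordered = sorted(values)
    return ordered[int(p * (len(ordered) - 1))]

N = 200_000
MEAN_SERVICE = 1.0        # both workloads consume the same average service time
MEAN_INTERARRIVAL = 1.25  # roughly 80% server utilization (assumed)

rng = random.Random(42)
# Low-variability workload: nearly deterministic service times.
low_var = [MEAN_SERVICE for _ in range(N)]
# High-variability workload: same mean, but 10% of requests are much slower
# (e.g., hitting contention from a noisy neighbor).
high_var = [rng.expovariate(10.0) if rng.random() < 0.9
            else rng.expovariate(1.0 / 9.1)
            for _ in range(N)]

for label, services in [("low variance", low_var), ("high variance", high_var)]:
    rt = simulate_fifo_queue(services, MEAN_INTERARRIVAL)
    print(f"{label:14s} mean={statistics.fmean(rt):6.2f}  "
          f"p99={percentile(rt, 0.99):7.2f}  p99.9={percentile(rt, 0.999):7.2f}")
```

Under these assumed parameters the high-variance workload typically shows both a higher mean and a markedly higher 99.9th-percentile response time than the low-variance workload, even though both place the same average demand on the server, which is the sense in which growing stochastic variation surfaces as typical and, especially, tail latency risk.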