Most enterprises want to move their workloads to the cloud, and with multiple cloud environments offering different pricing models, it becomes crucial to understand workload performance and resource-usage trends across those environments. Keeping the application's memory footprint small can save the cloud user money when the pricing model is based on memory usage (GB/hr). Faster application startup and ramp-up times can save money when the pricing model is based on CPU consumed.
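As a back-of-the-envelope illustration of the GB/hr point (the rate and hours below are hypothetical, not drawn from any provider's actual price list), halving the memory footprint halves the monthly memory bill:

```shell
# Hypothetical rate: $0.005 per GB-hour, ~730 hours in a month.
awk 'BEGIN { printf "2 GB footprint: $%.2f/month\n", 2 * 0.005 * 730 }'
awk 'BEGIN { printf "1 GB footprint: $%.2f/month\n", 1 * 0.005 * 730 }'
```

The same reasoning applies to CPU-based pricing: the faster an ephemeral container starts and ramps up, the fewer billable CPU-seconds it consumes per request served.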
Startup time, CPU and memory usage, and scaling are critical attributes in the cloud, where the focus shifts from long-running applications to ephemeral containers. Because containers such as Docker are widely used in the cloud, the runtime needs to be tested in container environments, and it is important to get the best metrics for your workload there.
In this talk, we will give you guidelines on how to choose a configuration for your workload, which parameters need to be tuned to get the best metrics (throughput, startup time, footprint, CPU consumed) in public and private cloud environments, and why performance testing in the cloud is important. Every workload has different requirements; based on its memory, startup, and throughput requirements, it is important to tune the parameters in the cloud configuration accordingly.
We will talk about how the application performed in different environments (bare metal, a VM, and a cloud VM) and discuss some of the factors that affected performance on VMs and how to mitigate them. Based on our experience, we have compiled a list of performance-related tunings at every level, from hardware, OS, Java, and the app server (Liberty) down to Docker, that we would like to share with the audience so they can tune their settings for maximum throughput or optimized resource usage.
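A minimal sketch of what the Docker- and Java-level tunings look like in practice (the image name is a placeholder, the values are illustrative, and flag support depends on the JVM shipped inside the image):

```shell
# Cap the container's resources and size the JVM relative to that cap.
docker run -d \
  --memory=1g \
  --cpus=2 \
  -e JAVA_TOOL_OPTIONS="-XX:MaxRAMPercentage=75.0 -Xshareclasses -Xquickstart" \
  my-liberty-app
# --memory / --cpus cap the container's memory and CPU;
# -XX:MaxRAMPercentage sizes the heap as a fraction of the container limit
#   instead of the host's physical memory;
# -Xshareclasses and -Xquickstart are OpenJ9 options that trade some peak
#   JIT optimization for faster startup, which suits ephemeral containers.
```

The right trade-off depends on the workload: a long-lived, throughput-oriented service would drop the startup-oriented options, while a short-lived container benefits from them.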