Capacity Planning in Performance Testing
What is Capacity Planning?
Capacity Planning is the process of determining what type of hardware and software configuration is required to meet application needs. Capacity planning, performance benchmarks and validation testing are essential components of successful enterprise implementations. Capacity planning is an iterative process. A good capacity management plan is based on monitoring and measuring load data over time and implementing flexible solutions to handle variances without impacting performance.
The goal of capacity planning is to identify the right amount of resources required to meet service demands now and in the future. It is a proactive discipline with far-reaching impact, supporting:
• IT and business alignment, helping to show the cost and business need for infrastructure upgrades
• Consolidation and virtualization strategies, ensuring that consolidated real and virtual system configurations will meet service levels
Capacity Planning Approach
Capacity planning means planning for efficient resource use by applications during development and deployment, but also once an application is operational. It involves considering how different resources can be accessed simultaneously by different applications, and knowing whether that access happens in an optimal way. Large organizations and operational environments place high expectations on capacity planning.
The thesis considers which types of configurations (clustered, unclustered, etc.) should be taken into consideration, which categories of applications can run on the same server or cluster, and what should be avoided when planning for capacity.
Capacity planning should be conducted when:
• Designing a new system
• Migrating from one solution to another
• Business processes and models have changed, thus requiring an update to the application architecture
• End user community has significantly changed in number, location, or function
Typical objectives of capacity planning are to estimate:
• Number and speed of CPU Cores
• Required Network bandwidth
• Memory size
• Storage type and size
Key items influencing capacity:
• Number of concurrent users
• User workflows
• Tuning and implementation of best practices
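As a rough illustration of how these objectives and influences connect, the number of CPU cores can be estimated from concurrent-user figures. This is a minimal sketch; the request rate, CPU cost per request, and utilization target below are hypothetical inputs, not recommended values:

```python
# Hypothetical sizing sketch: estimate CPU cores from concurrent users.
# All figures (requests per user, CPU seconds per request, target
# utilization) are illustrative assumptions.

import math

def cores_needed(concurrent_users, requests_per_user_per_sec,
                 cpu_sec_per_request, target_utilization=0.7):
    """Estimate CPU cores so average utilization stays below the target."""
    arrival_rate = concurrent_users * requests_per_user_per_sec  # req/s
    cpu_demand = arrival_rate * cpu_sec_per_request              # busy cores
    return math.ceil(cpu_demand / target_utilization)

# Example: 500 users, 0.2 req/s each, 40 ms of CPU per request.
print(cores_needed(500, 0.2, 0.04))  # 6 cores at a 70% utilization target
```

The same shape of calculation applies to network bandwidth, memory, and storage: multiply demand per user by the user count, then divide by a headroom target.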
Capacity planning is about how many resources an application uses, which implies knowing a system's profile. For instance, suppose you have two applications, A and B, each known to use certain amounts of Central Processing Unit (CPU), memory, disk, and network resources when running as the only application on a machine, but you have only one machine. If application A uses little of one resource and application B uses much of the same resource, this is a simple case of capacity planning. Keep in mind, however, that when the applications run in parallel on the same machine, total resource usage is not a simple sum of their standalone usage. Memory contention between the two, for example, could make parallel execution infeasible without rewriting the code.
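The point that parallel usage is not a simple sum can be sketched numerically. The contention overhead factor below is a made-up illustration of co-scheduling cost, not a measured value:

```python
# Sketch: combined utilization of two co-located applications exceeds
# the naive sum. The contention overhead (cache pressure, context
# switches) is an illustrative assumption, not a measured figure.

def combined_utilization(app_a, app_b, contention_overhead=0.15):
    """Estimate total utilization of one shared resource (0.0-1.0 scale)."""
    naive_sum = app_a + app_b
    return naive_sum * (1 + contention_overhead)

# App A uses 20% of the CPU alone, app B uses 50% alone.
total = combined_utilization(0.20, 0.50)
print(f"{total:.3f}")  # 0.805 -> both fit on one machine, with headroom
```

In practice the overhead factor would come from measuring both applications running together, not from a constant.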
Because the needs of an application are determined, among other things, by the number of users (that is, the number of parallel accesses), capacity planning can also be defined as: how many users can a system handle before changes need to be made? Thus, when deploying an application, one should consider not only its initial size but also how fast the number of users, servers, and machines will grow, so that enough margin is left and the application does not have to be completely reworked because of, say, the addition of a single user.
To perform capacity planning, essential data is collected and analyzed to determine usage patterns and to project capacity requirements and performance characteristics. Tools are used to determine optimum hardware/software configurations.
Bottlenecks, or areas of marked performance degradation, should be addressed while developing your capacity management plan. The objective of identifying bottlenecks is to meet your performance goals, not to eliminate all bottlenecks; resources within a system are finite, so by definition at least one resource (CPU, memory, or I/O) will be a bottleneck. Planning for anticipated peak usage, for example, may help minimize the impact of bottlenecks on your performance objectives.
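One simple way to frame bottleneck identification is to pick the resource closest to saturation under peak load. The utilization figures below are illustrative, not from a real system:

```python
# Sketch: the likely bottleneck is the resource with the highest
# utilization at peak load. Figures are hypothetical measurements.

def find_bottleneck(utilization):
    """Return the resource name with the highest utilization."""
    return max(utilization, key=utilization.get)

peak_load = {"cpu": 0.85, "memory": 0.60, "io": 0.92}
print(find_bottleneck(peak_load))  # io
```

Fixing the top bottleneck simply promotes the next resource to that role, which is why the goal is meeting performance targets rather than eliminating every bottleneck.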
There are several ways to address system bottlenecks. Some common solutions include:
• Using Clustered Configurations
• Using Connection Pooling
• Setting the Max Heap Size on JVM
• Increasing Memory or CPU
• Segregation of Network Traffic
Using Clustered Configurations
Clustered configurations distribute workloads among multiple identical cluster member instances. This effectively multiplies the amount of resources available to the distributed process, and provides seamless failover for high availability.
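A minimal sketch of the distribution idea, using round-robin routing across identical members; the member names and the routing policy are hypothetical stand-ins for a real load balancer:

```python
# Sketch of workload distribution across identical cluster members
# using round-robin routing. Member names are hypothetical.

import itertools

class Cluster:
    def __init__(self, members):
        self._members = list(members)
        self._cycle = itertools.cycle(self._members)

    def route(self, request):
        """Send the request to the next member in round-robin order."""
        return next(self._cycle)

cluster = Cluster(["node-1", "node-2", "node-3"])
targets = [cluster.route(f"req-{i}") for i in range(6)]
print(targets)  # each node receives two of the six requests
```

If one member fails, removing it from the rotation lets the remaining members absorb its share, which is the failover property the paragraph describes.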
Using Connection Pooling
To improve performance by reusing existing database connections, you can limit the number of connections, the timing of sessions, and other parameters by modifying the connection strings.
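The reuse idea can be sketched as a small pool that caps concurrent connections and hands the same objects back out. The `Connection` class below is a stand-in, not a real database driver:

```python
# Minimal connection-pool sketch. Connection is a hypothetical stand-in
# for a real driver; a bounded queue caps concurrent sessions.

import queue

class Connection:
    def execute(self, sql):
        return f"ran: {sql}"

class ConnectionPool:
    def __init__(self, max_connections=5):
        self._pool = queue.Queue(maxsize=max_connections)
        for _ in range(max_connections):
            self._pool.put(Connection())

    def acquire(self, timeout=None):
        # Blocks (up to timeout) when all connections are in use,
        # which is what limits the number of database sessions.
        return self._pool.get(timeout=timeout)

    def release(self, conn):
        self._pool.put(conn)

pool = ConnectionPool(max_connections=2)
conn = pool.acquire()
print(conn.execute("SELECT 1"))  # ran: SELECT 1
pool.release(conn)
```

Acquiring avoids the cost of opening a fresh connection per request; releasing returns the connection for the next caller instead of closing it.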
Setting the Max Heap Size on JVM (Java Virtual Machines)
This is an application-specific tunable that enables a tradeoff between garbage collection times and the number of JVMs that can run on the same hardware. Large heaps are used more efficiently and often result in fewer garbage collections; more JVM processes offer more failover points.
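The tradeoff can be made concrete as simple arithmetic: a fixed amount of RAM holds either a few large-heap JVMs or many small-heap ones. The machine size and per-process overhead below are hypothetical:

```python
# Sketch of the heap-size tradeoff: fewer large-heap JVMs vs. more
# small-heap JVMs on one machine. RAM and overhead figures are
# hypothetical assumptions.

def jvm_count(total_ram_gb, heap_gb, per_jvm_overhead_gb=0.5):
    """How many JVMs with a given max heap (-Xmx) fit on one machine."""
    return int(total_ram_gb // (heap_gb + per_jvm_overhead_gb))

# 64 GB machine: large heaps -> fewer processes, fewer (but longer) GCs;
# small heaps -> more processes, hence more failover points.
print(jvm_count(64, 16))  # 3 large-heap JVMs
print(jvm_count(64, 4))   # 14 small-heap JVMs
```

The right point on this curve depends on the application's garbage collection behavior and availability requirements.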
Increasing Memory or CPU
Aggregating more memory and/or CPU on a single hardware resource allows localized communication between the instances sharing the same hardware. More physical memory and processing power on a single machine enables the JVMs to scale and run much larger and more powerful instances, especially 64-bit JVMs. Large JVMs tend to use memory more efficiently, and garbage collections tend to occur less frequently. In some cases, adding more CPU means that the machine has more instruction and data cache available to the processing units, which means even higher processing efficiency.
Segregation of Network Traffic
Network-intensive applications can introduce significant performance issues for other applications sharing the network. Segregating the traffic of time-critical applications from network-intensive applications, so that they are routed to different network interfaces, may reduce the performance impact. It is also possible to assign different routing priorities to traffic originating from different network interfaces.
Done well, capacity planning helps to:
• Increase revenue through maximum availability, decreased downtime, improved response times, greater productivity, greater responsiveness to market dynamics, and greater return on existing IT investment
• Decrease costs through higher capacity utilization, more efficient processes, just-in-time upgrades, and greater cost control
Future Direction/Long Term Focus
The capacity planning process produces a forecast, or plan, for the organization's future. Capacity planning determines the optimal way to satisfy business requirements, such as forecasted increases in the amount of work to be done, while at the same time meeting service level requirements. Future processing requirements can come from a variety of sources. Inputs from management may include expected growth in the business, requirements for implementing new applications, IT budget limitations, and requests for consolidation of IT resources.
The basic steps involved in developing a capacity plan are:
1. To determine service level requirements
a. Define workloads
b. Determine the unit of work
c. Identify service levels for each workload
2. To analyze current system capacity
a. Measure service levels and compare to objectives
b. Measure overall resource usage
c. Measure resource usage by workload
d. Identify components of response time
3. To plan for the future
a. Determine future processing requirements
b. Plan future system configuration
By following these steps, you can help ensure that your organization is prepared for the future and that service level requirements will be met using an optimal configuration.
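Step 3 above (plan for the future) can be sketched as projecting today's measured utilization forward under a forecast growth rate. The growth rate, utilization, and capacity ceiling below are illustrative assumptions:

```python
# Sketch of step 3: project resource demand from forecast business
# growth. The growth rate, current utilization, and capacity ceiling
# are hypothetical inputs.

def projected_utilization(current_utilization, annual_growth, years):
    """Compound current utilization by the forecast annual workload growth."""
    return current_utilization * (1 + annual_growth) ** years

def upgrade_needed(current_utilization, annual_growth, years, ceiling=0.75):
    """True if projected demand exceeds the capacity ceiling in the horizon."""
    return projected_utilization(current_utilization, annual_growth, years) > ceiling

# CPU at 50% today, workload growing 20% per year:
print(upgrade_needed(0.50, 0.20, 1))  # False (0.60, still under ceiling)
print(upgrade_needed(0.50, 0.20, 3))  # True  (0.864, upgrade required)
```

Running this projection per workload, using the measurements from step 2, indicates when each resource will need an upgrade and lets upgrades be scheduled just in time rather than reactively.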