Top 3 Missed Opportunities in Load and Performance Testing

Maximizing Load and Performance Testing Effectiveness

The goal of load and performance testing is to ensure application reliability and user satisfaction. Often this work is “siloed” in the organization, with one group responsible for the host system and another group responsible for the network. What is the problem?

Understanding Application Performance Under Load

The group responsible for the application running on the host system will test the number of concurrent sessions the application can support. The goal is to find the limit: the application can easily support up to N users, but overall response time degrades when the (N + 1)th user logs on. This delay, or latency, may cause users to abandon the application, resulting in lost engagement and sales. It is therefore important for the organization to identify this limit and remediate it.
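The ramp-up described above can be sketched as a simple search for the knee in the latency curve. The sketch below assumes a hypothetical `response_time_ms` model standing in for a real measurement against the application under test; the SLA and capacity figures are illustrative, not from the article.

```python
# Sketch: find the concurrent-user limit by stepping up load until
# response time breaches a service-level threshold. The response_time_ms
# model is a hypothetical stand-in for measuring the real application.

SLA_MS = 500          # hypothetical acceptable response time
CAPACITY = 100        # hypothetical load at which the app saturates

def response_time_ms(concurrent_users: int) -> float:
    """Stand-in for measuring real response time at a given load."""
    base = 120.0  # ms under light load
    if concurrent_users <= CAPACITY:
        return base
    # Past capacity, latency grows sharply with each extra session.
    return base + (concurrent_users - CAPACITY) * 50.0

def find_user_limit(step: int = 10, max_users: int = 1000) -> int:
    """Ramp up virtual users; return the last load that met the SLA."""
    last_ok = 0
    for users in range(step, max_users + 1, step):
        if response_time_ms(users) > SLA_MS:
            break
        last_ok = users
    return last_ok

print(find_user_limit())  # → 100 with the model above
```

A real harness would replace `response_time_ms` with actual requests issued by virtual users; the stepping logic stays the same.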

Unfortunately, the number of concurrent sessions is typically tested with “virtual” users who are directly connected to the host system. Even when “real” human users access the application during testing, they will likely do so from the local area network. Neither case is representative of the typical mix of Internet users who access the application from all over the world, over a wide variety of links, technologies, and performance profiles.

Requests per second, or RPS, measures the rate at which the application receives and processes incoming user requests. A high RPS indicates a heavy load on the application. Monitoring RPS is essential for assessing an application's capacity to handle traffic and potential scalability issues.  

However, RPS accounts for the traffic only AFTER it has reached the application; it does not account for the transmission of packets from the user over the Internet to the system hosting the application.

Fluctuations or sudden spikes in RPS may indicate unexpected changes in user behavior or traffic patterns, revealing potential stability issues. To ensure the application's stability, it is important to determine the maximum sustainable RPS and to have remediation in place for when that limit is reached.

A stunningly typical example of poor testing in this area is the gaming industry: companies have launched new online games only to have the hosting systems crash because they could not accommodate the number of users logging in.

The common element in these launch fiascos is that more users attempted to access the game than the developers anticipated. While it may be impossible to forecast how many users will access an application simultaneously, it is quite possible to determine the maximum sustainable Requests Per Second. When the RPS limit is reached, the application can block further access and show each user a message such as “Due to high demand, the game is not available now. Try again later.”
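The block-and-message behavior can be sketched as a simple admission gate: once the admitted-request rate hits the maximum sustainable RPS, new requests are turned away gracefully instead of crashing the host. The limit and message text below are illustrative assumptions.

```python
# Sketch: admission control keyed to a maximum sustainable RPS.
# Requests beyond the limit within the window get a friendly
# "try again later" response rather than overloading the host.
from collections import deque

MAX_RPS = 1000  # hypothetical maximum sustainable rate
BUSY_MSG = "Due to high demand, the game is not available now. Try again later."

class AdmissionGate:
    def __init__(self, max_rps: int, window_s: float = 1.0):
        self.max_rps = max_rps
        self.window_s = window_s
        self.admitted = deque()  # timestamps of admitted requests

    def try_admit(self, now: float) -> str:
        # Forget admissions older than the window.
        while self.admitted and self.admitted[0] <= now - self.window_s:
            self.admitted.popleft()
        if len(self.admitted) < self.max_rps:
            self.admitted.append(now)
            return "admitted"
        return BUSY_MSG

gate = AdmissionGate(MAX_RPS)
results = [gate.try_admit(now=0.0) for _ in range(1500)]
print(results.count("admitted"))   # → 1000
print(results[-1] == BUSY_MSG)     # → True
```

In practice the same policy is usually enforced at a load balancer or API gateway rather than in application code, but the logic is the same.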

#1. Balancing Application Load and Network Load

The group responsible for the application running on the host system typically ignores the network access component, while the networking group is concerned primarily with the quality of network access to the company’s web servers. Neither group typically examines the user experience of connecting to one specific hosted application.

So what is the user experience connecting to the hosted application over a given network connection?  

Consider a user working from home and accessing a corporate application hosted in a cloud server. That user may be using a cable modem with a 1 Mbps upload speed (user to the application) with a 200 ms delay. The download speed would be 3 Mbps (application to the user) with 100 to 300 ms of delay.  

The application group will test with real or virtual users accessing the app over the LAN, where performance is typically 1 Gbps. That connection is hundreds of times faster than the typical remote user's Internet link, so how will the user experience change over the far slower connection?

The only way to know is to test it. The best way to do that is to emulate it in a test lab in a manner that is controlled and repeatable.
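Before emulating the link in a lab, a back-of-envelope estimate already shows why the gap matters. The sketch below compares one request/response exchange over the LAN versus the home cable link described above (1 Mbps up / 3 Mbps down, ~200 ms round trip); the payload sizes are illustrative assumptions, and real lab emulation is still needed for an accurate answer.

```python
# Sketch: estimated time for one request/response exchange, counting
# serialization time in each direction plus one round trip. This is a
# first-order estimate only; it ignores TCP ramp-up, loss, and jitter.

def exchange_ms(request_kb: float, response_kb: float,
                up_mbps: float, down_mbps: float, rtt_ms: float) -> float:
    """kB * 8 / Mbps gives milliseconds of serialization time."""
    up_ms = request_kb * 8 / up_mbps
    down_ms = response_kb * 8 / down_mbps
    return up_ms + down_ms + rtt_ms

# Hypothetical 2 kB request, 500 kB response.
lan   = exchange_ms(2, 500, up_mbps=1000, down_mbps=1000, rtt_ms=1)
cable = exchange_ms(2, 500, up_mbps=1, down_mbps=3, rtt_ms=200)
print(round(lan, 1))    # → 5.0   (milliseconds over the LAN)
print(round(cable, 1))  # → 1549.3 (milliseconds over the cable link)
```

The estimate suggests roughly a 300x difference for the same exchange, which is exactly the kind of gap that LAN-only testing never reveals.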

The test would then be repeated for all the scenarios that could be used to access the application, such as Wi-Fi and 3G/4G cellular.

To assure user satisfaction, application performance testing must include network performance.

#2. End-to-End Performance Measurement

End-to-end performance measurement means understanding the performance from the source to the destination, considering every intermediary network device, such as routers, switches, and firewalls, as well as the number of users accessing the application. This holistic view provides insights into the network's overall health and helps identify potential bottlenecks or points of failure.

Measuring end-to-end performance aids in pinpointing performance issues accurately. It helps determine whether a performance problem originates from the user's device, the local network, or the hosted application. By isolating the problem's source, organizations can take appropriate actions to resolve the issues.

End-to-end performance measurement can also identify security threats. Unusual patterns or performance degradation may indicate malicious activity or potential security breaches. Monitoring end-to-end performance can help organizations detect and respond to security incidents promptly.

Finally, end-to-end performance measurement focuses on the user experience. It ensures that network performance aligns with user expectations and requirements. By understanding how the network performs from the user's perspective, organizations can deliver a more satisfactory and reliable experience to their users.
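Attributing a slow response to the device, the network, or the application comes down to splitting the end-to-end time into segments. The sketch below assumes four timestamps (client send, server receive, server send, client receive); the values and the `decompose` helper are illustrative, and in practice the timestamps come from client instrumentation and server access logs.

```python
# Sketch: decomposing one end-to-end measurement into network time
# and server processing time, so a slow response can be attributed
# to the right segment. All timestamps are in milliseconds.

def decompose(sent, server_in, server_out, received):
    """Split total elapsed time into network and server components."""
    network_ms = (server_in - sent) + (received - server_out)
    server_ms = server_out - server_in
    total_ms = received - sent
    return {"network_ms": network_ms, "server_ms": server_ms,
            "total_ms": total_ms}

# Hypothetical trace: request sent at t=0, reaches the server at
# 150 ms, leaves at 450 ms, arrives back at the client at 600 ms.
print(decompose(0, 150, 450, 600))
# → {'network_ms': 300, 'server_ms': 300, 'total_ms': 600}
```

Here the time splits evenly between network and server, so remediation effort would be divided accordingly; a trace dominated by one segment points the responsible team at the problem.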

#3. Overcoming Historical Challenges in Load Testing

Most organizations have embarked on “digital transformation”: integrating digital technologies into business processes and customer experiences to remain competitive. One of its challenges is meeting customer expectations for using mobile apps and laptops to access the organization’s resources and services. To meet those expectations, organizations must let go of the past practice of emphasizing application and host/server performance while disregarding the network component. The teams must be merged into a multidisciplinary approach that integrates network performance testing, application performance testing, and load testing to improve the user experience in cloud-based environments.
