Application Performance Testing Mistakes

More than ever, web visitors want a snappy, responsive web experience. If they don’t get it, they will click away. (1)

To help combat slow apps, see the article “10 totally avoidable performance testing mistakes” in Tech Beacon.

The article is useful and we recommend it. But there’s more to the story.

So how can you make sure your app measures up?

At IWL, we find two areas cited in the article especially problematic:

  • Having no methodology

  • Testing only over the LAN

I. No methodology

The Tech Beacon article complains that some companies have no methodology, and we agree. Too many engineers only perform “fair weather testing,” with little idea of their test objectives.

In today’s competitive market, it is not enough to fire up your app at Starbucks, access your cloud-based server, estimate your packet delay at a half second, and declare your work “DONE.”

One could argue that this is a methodology — namely, the “trying-it-out-at-Starbucks” methodology. But the Starbucks trial run fails to test the app rigorously under the real-life network conditions clients will certainly encounter. Skipping that analysis invites unnecessary customer support issues and costs.

For example, consider how Wi-Fi dials back bit rates as RF reception degrades. The app might perform fine one morning, when there are only a few customers at your favorite coffee shop, then perform horribly an hour later when the morning rush floods the store and badly performing Wi-Fi access points and Bluetooth systems all compete with one another.

At a minimum, your DevOps team should define performance benchmarks under specific test conditions. To do that, the team must design a test plan.

Some questions to ask in formulating a test plan (one way to capture the answers is sketched after the list):

  • In what network environments will your app be used? Mobile 3G and 4G? Cloud? Starbucks? Streaming media?

  • Have you identified typical worst-case and best-case scenarios for those environments?

  • Have you modeled those environments and can you test against them?
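
One way to make those answers concrete is to encode each target environment as a named profile that a test harness iterates over. The Python sketch below is illustrative only: the profile names and every parameter value are placeholder assumptions, and run_benchmark is a stand-in for your own benchmark suite.

    from dataclasses import dataclass

    @dataclass
    class NetworkProfile:
        name: str
        latency_ms: float     # one-way delay
        jitter_ms: float      # delay variation
        loss_pct: float       # random packet loss
        bandwidth_kbps: int   # bottleneck link rate

    # Placeholder values -- replace with figures measured from, or
    # published for, your own target networks.
    PROFILES = [
        NetworkProfile("lan_baseline",         0.5,  0.1, 0.0, 1_000_000),
        NetworkProfile("cafe_wifi_congested", 40.0, 25.0, 2.0,     2_000),
        NetworkProfile("mobile_3g_worst",    300.0, 80.0, 3.0,       384),
        NetworkProfile("mobile_4g_typical",   50.0, 15.0, 0.5,    12_000),
        NetworkProfile("satellite_geo",      550.0, 10.0, 0.3,     5_000),
    ]

    def run_benchmark(profile: NetworkProfile) -> None:
        """Stand-in: apply the profile on your network emulator, run the
        app's benchmark suite, and record pass/fail against its targets."""
        print(f"testing under {profile.name}: {profile.latency_ms} ms delay, "
              f"{profile.loss_pct}% loss")

    for p in PROFILES:
        run_benchmark(p)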

You may have to invest some time to determine your application’s sensitivity to various network conditions.

For example, if your app is TCP-based, then packet reordering and duplication only matter if the wasted bandwidth drags throughput below par. On the other hand, congestion-avoidance backoff may present a real performance problem.
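
If your test hosts run Linux, one rough way to probe that sensitivity is the kernel’s netem queuing discipline, driven here from Python. This is a sketch under assumptions: “eth0” stands in for your test interface, the impairment levels are arbitrary starting points, and the tc commands require root privileges.

    import subprocess

    IFACE = "eth0"   # assumption: substitute your test interface

    def set_netem(*netem_args: str) -> None:
        """Replace any existing root qdisc on IFACE with a netem config."""
        subprocess.run(["tc", "qdisc", "del", "dev", IFACE, "root"],
                       check=False)   # ignore the error if none is present
        subprocess.run(["tc", "qdisc", "add", "dev", IFACE, "root",
                        "netem", *netem_args], check=True)

    # 25% of packets skip the 10 ms delay, so they arrive out of order
    set_netem("delay", "10ms", "reorder", "25%")
    # ... run your throughput benchmark here and record the numbers ...

    # 1% of packets duplicated
    set_netem("duplicate", "1%")
    # ... rerun the same benchmark and compare ...

    # clean up
    subprocess.run(["tc", "qdisc", "del", "dev", IFACE, "root"], check=True)

Measuring throughput at each step, rather than assuming it, tells you which impairments your app is actually sensitive to.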

Alternatively, if there’s a strong must-be-delivered-on-time requirement, then jitter matters to you (think video conferencing or gaming).

Taking an impairment-based view distills a plethora of network types and topology combinations down to a manageable number of variables, which can be replicated and tested in the lab with the proper tools and incorporated into regression testing. A further benefit of this distillation is that a new technology usually changes the values of the impairment parameters but does not introduce wholly new test scenarios.
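
As a concrete instance, the toy simulation below shows why jitter, not mean latency, decides whether deadline-driven traffic survives; the delay, jitter, and deadline figures are illustrative assumptions, not measurements.

    import random

    MEAN_DELAY_MS = 80.0    # assumed mean one-way delay
    JITTER_MS = 40.0        # assumed peak delay variation
    DEADLINE_MS = 100.0     # e.g. a frame's playout deadline

    random.seed(1)
    delays = [MEAN_DELAY_MS + random.uniform(-JITTER_MS, JITTER_MS)
              for _ in range(10_000)]
    late = sum(d > DEADLINE_MS for d in delays)

    # The mean delay (80 ms) sits comfortably inside the 100 ms deadline,
    # yet with +/-40 ms of jitter roughly a quarter of packets miss it.
    print(f"mean delay: {sum(delays) / len(delays):.1f} ms")
    print(f"late packets: {100 * late / len(delays):.1f}%")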

II. Testing only over the LAN

Some engineers mistakenly believe that if the app works on the LAN with no problem, then they can mathematically extrapolate its performance in cloud, mobile, and satellite environments by simply adding the average latency figure found in a book.

That shortcut does not correctly model the end-to-end behavior of the network. It fails to consider round-trip time, mixed-technology networks, and poorly performing or overloaded servers. It also ignores the fact that latency is just one type of network impairment among many, including packet jitter, limited bandwidth, packet reordering, packet loss, and packet duplication. Finally, there is traffic on the LAN that does not appear in cloud, mobile, or satellite communications, which can skew results as well.
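
A rough back-of-the-envelope shows why. The well-known Mathis et al. approximation bounds steady-state TCP throughput at about (MSS / RTT) x (1.22 / sqrt(loss)), so throughput falls with round-trip time and loss together rather than shifting by a constant. The RTT and loss figures below are illustrative assumptions.

    from math import sqrt

    MSS_BYTES = 1460   # typical Ethernet TCP segment size

    def mathis_limit_mbps(rtt_ms: float, loss: float) -> float:
        """Upper bound on steady-state TCP throughput (Mathis et al.)."""
        rate_bps = (MSS_BYTES * 8 / (rtt_ms / 1000.0)) * (1.22 / sqrt(loss))
        return rate_bps / 1e6

    for label, rtt_ms, loss in [("LAN",        0.5, 0.0001),
                                ("cloud WAN", 40.0, 0.001),
                                ("mobile",   120.0, 0.01),
                                ("satellite",550.0, 0.01)]:
        print(f"{label:9s} RTT={rtt_ms:6.1f} ms, loss={loss:.2%} "
              f"-> at most {mathis_limit_mbps(rtt_ms, loss):8.1f} Mbit/s")

Even before server load and mixed technologies enter the picture, the bound collapses from gigabits on the LAN to well under a megabit on a lossy satellite path.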

How do you distill the overwhelming number of network technologies and topologies, with their limitless combinations, into a manageable number of repeatable test cases? And how do you factor in what can go wrong, including congestion, poor signal-to-noise conditions (which degrade bit rates), and queuing-algorithm anomalies (which produce jitter)?

IWL’s KMAX network emulator can help you get a handle on all these issues. Its pre-defined testing methodologies are repeatable and comprehensive. With KMAX, you can implement your test plan quickly and easily, reducing weeks of testing time down to a few hours.

Footnote (1): https://www.nngroup.com/articles/the-need-for-speed
