The Role of Simulation in Improving Protocol Testing


Optimizing Protocol Testing with Network Simulation

Before product deployment, one must consider both the anticipated operational environment of the new product and the product’s compliance with communication protocols. Some products will operate in a wired, static environment, where the local area network consistently carries data at one gigabit per second. Other products must operate in mobile environments. As users move, the cellular signal transitions between cell towers, sometimes with adverse effects such as reduced data speeds, interference, network congestion, or decreased bandwidth. The product must continue to comply with the communication protocols in the mobile environment; failure to do so may render the product inoperable.

Exploring the Power of Emulation Testing

By recreating challenging scenarios that an application or networked device might encounter in real-world situations, testers can ensure that the network protocol implemented in the device operates reliably under various conditions, such as packet loss, jitter, latency, or bandwidth limitations.
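As a concrete sketch, on a Linux test bench these conditions can be imposed with tc/netem; a commercial network emulator offers equivalent, finer-grained controls. The interface name and values below are placeholders, not recommendations.

```python
import subprocess

# Minimal sketch (Linux, iproute2/netem, root required): add latency with
# jitter, random loss, and a bandwidth cap to one interface.
subprocess.run(
    ["tc", "qdisc", "add", "dev", "eth0", "root", "netem",
     "delay", "80ms", "15ms",   # 80 ms latency with +/-15 ms jitter
     "loss", "3%",              # 3% random packet loss
     "rate", "2mbit"],          # bandwidth limitation (recent netem versions)
    check=True,
)
# Clean up afterwards with: tc qdisc del dev eth0 root
```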

Some important areas to profile or characterize:

  1. Can the protocol implementation recover from network disruptions?

  2. When packet loss is high, how does the protocol handle packet retransmissions (or other methods)? 

  3. Can the protocol implementation adapt to changing network scenarios by “backing off”?

The most significant concern when combining protocol testing with network emulation is whether the particular protocol defines the behavior of a device under adverse network conditions. Often, the protocol specification does not. Nevertheless, the implementer must still decide how to handle these situations.

When two hosts begin a protocol exchange, each one waits a certain amount of time for the other to respond. If the response is not received within the expected interval, a “timeout” occurs.
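A minimal sketch of timeout detection at the application level, using Python’s standard socket module; the peer address and the five-second interval are illustrative only.

```python
import socket

HOST, PORT = "192.0.2.10", 9000            # placeholder test peer

sock = socket.create_connection((HOST, PORT), timeout=5.0)
sock.settimeout(5.0)                       # wait at most 5 s for each response
try:
    reply = sock.recv(4096)                # blocks until data arrives or the timer expires
except socket.timeout:
    print("no reply within 5 s: timeout, the implementation must decide what to do next")
finally:
    sock.close()
```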

For example, the first host initiates a TCP connection by sending a SYN segment to the second host. What if the SYN packet is lost? After a specified amount of time, the first host may resend the SYN, and after several unsuccessful attempts it may stop trying to connect.
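The kernel handles SYN retransmission itself, but the same bounded-retry-with-backoff pattern can be sketched at the application layer; the attempt count and initial timeout below are arbitrary choices, not values from the TCP specification.

```python
import socket
import time

def connect_with_retries(host, port, attempts=4, first_timeout=1.0):
    """Retry a connection a bounded number of times, doubling the wait each
    time, then give up, analogous to SYN retransmission behavior."""
    timeout = first_timeout
    for _ in range(attempts):
        try:
            return socket.create_connection((host, port), timeout=timeout)
        except OSError:                    # includes socket.timeout and connection refusals
            time.sleep(timeout)            # back off before the next attempt
            timeout *= 2                   # exponential backoff
    raise ConnectionError(f"gave up after {attempts} attempts")
```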

However, many other non-standard events or impairments could occur. For example, the SYN may be duplicated, it may be delayed, or both hosts may attempt to connect to one another at the same time. Each one of these situations must be addressed, and under no circumstances should either host crash. 

What is Emulation Testing and Why Does it Matter?

Network emulation testing provides several benefits for pre-deployment testing of applications and devices in a controlled environment.

In the past, network emulation has been limited to examining overall device performance in the face of adverse network conditions; it has rarely been used to take an in-depth look at specific issues with the network protocols implemented in the application or device.

Extending protocol conformance/compliance testing to consider the impact of adverse network conditions on the protocol implementation provides the deeper insight needed to fully characterize the operational limits of the application or device.

Transport-layer protocols (OSI Layer 4) provide an excellent example of testing network protocol implementations under realistic, real-world conditions. Layer 4 protocols typically address congestion control and flow control.

Flow control provides a way to manage the data transfer rate between two network devices. For many protocols, this is done not by changing the transmission speed of the link but by other mechanisms, such as a window advertised by the receiver. Flow control prevents buffer overflow when the sending device would otherwise overwhelm the receiving device with too much data; the excess data could be dropped or lost, which could trigger retransmissions.
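The toy simulation below (not tied to any particular protocol stack; all sizes are illustrative) captures the essential idea: the sender never keeps more unacknowledged data in flight than the receiver’s advertised window allows, and it stalls until the receiver drains its buffer and acknowledges data.

```python
def simulate_flow_control(total_bytes, recv_window, segment=1460, drain_per_tick=4380):
    """Count how often the sender is blocked by a closed receive window."""
    sent = acked = stalls = 0
    while acked < total_bytes:
        window_open = (sent - acked) + segment <= recv_window
        if sent < total_bytes and window_open:
            sent += segment                            # window open: send another segment
        else:
            if sent < total_bytes:
                stalls += 1                            # sender blocked by the window
            acked = min(sent, acked + drain_per_tick)  # receiver drains its buffer and ACKs
    return stalls

print(simulate_flow_control(1_000_000, recv_window=65_535))   # a small window forces many stalls
```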

Congestion control provides mechanisms to manage the flow of traffic on the network. Usually, the sender determines that the network is congested and reduces the rate at which it sends packets (in TCP, this behavior is often described as “backing off”).
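The sketch below is a minimal additive-increase/multiplicative-decrease loop, the idea behind Reno-style TCP congestion control; it is a toy model driven by a list of per-RTT loss indications, not any stack’s actual implementation.

```python
def aimd_cwnd(loss_per_rtt, initial_cwnd=10.0):
    """Grow the congestion window by one segment per loss-free RTT and halve
    it whenever loss signals congestion (never below one segment)."""
    cwnd, history = initial_cwnd, []
    for loss in loss_per_rtt:          # one boolean per round-trip time
        cwnd = max(1.0, cwnd / 2) if loss else cwnd + 1
        history.append(cwnd)
    return history

# Loss in two consecutive RTTs halves the window twice, then growth resumes.
print(aimd_cwnd([False] * 5 + [True, True] + [False] * 5))
```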

A newer protocol, QUIC (Quick UDP Internet Connections), has other mechanisms for this. The IETF specification (RFC 9000, with loss detection and congestion control defined in RFC 9002) describes a NewReno-like default but explicitly permits other congestion control algorithms, so the sender can choose the most appropriate one.
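Many QUIC implementations make the controller pluggable behind a small interface; the sketch below is a hypothetical version of such an interface (the class and method names are inventions for illustration), with a very rough NewReno-like default in the spirit of RFC 9002.

```python
from abc import ABC, abstractmethod

class CongestionController(ABC):
    """Hypothetical plug-in interface so a sender can swap algorithms
    (NewReno-like, CUBIC, BBR, ...)."""

    @abstractmethod
    def on_ack(self, bytes_acked: int) -> None: ...

    @abstractmethod
    def on_loss(self, bytes_lost: int) -> None: ...

    @abstractmethod
    def congestion_window(self) -> int: ...

class NewRenoLike(CongestionController):
    def __init__(self, max_datagram=1200, initial_window=12_000):
        self.mss, self.cwnd = max_datagram, initial_window

    def on_ack(self, bytes_acked):
        # Congestion avoidance: grow by roughly one datagram per window of ACKed data.
        self.cwnd += self.mss * bytes_acked // self.cwnd

    def on_loss(self, bytes_lost):
        # Multiplicative decrease on a loss event, with a small floor.
        self.cwnd = max(2 * self.mss, self.cwnd // 2)

    def congestion_window(self):
        return self.cwnd
```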

With a network emulator, one can emulate these real-world conditions by increasing the packet loss and the packet delay. When subjected to loss or delay, the protocol conversation between the two devices should trigger the congestion and flow control algorithms implemented in the device. The test engineer can then verify that the protocol implementation in the device under test behaves correctly.
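A simple way to quantify the effect is to time a fixed-size transfer to a test sink before and after the impairment is applied and compare the achieved goodput; the sink address, payload size, and the assumption that the sink reads everything and then closes are all details of a hypothetical test setup.

```python
import socket
import time

def measure_goodput(host, port, payload=1_000_000, chunk=65_536):
    """Send `payload` bytes to a sink that reads and discards them, then
    report the achieved rate in Mbit/s."""
    data = b"x" * payload
    start = time.monotonic()
    with socket.create_connection((host, port)) as sock:
        for i in range(0, payload, chunk):
            sock.sendall(data[i:i + chunk])
        sock.shutdown(socket.SHUT_WR)      # signal end of data
        sock.recv(1)                       # wait for the sink to close its side
    elapsed = time.monotonic() - start
    return payload * 8 / elapsed / 1e6

# e.g. print(measure_goodput("192.0.2.20", 5001))  # run once clean, once impaired
```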

Understanding the Protocol Emulator’s Capabilities

In order to properly test protocol implementations with a network emulator, the network emulator must be able to identify and manipulate specific protocols in specific bands or flows. 

For example, the network emulator must be able to identify all of the TCP traffic and direct that traffic into one band without affecting the other protocols that are part of the communication between the devices.

The network emulator would then drop or delay the TCP traffic. This could be done at a constant rate, modeled with fluctuations, or made to follow a standard statistical distribution over an extended period.
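On a Linux test bench, the same idea can be sketched with tc: a prio qdisc at the root, a u32 filter that steers TCP (IP protocol 6) into one band, and a netem instance attached to that band with a normally distributed delay and a loss rate. The device name and the numbers are placeholders; a dedicated network emulator exposes the same concept through its own band/flow configuration.

```python
import subprocess

commands = [
    # Priority qdisc at the root (defaults to three bands).
    "tc qdisc add dev eth0 root handle 1: prio",
    # Impair band 3 only: 100 ms +/- 20 ms normally distributed delay, 2% loss.
    "tc qdisc add dev eth0 parent 1:3 handle 30: netem delay 100ms 20ms distribution normal loss 2%",
    # Steer all TCP (IP protocol 6) into band 3; other traffic keeps the default mapping.
    "tc filter add dev eth0 parent 1: protocol ip prio 1 u32 match ip protocol 6 0xff flowid 1:3",
]
for cmd in commands:
    subprocess.run(cmd.split(), check=True)
```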

As a result of this testing, one could reach conclusions about the efficacy of the implementation and its ability to respond dynamically to congested network conditions.

Features that Define Network Emulators

In addition to offering specific types of network impairments (drop, delay, jitter, duplication, bandwidth limitation, packet reordering, and link emulation), a network emulator should incorporate pre-defined scenarios based on academic research. These scenarios emulate specific situations the device is likely to encounter in the real world, for example, a factory floor or a streaming video service. They must be based on published, real-world information about the behavior and performance of those environments, including the likelihood of various anomalies, ranging from the routine to the extreme. The device’s performance in these scenarios must be recorded and rechecked so that developers can get feedback and improve their code for a more robust user experience.
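A pre-defined scenario can be represented as a timeline of impairment phases that the emulator steps through and the test harness replays; the sketch below is a made-up “factory floor” profile, and every name and value in it is purely illustrative.

```python
from dataclasses import dataclass

@dataclass
class ImpairmentPhase:
    """One phase of a scenario: how long it lasts and which impairments apply."""
    duration_s: int
    delay_ms: int
    jitter_ms: int
    loss_pct: float

# Hypothetical profile: mostly clean traffic with periodic bursts of loss and
# jitter such as heavy machinery or radio interference might cause.
FACTORY_FLOOR = [
    ImpairmentPhase(duration_s=300, delay_ms=5,  jitter_ms=1,  loss_pct=0.1),
    ImpairmentPhase(duration_s=30,  delay_ms=40, jitter_ms=20, loss_pct=5.0),
    ImpairmentPhase(duration_s=300, delay_ms=5,  jitter_ms=1,  loss_pct=0.1),
]
```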
