These pictures were taken at St. Peter’s Square. While the circumstances of the photos differ, they clearly illustrate the massive increase in the number of connected devices over the past decade. With the explosion of IoT devices, the demand for wireless connectivity is skyrocketing. For a wireless infrastructure provider aiming to perfect its network, optimizing for a small number of users is no longer sufficient. Designs must now be validated in a high-density environment to meet today’s connectivity demands.

If the functionality of a wireless network or device is perfected and proven in low-density and medium-density environments, can we assume it will follow a similar degradation trend in a high-density environment as well?

These applications and use cases encompass about 90% of the network activity observed across various user environments, providing a robust framework for identifying what matters most to the customer. I welcome your thoughts on this differentiated approach discussed above.

For example, consider a video performance test where 75 users simultaneously stream a 2000 Kbps video, and everyone has a good experience. Can we predict the experience of 100 users simultaneously streaming a 1500 Kbps video, given that the total required throughput is the same? Similarly, if the uplink throughput with 5 clients is 200 Mbps, what can we expect with 30 clients?

In all the tests, the aggregate load offered to the AP was kept the same. For instance, with 15 clients we used a 10,000 Kbps video, whereas with 100 clients we opted for a 1500 Kbps video. The performance trend while scaling unfolded as follows.
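To make this scaling rule concrete, here is a small, purely illustrative sanity check using the bitrates quoted above, showing that the per-client video bitrate was chosen so the aggregate offered load stays at roughly 150 Mbps regardless of the number of clients:

```python
# Aggregate offered load = clients x per-client video bitrate.
# Bitrates are chosen so this product stays constant while scaling clients.
scenarios = {
    15: 10_000,   # Kbps per client
    75: 2_000,
    100: 1_500,
}

for clients, kbps in scenarios.items():
    aggregate_mbps = clients * kbps / 1_000
    print(f"{clients:>3} clients x {kbps:>6} Kbps = {aggregate_mbps:.0f} Mbps aggregate")
```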

While TCP download throughput declined fairly steadily from 300 Mbps to 100 Mbps, upload throughput dropped sharply between 5 and 30 clients and then degraded more gradually. Video streaming remained satisfactory up to a load of 75 clients, but performance fell sharply, from 100% to 30%, when moving from 75 to 100 clients.

From these tests, it’s evident that performance degradation with the number of clients is non-linear. Therefore, the most accurate way to test network performance for real-world scenarios is to measure it in a high-density environment. Next, we wanted to understand which factors in scaling cause such drastic performance effects. To investigate this, we used Airtool on macOS to capture Wi-Fi packets for each test. For each test, we measured the following (a sketch of this analysis appears after the list):

  1. Average Packet Size
  2. Number of Management Frames
  3. Number of Control Frames
  4. Number of Wi-Fi Retries
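For readers who want to reproduce this kind of analysis, the sketch below shows one way to extract these four metrics from a monitor-mode capture using Python and Scapy. The file name is a placeholder and this is not the exact tooling we used; it simply illustrates the idea:

```python
from scapy.all import rdpcap, Dot11

packets = rdpcap("capture.pcap")   # placeholder: a monitor-mode Wi-Fi capture
mgmt = ctrl = retries = 0
sizes = []

for pkt in packets:
    if not pkt.haslayer(Dot11):
        continue
    dot11 = pkt[Dot11]
    sizes.append(len(dot11))       # size from the 802.11 header onward
    if dot11.type == 0:            # management frame
        mgmt += 1
    elif dot11.type == 1:          # control frame
        ctrl += 1
    if dot11.FCfield & 0x08:       # Retry bit in the Frame Control field
        retries += 1

if sizes:
    print(f"Average packet size : {sum(sizes) / len(sizes):.1f} bytes")
print(f"Management frames   : {mgmt}")
print(f"Control frames      : {ctrl}")
print(f"Wi-Fi retries       : {retries}")
```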

For uplink TCP throughput, we observed significant increases in both control frames and Wi-Fi retries. There was a minor increase in management frames, but it was not significant.

For downlink TCP throughput, we observed only a minimal increase in Wi-Fi retries. There was a consistent rise in the number of control frames, but it was minor compared to the change in average packet size, which decreased significantly and degraded in proportion to the throughput.

For video streaming performance, average packet size was once again the most influential factor. Control frames increased, but their impact was minor compared to the change in average packet size.

On further root-cause analysis, we found that the growing number of control packets and retries in the uplink was caused by uncoordinated uplink transmissions among multiple users. This is expected, given that the Wi-Fi protocol is based on contention resolution and there is no mechanism in place to manage client aggression in the uplink.
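The effect of uncoordinated contention can be illustrated with a toy Monte Carlo model of the random backoff used in Wi-Fi’s distributed channel access. The model below assumes a fixed 16-slot contention window and ignores window doubling and many other protocol details, so it is only a rough illustration of why collisions, and therefore retries, grow with the number of contending clients:

```python
import random

def collision_prob(n_stations, cw=15, trials=100_000):
    """Estimate the chance that two or more stations pick the same random
    backoff slot in a contention window of cw + 1 slots (toy model only)."""
    collisions = 0
    for _ in range(trials):
        slots = [random.randint(0, cw) for _ in range(n_stations)]
        if len(set(slots)) < n_stations:   # at least two stations collide
            collisions += 1
    return collisions / trials

for n in (5, 30, 75, 100):
    print(f"{n:>3} stations: collision probability ~ {collision_prob(n):.2f}")
```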

Similarly, in downlink-dominated traffic streams, ensuring fairness means the access point sends less data per client so that all clients can be served effectively. Features such as Air Time Fairness (ATF) facilitate this behavior.

Advancements in the radio and physical layers can certainly alleviate these issues, but simple solutions at higher layers are also possible. In the uplink, for instance, managing TCP acknowledgments can help: an algorithm that selectively drops redundant TCP acknowledgments can effectively curb uplink client aggression.
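As an illustration of what such an algorithm might look like, the sketch below thins a queue of uplink packets by keeping only the newest pure TCP ACK per flow, since cumulative acknowledgments make the older pure ACKs redundant. This is a minimal, assumed example (the data layout and policy are made up for illustration), not the actual mechanism of any particular AP or client:

```python
def thin_acks(queue):
    """ACK thinning sketch: for each TCP flow, keep only the newest pure ACK
    (no payload) in the uplink queue; data segments are always kept.

    `queue` is a list of dicts with keys:
      flow    -- (src_ip, src_port, dst_ip, dst_port)
      ack_no  -- TCP acknowledgment number
      payload -- number of payload bytes (0 for a pure ACK)
    """
    latest_pure_ack = {}          # flow -> index of newest pure ACK kept so far
    keep = []
    for pkt in queue:
        if pkt["payload"] == 0:   # pure ACK: supersedes any older one for this flow
            idx = latest_pure_ack.get(pkt["flow"])
            if idx is not None:
                keep[idx] = None  # mark the older ACK for removal
            latest_pure_ack[pkt["flow"]] = len(keep)
        keep.append(pkt)
    return [p for p in keep if p is not None]

# Example: the first pure ACK is dropped, the newer ACK and the data segment remain.
queue = [
    {"flow": ("10.0.0.5", 5050, "10.0.0.1", 80), "ack_no": 1000, "payload": 0},
    {"flow": ("10.0.0.5", 5050, "10.0.0.1", 80), "ack_no": 2000, "payload": 0},
    {"flow": ("10.0.0.5", 5050, "10.0.0.1", 80), "ack_no": 2000, "payload": 512},
]
print(thin_acks(queue))
```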

Likewise, for the downlink, an effective Quality of Service (QoS) based scheduler can significantly improve performance. By using information about recent traffic trends, such a scheduler can serve each client intelligently, improving overall performance.
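For illustration, here is a minimal sketch of one such policy: a scheduler that prefers the backlogged client whose recent airtime usage, divided by an assumed QoS weight, is lowest. The class name, fields, and weighting rule are assumptions made for this example, not a description of any vendor’s implementation:

```python
class QosScheduler:
    """Toy QoS-aware downlink scheduler (illustrative sketch only)."""

    def __init__(self, alpha=0.2):
        self.alpha = alpha    # smoothing factor for the airtime average
        self.airtime = {}     # client -> exponentially averaged airtime (us)

    def next_client(self, backlog, weights):
        """Pick the backlogged client with the lowest weighted recent airtime.

        backlog: {client: queued_bytes}, weights: {client: qos_weight}
        """
        best, best_score = None, float("inf")
        for client, queued in backlog.items():
            if queued == 0:
                continue
            used = self.airtime.get(client, 0.0)
            score = used / weights.get(client, 1.0)   # lower = more deserving
            if score < best_score:
                best, best_score = client, score
        return best

    def record(self, client, airtime_us):
        """Update the running airtime average after serving a client."""
        prev = self.airtime.get(client, 0.0)
        self.airtime[client] = (1 - self.alpha) * prev + self.alpha * airtime_us

# Example: a voice client (weight 4) is preferred over a bulk client (weight 1)
# once the bulk client has consumed some airtime.
sched = QosScheduler()
sched.record("bulk", 5000)
print(sched.next_client({"bulk": 10_000, "voice": 2_000},
                        {"bulk": 1, "voice": 4}))   # -> "voice"
```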

Ultimately, however, the most crucial aspect is testing the network in high-density environments. Ideally, this testing should involve real clients and measure the performance of real applications such as browsing, video streaming, bulk throughput, Voice over IP, multicast streaming, and whatever else matters to the target user segment. This process, though, is time-consuming and requires considerable effort; emulated clients provide a viable compromise for the majority of test requirements.

Alethea Communications Technologies can support you in testing with both real clients and emulated clients. Please feel free to contact us at info@aletheatech.com to discuss your specific testing requirements.
