Critical Services Report: Video Conferencing (UK)

Since the start of the COVID-19 pandemic, regulators and ISPs have been asking us to help them understand the performance of critical applications. Today we're looking at recent video conferencing performance in the United Kingdom, and demonstrating our new video conferencing tests for Google Meet, Microsoft Teams, Zoom, Webex, Skype, and GoToMeeting.

Introduction

Since the start of the COVID-19 pandemic, we have been receiving requests from regulators and internet service providers (ISPs) wishing to understand the impact of increased internet usage on end-to-end performance of critical applications. With our large range of network and application performance measurements, we’ve been well placed to answer these questions.

However, one category of applications we did not support was video conferencing. With everyone suddenly working from home, this stood out as an obvious gap in our capabilities, so we set out to rectify this.

We now support measurements for Google Meet, Webex, Microsoft Teams, Zoom, Skype, and GoToMeeting.

Developing the video conferencing measurements

Almost all modern video conferencing services use a traditional client/server model: all video/audio traffic is funnelled through the servers of the video conferencing service in question.

Minimal latency and packet loss to the relay servers of the video conferencing services is therefore essential to call quality, and these are precisely the metrics that we measure.
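As a minimal sketch of how these two metrics can be derived from a batch of probe results (the sample values and the `summarise` helper are our own, for illustration; a lost probe is recorded as `None`):

```python
# Sketch: deriving mean latency and packet loss from a batch of probe
# results. A lost probe is recorded as None. Sample values are invented.

def summarise(rtts_ms):
    """Return (mean latency in ms, packet loss as a percentage)."""
    received = [r for r in rtts_ms if r is not None]
    if not received:
        return None, 100.0
    mean_latency = sum(received) / len(received)
    loss_pct = 100.0 * (len(rtts_ms) - len(received)) / len(rtts_ms)
    return mean_latency, loss_pct

samples = [21.0, 23.5, None, 22.0, 24.5, 21.5, 23.0, 22.5, 21.0, 23.0]
latency, loss = summarise(samples)
print(f"mean latency: {latency:.1f} ms, packet loss: {loss:.1f}%")
# → mean latency: 22.4 ms, packet loss: 10.0%
```

Aggregating many such batches per hour is what produces the hourly latency and loss figures shown later in this report.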

Most of the video conferencing services in question host their relay servers in locations around the globe. Some use anycast to steer users towards the closest servers, others use latency measurements in the client, and some use a round-robin approach.
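The latency-based strategy can be sketched in a few lines: probe each candidate relay and pick the one with the lowest round-trip time. The server names, RTT values, and the `pick_relay` helper below are invented for illustration, not any vendor's actual selection logic:

```python
# Sketch: latency-based relay selection. Probe every candidate relay
# and choose the one with the lowest round-trip time. Unreachable
# relays are recorded as None. All names and values are invented.

def pick_relay(probe_results):
    """probe_results: {server_name: rtt_ms, or None if unreachable}."""
    reachable = {s: rtt for s, rtt in probe_results.items() if rtt is not None}
    if not reachable:
        raise RuntimeError("no relay reachable")
    return min(reachable, key=reachable.get)

probes = {"relay-lon": 12.0, "relay-ams": 19.5, "relay-fra": 24.0, "relay-iad": 98.0}
print(pick_relay(probes))  # → relay-lon
```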

We studied the behaviour of each application in order to understand how it chose servers and how it exchanged video/audio traffic. We then validated our assumptions by provisioning virtual machines (using Amazon Workspaces) around the globe and carrying out side-by-side measurements. Our tests exchange UDP traffic with the relay servers of the video conferencing services. You can read more about the methodology here.
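The shape of such a UDP exchange can be illustrated with a self-contained probe. Here a loopback echo server stands in for a relay server; the real services speak their own UDP protocols, so this is only a sketch of the measurement loop, not our production test:

```python
# Sketch: a minimal UDP round-trip probe. A loopback echo server stands
# in for a video conferencing relay; real services use their own UDP
# protocols, so this only illustrates the shape of the measurement.
import socket
import struct
import threading
import time

def echo_server(sock):
    while True:
        data, addr = sock.recvfrom(64)
        if data == b"stop":
            return
        sock.sendto(data, addr)  # echo the probe back to the sender

server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))       # ephemeral port on loopback
addr = server.getsockname()
threading.Thread(target=echo_server, args=(server,), daemon=True).start()

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.settimeout(1.0)
rtts = []
for seq in range(5):
    sent = time.monotonic()
    client.sendto(struct.pack("!I", seq), addr)  # 4-byte sequence number
    try:
        client.recvfrom(64)
        rtts.append((time.monotonic() - sent) * 1000.0)  # RTT in ms
    except socket.timeout:
        rtts.append(None)  # counted as a lost packet
client.sendto(b"stop", addr)
print(f"{len([r for r in rtts if r is not None])}/5 replies received")
```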

Results from the UK

We deployed our new video conferencing measurements on Whiteboxes in the UK on April 24th. Data was sampled from the following 5 days of measurements.

Latency from UK Whiteboxes to the six video conferencing services measured, split by hour of day.

The chart above shows latency to the six different video conferencing services measured, split by hour of day. Latency to all the services is flat throughout the day, suggesting there are no significant capacity issues between UK broadband ISPs and these video conferencing services.

Most of the services have latency between 20ms and 30ms. Traceroutes to their IP addresses show hosting locations varying between the UK, Ireland, Amsterdam and Frankfurt. These are fairly typical hosting locations for serving a western European market.

The one exception in the chart above is Zoom, which sees average latencies of around 135ms. We found that Zoom was using audio/video relay servers in the US. In fact, we found that this was true for every country that we tested from – not just the UK. Zoom are currently relaying all audio/video traffic via their servers in the US, regardless of the country the user is located in. This is perhaps in response to the recent security concerns that have been levelled at Zoom, particularly around their use of servers in China.

Screenshot of a Zoom client running in Sydney, showing a latency of 171ms. Wireshark, on the right, shows that video/audio traffic is being exchanged with 158.101.12.42, a server in Seattle, USA.

We were so suspicious of this finding that we started Zoom meetings in locations all around the world, and verified the latency as reported in the Zoom client itself and with Wireshark running side by side. The screenshot above shows a Zoom client in Sydney, Australia exchanging audio/video traffic with a Zoom server in Seattle, USA, with a latency of 171ms. In all cases we found that Zoom used servers in the USA. It is very impressive that Zoom can maintain good video streaming performance with such high latencies!

Latency from UK Whiteboxes to the six video conferencing services measured, split by ISP.

The chart above shows average latency to the video conferencing services for the four largest ISPs in the UK. As can be seen in the chart, there is no significant difference between any of the ISPs. Any differences in latency are driven by the video conferencing services themselves.

Lastly, and not charted here, we found that packet loss to all the video conferencing services was very low – typically under 0.1% for every hour of the day. 

The impact of Zoom's extra latency

Zoom’s decision to send all traffic to the USA has resulted in more than 100ms of additional latency versus their peers. But how does this extra latency affect user experience?

We carried out a simple experiment to find out.

The experiment involved two users in London, both on good-quality broadband connections with latencies of 10-15ms to major London datacentres. Video calls using both Zoom and Google Meet were established simultaneously between the two users. Our hypothesis was that Google Meet would show less delay in the video than Zoom, because Google Meet traffic stays within western Europe, whilst Zoom traffic has to go to the USA and back.

The video we captured of the two calls showed little difference between Zoom and Google Meet at first. However, if you looked closely you could see a slight delay to the Zoom video.

To quantify this delay, we stepped through the video frames when the person in the video was blinking. The input video was filmed at 25 frames per second, so each frame represents 40ms of playback. Since we know the duration each frame represents, we can quantify Zoom's additional video delay by counting the frames between Google Meet registering the blink and Zoom registering it.

We can see from the above that Zoom is three frames behind Google Meet. At 40ms per frame, that means that Zoom is 120ms delayed, which closely matches the additional network latency that Zoom experiences due to their routing of all traffic back to the USA.
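The frame-counting arithmetic above is simple enough to express directly:

```python
# Sketch: the frame-counting arithmetic used above. At 25 frames per
# second each frame represents 40 ms of playback, so the extra delay
# is the frame difference multiplied by the frame duration.

FPS = 25
frame_duration_ms = 1000 / FPS   # 40 ms per frame

frames_behind = 3                # Zoom lagged Google Meet by 3 frames
extra_delay_ms = frames_behind * frame_duration_ms
print(f"{extra_delay_ms:.0f} ms")  # → 120 ms
```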

Most people would be hard-pressed to spot the difference between calls using the two platforms, unless they were placed side by side or stepped through very slowly. However, these small differences may account for users accidentally talking over one another more frequently when using Zoom, and other similarly small frustrations that are due to increased delay. It is also worth noting that our test call used two users in London, both of whom had a latency of ~135ms to Zoom’s USA servers. Had we tried a Zoom call between two users in Australia or parts of Asia, then the difference would likely have been more significant.

Conclusion

Our new video conferencing measurements have shown that all video conferencing services are performing well in the UK, regardless of the hour of day or ISP in use. Perhaps the most interesting discovery here is that Zoom is now sending all their traffic to the USA, regardless of the location of the client. This has a very visible impact on the latency, with Zoom averaging 135ms whilst other services were all below 30ms.

These results are just a snapshot of five days of measurements, and we will continue to monitor these measurements to identify any future degradation.

Methodology

The charts presented here are taken from the SamKnows UK panel of Whiteboxes, between 24 April 2020 and 29 April 2020. The Whitebox is a small hardware measurement device installed in a user's home, which carries out automated performance tests over their broadband connection many times per day. The results here are derived from a sample of approximately 450 Whiteboxes running the video conferencing measurements every hour. Each video conferencing service was accessed with an account registered on the free plan offered by that service.

More detail about the methodology behind the video conferencing tests can be found here.