MQTT End-to-End Latency Measurement
Testing the latency of your MQTT broker? In this guide we explain MQTT protocol topologies and several tests focused on measuring latency of MQTT brokers.
MQTT brokers are the heart of a connected IoT application. And just as the functioning of the heart is critical for the human body, a reliable and performant MQTT broker is critical for IoT operations. The health of a human heart can be measured in average beats per minute, but how do you measure the performance of an MQTT broker? How do you tell reliable performance from poor performance? Two key metrics for measuring broker performance are end-to-end delivery latency and packet loss rate.
For those who are new to the MQTT protocol, an MQTT broker acts as a bridge connecting publishers and subscribers. Publishers send messages on certain topics, and subscribers can listen to any number of topics of interest. The MQTT broker latency consists of:
Time taken to establish a connection with the publisher (only in the case of dynamic connections)
Time taken to accept a message from the publisher
Time taken to distribute the message to all connected subscribers
We calculate the end-to-end latency as the sum of the three intervals above. When multiple subscribers subscribe to a common topic, we take the average of the latencies measured by all of the connected subscribers.
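Written out as a formula (a sketch of the calculation described above; the symbols t_conn, t_accept, t_dist, and N are names introduced here for illustration, not taken from the test template):

```latex
L_{e2e} = t_{conn} + t_{accept} + t_{dist},
\qquad
\bar{L} = \frac{1}{N} \sum_{i=1}^{N} L_{e2e}^{(i)}
```

where L_e2e^(i) is the latency measured by the i-th of the N connected subscribers.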
The calculation of MQTT communication latency can vary significantly depending on the communication topology. Let's consider the following topologies in MQTT latency testing:
Single publisher, multiple subscribers (1 to N)
Multiple publishers, single subscriber (N to 1)
Single publisher, single subscriber, high throughput (1 to 1)
Multiple publishers, multiple subscribers (N to N - random topics)
Loopback publisher/subscriber (N to N - loopback)
All of these topologies can then be further evaluated with QoS 0, 1, and 2. Note that the QoS setting ensures delivery of the message to the broker, but does not guarantee end-to-end delivery to the subscriber. To account for delivery failures (due to a subscriber disconnecting or a queue overflow), we will also add metrics to the test that count the total packets received across all clients.
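As a rough illustration of that distinction, here is a minimal publish loop using the Python paho-mqtt client (the topic name and the paho-mqtt 1.x style constructor are assumptions of this sketch, not part of the IOTIFY template):

```python
import paho.mqtt.client as mqtt

client = mqtt.Client()  # paho-mqtt 1.x style constructor, assumed here
client.connect("broker.hivemq.com", 1883)
client.loop_start()

# QoS governs delivery from the publisher to the broker only; a subscriber
# that disconnects or overflows its queue can still miss the message.
for qos in (0, 1, 2):
    info = client.publish("iotify/qos-demo", b"ping", qos=qos)
    info.wait_for_publish()  # returns once the broker has taken over the message

client.loop_stop()
client.disconnect()
```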
For this guide, let's start with the basic scenario of a single publisher and multiple subscribers (the 1 to N scenario). In this scenario, the MQTT broker under test receives a publish from a single client and replicates the received messages to all connected subscribers, including the original sender. The publishing client puts a timestamp in the outgoing message payload. Each receiver calculates the message's time in flight as the difference between its arrival timestamp and the sending timestamp in the payload. The measured latency is logged as a metric parameter and can be seen in the IOTIFY Metrics Dashboard.
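Outside of the IOTIFY template, the same measurement idea can be sketched in a few lines with the Python paho-mqtt client. The broker below is the public broker.hivemq.com used by the template; the topic name, payload format, and paho-mqtt 1.x style constructor are assumptions of this sketch. Since the publisher and subscriber share one process and clock here, clock skew is not an issue; across separate machines the clocks would need to be synchronized.

```python
import json
import time

import paho.mqtt.client as mqtt

BROKER = "broker.hivemq.com"   # public broker used by the template
TOPIC = "iotify/latency-demo"  # hypothetical topic for this sketch

def on_connect(client, userdata, flags, rc):
    client.subscribe(TOPIC)    # subscribe once the connection is acknowledged

def on_message(client, userdata, msg):
    # Subscriber side: time in flight = arrival time - timestamp in the payload
    sent_at = json.loads(msg.payload)["sent_at"]
    print(f"end-to-end latency: {(time.time() - sent_at) * 1000:.1f} ms")

sub = mqtt.Client()            # paho-mqtt 1.x style constructor
sub.on_connect = on_connect
sub.on_message = on_message
sub.connect(BROKER, 1883)
sub.loop_start()

pub = mqtt.Client()
pub.connect(BROKER, 1883)
pub.loop_start()

time.sleep(2)                  # give the subscription time to register
for _ in range(5):
    # Publisher side: embed the send timestamp in the outgoing payload
    pub.publish(TOPIC, json.dumps({"sent_at": time.time()}))
    time.sleep(1)
```

In the actual test, the same pattern is applied across many simulated clients, and the per-message latency is logged as the mqtt_0_latency metric instead of being printed.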
1. If you haven't already, create an account with IOTIFY. Trial account creation is free, and you don't need a credit card to sign up.
2. Import the following template into your IoTIFY workspace.
3. The template currently connects to broker.hivemq.com. You can change the MQTT broker settings in the protocol tab if required. Note that the broker must be publicly reachable; localhost brokers and private IPs are not supported.
4. Update the default run setting with the required number of clients. (We use 1,000 clients for this step.) Each client sends a message every 10 seconds, for 30 messages.
5. Run the imported MQTT test with the newly created run setting.
The status of the test will be visible in the Results tab. Once the test is finished, we can go to the Metrics page and plot the mqtt_0_latency parameter for the last run.
6. Now change the Run settings to run the test with more clients (10,000 clients in this case).
7. Let's plot the latency again with the new number of clients.
As we can see, the average latency increases from roughly 1700 ms with 1K clients to roughly 2500 ms with 10K clients. That's approximately a 50% increase for a 10x scale-up.
Let's also measure the packet loss in both cases by plotting the mqtt_0_rx parameter.
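From that count, the packet loss rate can be derived along these lines (a sketch; the symbol names are introduced here for illustration, with N_received being the total across all clients as reported by mqtt_0_rx):

```latex
\text{packet loss rate} = \frac{N_{expected} - N_{received}}{N_{expected}},
\qquad
N_{expected} = N_{published} \times N_{subscribers}
```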
Measuring the end-to-end delivery latency of an MQTT broker is important for benchmarking the scalability of your solution. As we saw in this experiment, a 10x scale-up in connected clients resulted in almost a 50% increase in end-to-end latency. The test can now be adapted to cover the other communication topologies.