Concurrent message queues in C++


Introduction

The best way to learn programming is to build the solution yourself. In this story I would like to share what I have learnt about concurrent message queues in C++. I will build several types of message queue, including:

  • Lock-free message queue
  • One-lock message queue
  • Two-lock message queue

with different data structures:

  • Queue
  • Linked list
  • Ring buffer

and with different mutex libraries (a quick comparison sketch follows this list):

  • Linux mutex
  • C++ standard library mutex
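
Here is a minimal sketch of what the two flavors look like side by side. I am assuming the "Linux mutex" means the pthread API, which is the usual choice on Linux; the code in the repo may differ in detail.

#include <mutex>
#include <pthread.h>

// Linux (pthread) mutex: manual lock/unlock calls.
pthread_mutex_t linux_lock = PTHREAD_MUTEX_INITIALIZER;

void touch_queue_pthread() {
    pthread_mutex_lock(&linux_lock);
    // ... read or write the shared queue state ...
    pthread_mutex_unlock(&linux_lock);
}

// C++ standard library mutex: RAII locking via std::lock_guard.
std::mutex std_lock;

void touch_queue_std() {
    std::lock_guard<std::mutex> guard(std_lock);
    // ... read or write the shared queue state ...
}   // guard unlocks automatically here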

Lock free

I made a simplified version: a concurrent message queue that supports only two threads, one pushing messages and one popping them (in other words, a single-producer, single-consumer queue).
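
A minimal sketch of the idea, assuming a fixed-capacity ring buffer underneath; the class and member names here are mine, not necessarily the repo's.

#include <atomic>
#include <cstddef>

// Single-producer single-consumer lock-free queue.
// Safe only when exactly one thread calls push() and one calls pop().
template <typename T, size_t Capacity>
class SpscQueue {
    static_assert((Capacity & (Capacity - 1)) == 0, "capacity must be a power of 2");
    T buf_[Capacity];
    std::atomic<size_t> head_{0};  // next slot to read, advanced only by the consumer
    std::atomic<size_t> tail_{0};  // next slot to write, advanced only by the producer

public:
    bool push(const T& v) {
        size_t tail = tail_.load(std::memory_order_relaxed);
        size_t next = (tail + 1) & (Capacity - 1);
        if (next == head_.load(std::memory_order_acquire))
            return false;                                // full (one slot kept empty)
        buf_[tail] = v;
        tail_.store(next, std::memory_order_release);    // publish the slot to the consumer
        return true;
    }

    bool pop(T& out) {
        size_t head = head_.load(std::memory_order_relaxed);
        if (head == tail_.load(std::memory_order_acquire))
            return false;                                // empty
        out = buf_[head];
        head_.store((head + 1) & (Capacity - 1), std::memory_order_release);
        return true;
    }
};

The acquire/release pairing is what makes this safe without locks: the consumer only sees a new tail value after the element write is visible, and vice versa for the head.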

One lock

This is the simplest concurrent message queue. A single mutex is used, and the head pointer and tail pointer share the same lock.
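
A minimal sketch, assuming std::queue as the backing container and a condition variable so the consumer can block while the queue is empty; the repo's version may be structured differently.

#include <condition_variable>
#include <mutex>
#include <queue>

// One mutex guards both ends of the queue.
template <typename T>
class OneLockQueue {
    std::queue<T> q_;
    std::mutex m_;
    std::condition_variable cv_;

public:
    void push(T v) {
        {
            std::lock_guard<std::mutex> lock(m_);
            q_.push(std::move(v));
        }
        cv_.notify_one();  // wake a waiting consumer
    }

    T pop() {  // blocks until a message is available
        std::unique_lock<std::mutex> lock(m_);
        cv_.wait(lock, [this] { return !q_.empty(); });
        T v = std::move(q_.front());
        q_.pop();
        return v;
    }
};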

Two locks

There are two mutex locks in this message queue, one for the head and one for the tail. The hope is that this reduces lock contention between the producing and consuming threads.
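
A minimal sketch of the classic two-lock design: a dummy node separates the head from the tail, so push only takes the tail lock and pop works mostly under the head lock. This is close in spirit to the fine-grained queue in C++ Concurrency in Action (see the references); the repo's version may differ.

#include <memory>
#include <mutex>

// Two-lock queue: head_mutex_ serializes pops, tail_mutex_ serializes pushes.
// The dummy node keeps head and tail on different nodes whenever the
// queue is non-empty, so the two locks rarely contend.
template <typename T>
class TwoLockQueue {
    struct Node {
        std::shared_ptr<T> data;
        std::unique_ptr<Node> next;
    };
    std::unique_ptr<Node> head_;
    Node* tail_;
    std::mutex head_mutex_;
    std::mutex tail_mutex_;

public:
    TwoLockQueue() : head_(new Node), tail_(head_.get()) {}

    void push(T v) {
        auto data = std::make_shared<T>(std::move(v));
        auto dummy = std::make_unique<Node>();
        std::lock_guard<std::mutex> lock(tail_mutex_);
        tail_->data = data;            // fill the old dummy with the value
        tail_->next = std::move(dummy);
        tail_ = tail_->next.get();     // the new dummy becomes the tail
    }

    std::shared_ptr<T> try_pop() {
        std::lock_guard<std::mutex> head_lock(head_mutex_);
        {
            std::lock_guard<std::mutex> tail_lock(tail_mutex_);
            if (head_.get() == tail_) return nullptr;  // empty
        }
        auto data = head_->data;
        auto old_head = std::move(head_);
        head_ = std::move(old_head->next);
        return data;
    }
};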

Ring buffer

One small detail about the ring buffer: the buffer size should be a power of 2, so the index wrap-around can be computed with a cheap bitwise AND instead of a modulo operation.

int next_in_index = (current_index + 1) & (1024 - 1); // same as (current_index + 1) % 1024, but cheaper

For example, with a buffer size of 1024, index 1023 advances to (1023 + 1) & 1023 == 0, wrapping back to the start.

Experiment setup

Hardware

  • Intel(R) Core(TM) i5-6500 CPU @ 3.20GHz
  • 4 cores
  • 4 threads
  • Ubuntu 20.04 LTS

Test cases

  • 300,000 messages
  • Thread configurations:
    • 1 in, 1 out
    • 2 in, 2 out
    • 4 in, 4 out
    • 10 in, 10 out
  • Each configuration is run 5 times and the average time spent is reported (a sketch of the test driver follows this list)
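
To make the setup concrete, here is a minimal sketch of how one timing run could be driven, using the one-lock queue sketched above; the function name and structure are mine, and the actual test harness in the repo may differ.

#include <chrono>
#include <thread>
#include <vector>

// Push `total_msgs` messages through the queue with `n_in` producers and
// `n_out` consumers, and return the elapsed wall time in milliseconds.
// Assumes total_msgs is divisible by both thread counts.
template <typename Queue>
long long run_once(Queue& q, int n_in, int n_out, int total_msgs) {
    auto start = std::chrono::steady_clock::now();

    std::vector<std::thread> threads;
    for (int i = 0; i < n_in; ++i)
        threads.emplace_back([&] {
            for (int m = 0; m < total_msgs / n_in; ++m) q.push(m);
        });
    for (int i = 0; i < n_out; ++i)
        threads.emplace_back([&] {
            for (int m = 0; m < total_msgs / n_out; ++m) q.pop();
        });
    for (auto& t : threads) t.join();

    auto end = std::chrono::steady_clock::now();
    return std::chrono::duration_cast<std::chrono::milliseconds>(end - start).count();
}

With the 300,000-message workload, run_once would be called 5 times per configuration and the results averaged.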

Result

Throughput test

+----------------------------+------+---------+----------------+-----------+------------+-----------+------------+---------------+
|     message_queue_name     | lock | library | data_structure | in_thread | out_thread | total_msg | total_test | avg_time_msec |
+----------------------------+------+---------+----------------+-----------+------------+-----------+------------+---------------+
| MsgQLockFree_Linux         |    0 | linux   | ring buffer    |         1 |          1 |    300000 |          5 |            61 |
| MsgQTwoLock_RingBuf_Linux  |    2 | linux   | ring buffer    |         1 |          1 |    300000 |          5 |            88 |
| MsgQTwoLock_RingBuf_Std    |    2 | std c++ | ring buffer    |         1 |          1 |    300000 |          5 |            91 |
| MsgQLockFree_LinkList_Std  |    0 | std c++ | link list      |         1 |          1 |    300000 |          5 |            94 |
| MsgQOneLock_RingBuf_Linux  |    1 | linux   | ring buffer    |         1 |          1 |    300000 |          5 |           108 |
| MsgQOneLock_RingBuf_Std    |    1 | std c++ | ring buffer    |         1 |          1 |    300000 |          5 |           131 |
| MsgQTwoLock_LinkList_Linux |    2 | linux   | link list      |         1 |          1 |    300000 |          5 |           148 |
| MsgQTwoLock_LinkList_Std   |    2 | std c++ | link list      |         1 |          1 |    300000 |          5 |           169 |
| MsgQOneLock_Queue_Linux    |    1 | linux   | queue          |         1 |          1 |    300000 |          5 |           187 |
| MsgQOneLock_Queue_Std      |    1 | std c++ | queue          |         1 |          1 |    300000 |          5 |           191 |
| MsgQOneLock_LinkList_Linux |    1 | linux   | link list      |         1 |          1 |    300000 |          5 |           337 |
| MsgQOneLock_LinkList_Std   |    1 | std c++ | link list      |         1 |          1 |    300000 |          5 |           349 |
+----------------------------+------+---------+----------------+-----------+------------+-----------+------------+---------------+


The table above shows the results of the throughput test with 2 threads: one thread pushes 300,000 messages into the message queue while the other thread pops them out.

Here are some quick conclusions.

For data structure:

  • ring buffer > queue > linked list

For library:

  • Linux and the standard C++ library have similar performance

For lock:

  • lock free > 2 locks > 1 lock

The results match our expectations.

Congestion test

+-----------+------------+-----------+------------+---------------------------+---------------------------+-------------------------+-------------------------+----------------------------+----------------------------+--------------------------+--------------------------+-------------------------+-----------------------+
| in_thread | out_thread | total_msg | total_test | MsgQOneLock_RingBuf_Linux | MsgQTwoLock_RingBuf_Linux | MsgQOneLock_RingBuf_Std | MsgQTwoLock_RingBuf_Std | MsgQOneLock_LinkList_Linux | MsgQTwoLock_LinkList_Linux | MsgQOneLock_LinkList_Std | MsgQTwoLock_LinkList_Std | MsgQOneLock_Queue_Linux | MsgQOneLock_Queue_Std |
+-----------+------------+-----------+------------+---------------------------+---------------------------+-------------------------+-------------------------+----------------------------+----------------------------+--------------------------+--------------------------+-------------------------+-----------------------+
|         1 |          1 |    300000 |          5 |                       108 |                        88 |                     131 |                      91 |                        337 |                        148 |                      349 |                      169 |                     187 |                   191 |
|         2 |          2 |    300000 |          5 |                       132 |                       100 |                     154 |                     107 |                        346 |                        212 |                      359 |                      217 |                     206 |                   210 |
|         4 |          4 |    300000 |          5 |                       193 |                       126 |                     212 |                     137 |                        327 |                        302 |                      337 |                      302 |                     231 |                   247 |
|        10 |         10 |    300000 |          5 |                       195 |                       172 |                     205 |                     182 |                        361 |                        352 |                      367 |                      350 |                     244 |                   248 |
+-----------+------------+-----------+------------+---------------------------+---------------------------+-------------------------+-------------------------+----------------------------+----------------------------+--------------------------+--------------------------+-------------------------+-----------------------+

The table above shows the congestion test results; each cell is the average time in milliseconds (avg_time_msec) for the queue named in that column. I increase the number of threads pushing into and popping out of the message queue, with the thread count on each side ranging over 1, 2, 4, and 10. Here are the conclusions:

For number of locks:

  • 2 locks are better than 1 lock

For data structure:

  • ring buffer > queue > linked list

Code

Gitlab example

Reference

  • Anthony Williams, C++ Concurrency in Action: Practical Multithreading
  • kfifo (the Linux kernel's FIFO ring buffer implementation)
