~smlavine/testing-go-queues

Testing the efficiency of various Go queue implementations


I've been getting more comfortable with Go over the last month. For fun and practice, I had the idea to translate a breadth-first search problem from one of my CS courses out of C. For this I needed a queue. My instinct was to jerry-rig one from slices, and I began to do so, but I wondered whether there was anything better out there. After looking into it, I decided to test the efficiency of three ways of doing queues:

  • Slices (pop with e := s[0] and s = s[1:], push with s = append(s, n))
  • A linked list, using the container/list standard library package
  • The ring-buffer-based eapache/queue
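The push and pop operations for the first two approaches look roughly like this (a minimal sketch using the standard library only; eapache/queue is analogous, with Add to push and Remove to pop):

```go
package main

import (
	"container/list"
	"fmt"
)

func main() {
	// Slice-based queue: append to push, reslice to pop.
	s := []int{}
	s = append(s, 1, 2, 3) // push
	e := s[0]              // pop
	s = s[1:]
	fmt.Println(e, s) // 1 [2 3]

	// container/list: PushBack to push, Remove(Front()) to pop.
	l := list.New()
	l.PushBack(1)
	l.PushBack(2)
	front := l.Remove(l.Front()).(int)
	fmt.Println(front, l.Len()) // 1 1
}
```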

TL;DR: Use slices.

#Benchmark Results

See raw-results.txt for the unformatted output of the benchmark tests.
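The harness itself lives in the repository; a minimal sketch of the kind of loop being timed in the slice case (hypothetical names, not the repo's actual code) might look like:

```go
package main

import (
	"fmt"
	"time"
)

// runSliceQueue pushes n elements and then pops them all, summing
// the popped values so the work isn't optimized away. (Hypothetical
// sketch, not the repository's actual harness.)
func runSliceQueue(n int) int {
	q := make([]int, 0, n)
	for i := 0; i < n; i++ {
		q = append(q, i) // push
	}
	sum := 0
	for len(q) > 0 {
		sum += q[0] // pop
		q = q[1:]
	}
	return sum
}

func main() {
	start := time.Now()
	runSliceQueue(100000)
	fmt.Println("slice:", time.Since(start))
}
```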

1. n=10 (1000 iterations)
   list   0m0.012s
   queue  0m0.010s
   slice  0m0.010s
2. n=10, random insert (1000 iterations)
   list   0m0.022s
   queue  0m0.020s
   slice  0m0.021s
3. n=100 (1000 iterations)
   list   0m0.078s
   queue  0m0.076s
   slice  0m0.059s
4. n=100, random insert (1000 iterations)
   list   0m0.199s
   queue  0m0.188s
   slice  0m0.174s
5. n=1000 (1000 iterations)
   list   0m0.813s
   queue  0m1.086s
   slice  0m0.777s
6. n=1000, random insert (1000 iterations)
   list   0m2.572s
   queue  0m2.104s
   slice  0m2.024s
7. n=10000 (1000 iterations)
   list   0m10.676s
   queue  0m11.711s
   slice  0m8.803s
8. n=10000, random insert (1000 iterations)
   list   0m29.539s
   queue  0m24.285s
   slice  0m23.293s
9. n=100000
   list   0m0.074s
   queue  0m0.065s
   slice  0m0.063s
10. n=100000, random insert
    list   0m0.241s
    queue  0m0.205s
    slice  0m0.225s
11. n=1000000
    list   0m0.844s
    queue  0m0.684s
    slice  0m0.614s
12. n=1000000, random insert
    list   0m2.152s
    queue  0m1.916s
    slice  0m2.081s
13. n=10000000
    list   0m7.843s
    queue  0m6.955s
    slice  0m6.082s
14. n=10000000, random insert
    list   0m20.915s
    queue  0m19.353s
    slice  0m19.030s
15. n=100000000
    list   1m16.619s
    queue  1m6.998s
    slice  1m1.921s
16. n=100000000, random insert
    list   3m42.505s
    queue  3m13.185s
    slice  3m6.901s

#Summary

Across 16 tests:

  • Lists won zero 🏅, two 🥈 and fourteen 🥉
  • eapache's ring-buffer queue won four 🏅, ten 🥈 and two 🥉
  • Slices won thirteen 🏅, three 🥈 and zero 🥉

I conclude that unless you know better, you should use slices as a FIFO queue in Go.
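If you'd rather hide the slice bookkeeping behind a type, a thin wrapper is enough (a sketch assuming Go 1.18+ generics; not code from this repo):

```go
package main

import "fmt"

// Queue is a minimal FIFO built on a slice, per the conclusion above.
type Queue[T any] struct{ s []T }

// Push appends v to the back of the queue.
func (q *Queue[T]) Push(v T) { q.s = append(q.s, v) }

// Pop removes and returns the front element; ok is false if empty.
func (q *Queue[T]) Pop() (v T, ok bool) {
	if len(q.s) == 0 {
		return v, false
	}
	v = q.s[0]
	q.s = q.s[1:]
	return v, true
}

// Len reports the number of queued elements.
func (q *Queue[T]) Len() int { return len(q.s) }

func main() {
	var q Queue[int]
	q.Push(1)
	q.Push(2)
	v, _ := q.Pop()
	fmt.Println(v, q.Len()) // 1 1
}
```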

In addition to these performance statistics, I noticed while running the tests that the slice implementation was the least resource-intensive (memory and CPU), and the list implementation was the most. On the final test, with one hundred million initial elements and random inserts, the list implementation used about 8 GB of memory plus swap, the ring-buffer implementation about 5-6 GB, and the slice implementation about 3 GB.
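One caveat on slice memory worth knowing about (my note, not something measured in these tests): s = s[1:] keeps the whole backing array reachable, so when the queue holds pointers, popped elements aren't garbage-collected until the array itself is dropped. A common mitigation is to clear the slot before reslicing:

```go
package main

import "fmt"

func main() {
	q := []*int{}
	for i := 0; i < 3; i++ {
		v := i
		q = append(q, &v) // push pointers
	}

	// Pop, zeroing the slot so the backing array doesn't
	// pin the popped element for the garbage collector.
	e := q[0]
	q[0] = nil
	q = q[1:]
	fmt.Println(*e, len(q)) // 0 2
}
```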

#Further research

Buffered queues?


If you have any criticism or feedback on these tests, please send me an email at ~smlavine/public-inbox@lists.sr.ht.