dalvrosa 2 days ago [-]
From 12M ops/s to 305M ops/s on a lock-free ring buffer.
In this post, I walk you step by step through implementing a single-producer single-consumer queue from scratch.
This pattern is widely used to share data between threads in the lowest-latency environments.
loeg 20 hours ago [-]
Your blog footer mentions that code samples are GPL unless otherwise noted. You don't seem to note otherwise in the article, so -- do you consider these snippets GPL licensed?
dalvrosa 20 hours ago [-]
Actually, I'm not sure. The GPL was for the source code of the website itself; I guess the code samples inside the post are under https://david.alvarezrosa.com/LICENSE
But feel free to ping me if you need a different license, I'm quite open about it
ramon156 19 hours ago [-]
Something to add to this: if you're focusing on these low-level optimizations, make sure the device this code runs on is actually tuned.
A lot of people focus on the code and then assume the device in question is only there to run it. There's so much you can tweak. I don't always measure it, but last time I saw at least a 20% improvement in network throughput just by tweaking a few things on the machine.
hansvm 13 hours ago [-]
That reminds me of one of the easiest big wins I've had in my career. SystemD was causing issues, so I slapped in Gentoo with the real-time kernel patch. Peak latency (practically speaking, the only core metric we cared about -- some control loop doing a bunch of expensive math and interacting with real hardware) went down 5000x.
That specific advice isn't terribly transferable (you might choose to hack up SystemD or some other components instead, maybe even the problem definition itself), but the general idea of measuring and tuning the system running your code is solid.
kajaktum 11 hours ago [-]
What do you think is causing the issue? We are having the same kind of problem. Core isolation, nohz, core pinning, but I am still getting interrupted by NMIs.
hansvm 10 hours ago [-]
Details depend, but the root cause is basically the same every time: your hardware is designed to do something other than what you want it to do. It might be close enough that you want to give it a shot anyway (often works, often doesn't), but solutions can be outside of the realm of what's suitable for a "prod-ready" service.
If you're experiencing NMIs, the solution is simple if you don't care about the consequences: find them and remove them (ideally starting by finding what's generating them and verifying you don't need it). Disable the NMI watchdog, disable the PMU, disable PCIe Error Reporting (probably check dmesg and friends first to ensure your hardware is behaving correctly and fix that if not), disable anything related to NMIs at the BIOS/UEFI/IPMI/BMC layers, register a kernel module to swallow any you missed in your crusade, and patch the do_nmi() implementation with something sane for your use case in your custom kernel (there be dragons here, those NMIs obviously exist for a reason). It's probably easier to start from the ground up adding a minimal set of software for your system to run than to trim it back down, but either option is fine.
Are you experiencing NMIs though? You might want to take a peek at hwlatdetect and check for SMIs or other driver/firmware issues, fixing those as you find them.
It's probably also worth double-checking that you don't have any hard or soft IRQs being scheduled on your "isolated" core, that no RCU housekeeping is happening, etc. Make sure you pre-fault all the memory your software uses, no other core maps memory or changes page tables, power scaling is disabled (at least the deep C-states), you're not running workloads prone to thermal issues (1000W+ in a single chip is a lot of power, and it doesn't take much full-throttle AVX512 to heat it up), you don't have automatic updates of anything (especially not microcode or timekeeping), etc.
Also, generally speaking, your hardware can't actually multiplex most workloads without side effects. Abstractions letting you pretend otherwise are making compromises somewhere. Are devices you don't care about creating interrupts? That's a problem. Are programs you don't care about causing cache flushes? That's a problem. And so on. Strip the system back down to the bare minimum necessary to do whatever it is you want to do.
As to what SystemD is doing in particular? I dunno, probably something with timer updates, microcode updates, configuring thermals and power management some way I don't like, etc. I took the easy route and just installed something sufficiently minimalish and washed my hands of it. We went from major problems to zero problems instantly and never had to worry about DMA latency again.
What else could be improved? Would like to learn :)
Maybe using huge pages?
dijit 16 hours ago [-]
Kernel tick rate is a pretty big one; most people don't bother and use what their OS ships with.
Disabling c-states, pinning network interfaces to dedicated cores (and isolating your application from those cores) and `SCHED_FIFO` (chrt -f 99 <prog>) helps a lot.
Transparent hugepages increase latency without you being aware of when it happens; I usually disable them.
Idk, there's a bunch but they all depend on your use case. For example I always disable hyperthreading because I care more about latency than processing power, and I don't want to steal cache from my workload randomly. But some people have more I/O-bound workloads, and hyperthreading is just a strict improvement in those situations.
dalvrosa 16 hours ago [-]
Thanks. Do you happen to know why hyperthreading should be disabled?
In prod, most trading companies do disable it; not sure about best practices for generic benchmarks.
dijit 16 hours ago [-]
It eliminates cache contention between siblings, which leads to increased latency (randomly)
dalvrosa 2 hours ago [-]
Thanks!
jeffbee 15 hours ago [-]
There are some microarchitectural resources that are either statically divided between running threads, or "cooperatively" fought over, and if you don't need to hide cache miss latency, which is the only thing hyperthreading is really good at, you're probably better off disabling the supernumerary threads.
dalvrosa 2 hours ago [-]
Thanks for the explanation :)
kevincox 20 hours ago [-]
Random idea: if you have a known sentinel value for empty, could you avoid the reader needing to read the writer's index? Just try to read: if it is empty, the queue is empty; otherwise take the item and put an empty value there. Similarly for writing: check the value, and if it isn't empty, the queue is full.
It seems that in this case as you get contention the faster end will slow down (as it is consuming what the other end just read) and this will naturally create a small buffer and run at good speeds.
The hard part is probably that sentinel and ensuring that it can be set/cleared atomically. In Rust you can use `Option<T>` to get a sentinel for any type (and it very often doesn't take any space), but I don't think there is an API to atomically set/clear that flag. (Technically I think this is always possible because the sentinel that Option picks will always be small even if the T is very large, but I don't think there is an API for this.)
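As a sketch, the idea works out especially cleanly for pointer payloads, where nullptr is a free sentinel. A hedged C++ illustration (the class and member names here are made up, not from the article):

```cpp
#include <atomic>
#include <cstddef>

// Sentinel-slot SPSC queue for pointer payloads, where nullptr is a natural
// "empty" value. Illustrative sketch; names are not from the article.
template <typename T, std::size_t N>
class SentinelQueue {
    static_assert((N & (N - 1)) == 0, "N must be a power of two");
    std::atomic<T*> slots_[N] = {};  // value-initialized: every slot empty
    std::size_t head_ = 0;           // producer-private, never read by consumer
    std::size_t tail_ = 0;           // consumer-private, never read by producer

public:
    // Producer: fails if the next slot is still occupied (queue full).
    bool push(T* item) {
        std::atomic<T*>& slot = slots_[head_ & (N - 1)];
        if (slot.load(std::memory_order_acquire) != nullptr)
            return false;
        slot.store(item, std::memory_order_release);
        ++head_;
        return true;
    }

    // Consumer: returns nullptr if the next slot is empty (queue empty).
    T* pop() {
        std::atomic<T*>& slot = slots_[tail_ & (N - 1)];
        T* item = slot.load(std::memory_order_acquire);
        if (item == nullptr)
            return nullptr;
        slot.store(nullptr, std::memory_order_release);
        ++tail_;
        return item;
    }
};
```

Each index is private to one side, so only the slot contents need to be atomic; the full/empty checks fall out of the sentinel test, as described above.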
loeg 19 hours ago [-]
Yeah, or you could put a generation number in each slot adjacent to T and a read will only be valid if the slot's generation number == the last one observed + 1, for example. But ultimately the reader and writer still need to coordinate here, so we're just shifting the coordination cache line from the writer's index to the slot.
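To sketch the generation-number variant: this is essentially the sequence scheme from Dmitry Vyukov's bounded MPMC queue, narrowed to SPSC. All names below are illustrative, not from the article:

```cpp
#include <atomic>
#include <cstddef>
#include <cstdint>

// Each slot carries a sequence number next to the value. The producer bumps
// it after writing; the consumer only accepts a slot whose sequence is one
// ahead of its position, and "frees" it by advancing the sequence a full lap.
template <typename T, std::size_t N>
class SeqSlotQueue {
    static_assert((N & (N - 1)) == 0, "N must be a power of two");
    struct Slot {
        std::atomic<std::uint64_t> seq{0};
        T value{};
    };
    Slot slots_[N];
    std::uint64_t head_ = 0;  // producer-private
    std::uint64_t tail_ = 0;  // consumer-private

public:
    SeqSlotQueue() {
        // Slot i starts "free for lap 0" by carrying sequence i.
        for (std::size_t i = 0; i < N; ++i)
            slots_[i].seq.store(i, std::memory_order_relaxed);
    }

    bool push(const T& v) {
        Slot& s = slots_[head_ & (N - 1)];
        if (s.seq.load(std::memory_order_acquire) != head_)
            return false;  // consumer hasn't freed this slot yet: full
        s.value = v;
        s.seq.store(head_ + 1, std::memory_order_release);  // publish
        ++head_;
        return true;
    }

    bool pop(T& out) {
        Slot& s = slots_[tail_ & (N - 1)];
        if (s.seq.load(std::memory_order_acquire) != tail_ + 1)
            return false;  // producer hasn't published this slot yet: empty
        out = s.value;
        s.seq.store(tail_ + N, std::memory_order_release);  // free for next lap
        ++tail_;
        return true;
    }
};
```

As noted, the coordination cache line moves from the writer's index to the slot itself, so contention only appears when the two ends are close together.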
kevincox 19 hours ago [-]
I think the key difference is that they only need to coordinate when the reader and writer are close together. If that slows one end down they naturally spread apart. So you don't lose throughput, only a little latency in the contested case.
loeg 19 hours ago [-]
> I think the key difference is that they only need to coordinate when the reader and writer are close together.
This was already the case with the cached index design at the end of the article, though. (Which doesn't require extra space or extra atomic stores.)
kevincox 18 hours ago [-]
That's a good point. They are very similar. I guess the sentinel design in theory doesn't need to synchronize at all as long as there is a decent buffer between them. But the cached design synchronizes less commonly the more space there is which sounds like it would be very similar in practice. The sentinel design might also have a few thrashing issues when the reader and writer are on the same page which would probably be a bit less of an issue with the cached index design.
mikhmha 13 hours ago [-]
Lock-free ring buffer is my favorite data structure. I remember implementing it in C++ and then using a legitimate implementation in the form of boost::lockfree::spsc_queue for prod. The idea is so simple. And then I started thinking about designing some programming language or framework around the concept, only to then stumble upon the idea of "message passing" for concurrency. Which of course led me to learn about Erlang.
And then I went down the Erlang rabbit hole. It might have been a mistake...I made more money doing C++.
dalvrosa 2 hours ago [-]
Lol. Funny story :)
pixelpoet 19 hours ago [-]
Great article, thanks for sharing. And such a lovely website too :)
dalvrosa 19 hours ago [-]
Thanks for the feedback <3
erickpintor 19 hours ago [-]
Great post!
Would you mind expanding on the correctness guarantees enforced by the atomic semantics used? Are they ensuring two threads can't push to the same slot nor pop the same value from the ring? This type of atomic coordination usually comes from CAS or atomic increment calls, which I'm not seeing, so I'm interested in hearing your take on it.
erickpintor 19 hours ago [-]
I see you replied on comment below with:
> note that there are only one consumer and one producer
That clarifies things, as you don't need multi-thread coordination on reads or writes if you assume a single producer and a single consumer.
dalvrosa 19 hours ago [-]
Exactly, that's right
dalvrosa 19 hours ago [-]
Thanks! That's not ensured, optimizations are only valid due to the constraints
- One single producer thread
- One single consumer thread
- Fixed buffer capacity
So to answer
> Are they ensuring two threads can't push to the same slot nor pop the same value from the ring?
No need for this usecase :)
loeg 19 hours ago [-]
This is a SPSC queue -- there aren't multiple writers to coordinate, nor readers. It simplifies the design.
nitwit005 15 hours ago [-]
It would be nice to have an example use case where the technique would show a benefit.
It seems relatively rare to have a single producer and a single consumer thread, and for it to be worth polling a ring buffer.
ohazi 13 hours ago [-]
I use my own very similar version of this spsc lock-free ring buffer on almost every embedded project I work on that has to stream any sort of sampled data (e.g. audio). You can even have the consumer end be a DMA into something like a UART or USB peripheral so your microcontroller userspace doesn't have to touch the hardware.
Blackthorn 20 hours ago [-]
I had what I thought was a pretty good implementation, but I wasn't aware of the cache line bouncing. Looks like I've got some updates to make.
dalvrosa 20 hours ago [-]
Glad that it helps :)
sanufar 20 hours ago [-]
Super fun, def gonna try this on my own time later
dalvrosa 20 hours ago [-]
Feel free to share your findings
brcmthrowaway 15 hours ago [-]
Is there a C library from which I can get these data structures for free?
loeg 12 hours ago [-]
ConcurrencyKit ck_ring. The SPSC macros are the most similar to this article: https://github.com/concurrencykit/ck/blob/master/include/ck_...
Random q: What was the first CPU to support atomic instructions?
jeffbee 15 hours ago [-]
I don't know but the IBM 360 and the DEC PDP-10 both had them. Those are the earliest systems I ever saw.
jeffbee 15 hours ago [-]
It's lock-free because it uses ordered loads and stores, which is also how you implement locks. I find the semantic distinction unconvincing. The post is really about how slow the default STL mutex implementation is.
loeg 12 hours ago [-]
There are real practical implications of both the producer and consumer mutating the same cache line to take a lock that is fundamentally avoided by this "lock-free" design. It isn't meaningless.
jeffbee 11 hours ago [-]
That only explains the last stage. In order to steelman the mutex alternative, everything before "further optimization" should have used 2 critical sections. That would give a realistic baseline.
loeg 9 hours ago [-]
I don’t understand what you’re getting at.
kristianp 2 days ago [-]
This is in C++, other languages have different atomic primitives.
smj-edison 20 hours ago [-]
Don't most people use C++11 atomics now? You have SeqCst, Release, Acquire, and Relaxed (with Consume deprecated due to the difficulty of implementing it). You can do loads, stores, and exchanges with each ordering type. Zig, Rust, and C all use the same orderings. I guess Java has its own memory model since it's been around a lot longer, but most people have standardized around C++'s design.
Which is a slight shame since Load-Linked/Store-Conditional is pretty cool, but I guess that's limited to ARM anyways, and now they've added extensions for CAS due to speed.
superxpro12 20 hours ago [-]
I've taken an interest in lock-free queues for ultra-low power embedded... think Cortex-m0, or even avr/pic.
Things get interesting when you're working with a CPU that lacks the ldrex/strex assembly instructions that make this all work. I think your only options at that point are to disable/enable interrupts. If anyone has any insights into this constraint, I'd love to hear it.
loeg 19 hours ago [-]
For ultra low-power embedded, wouldn't a mutex approach work just fine? You're running on a single core anyway.
dalvrosa 16 hours ago [-]
I'm not sure about the single-core scenario, but would love to learn if someone else wants to add something
In reality, multiple threads on a single core doesn't make much sense, right?
loeg 12 hours ago [-]
> In reality, multiple threads on a single core doesn't make much sense, right?
Not necessarily, I think -- depends what you're doing.
loeg 20 hours ago [-]
LL/SC is still hinted at in the C++11 model with std::atomic<T>::compare_exchange_weak: https://en.cppreference.com/w/cpp/atomic/atomic/compare_exch...
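A small example of why compare_exchange_weak is loop-shaped: it is allowed to fail spuriously, exactly as an LL/SC store-conditional may. Sketch of a hypothetical fetch-multiply, an RMW operation with no dedicated instruction (the function name is mine):

```cpp
#include <atomic>

// Multiply an atomic int by `factor` and return the value observed just
// before the multiply. On LL/SC machines each loop iteration compiles to a
// load-linked/store-conditional pair; the "weak" form may fail spuriously,
// which is why it lives in a retry loop.
int atomic_fetch_multiply(std::atomic<int>& x, int factor) {
    int expected = x.load(std::memory_order_relaxed);
    while (!x.compare_exchange_weak(expected, expected * factor,
                                    std::memory_order_acq_rel,
                                    std::memory_order_relaxed)) {
        // On failure, `expected` has been reloaded with the current value;
        // just recompute and retry.
    }
    return expected;
}
```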
Really? Pretty much all atomics I've used have load and store of various integer sizes. I wrote a ring buffer in Go that's very similar to the final design here, using similar atomics: https://pkg.go.dev/sync/atomic#Int64
Nice one, thanks for sharing. Do you wanna share the ring buffer code itself?
wat10000 20 hours ago [-]
They generally map directly to concepts in the CPU architecture. On many architectures, load/store instructions are already guaranteed to be atomic as long as the address is properly aligned, so atomic load/store is just a load/store. Non-relaxed ordering may emit a variant load/store instruction or a separate barrier instruction. Compare-exchange will usually emit a compare and swap, or load-linked/store-conditional sequence. Things like atomic add/subtract often map to single instructions, or might be implemented as a compare-exchange in a loop.
The exact syntax and naming will of course differ, but any language that exposes low-level atomics at all is going to provide a pretty similar set of operations.
dalvrosa 19 hours ago [-]
100% agree +1
jitl 18 hours ago [-]
yeah that’s why i was surprised by grandparent saying the atomics were c++ specific
blacklion 19 hours ago [-]
The JVM has almost the same (the C++ memory model was modeled after the JVM one, with some subtle fixes).
dalvrosa 2 days ago [-]
Yeah, this is quite specific to C++ (at a syntax level)
amluto 20 hours ago [-]
Huh? Other languages that compile to machine code and offer control over struct layout and access to the machine's atomics will work the same way.
Sure, C++ has a particular way of describing atomics in a cross-platform way, but the actual hardware operations are not specific to the language.
dalvrosa 20 hours ago [-]
Yeah, different languages will have different syntaxes and ways of using atomics
But at the hardware level they're all kind of the same
JonChesterfield 20 hours ago [-]
It's obviously, trivially broken. Stores the index before storing the value, so the other thread reads nonsense whenever the race goes against it.
Also doesn't have fences on the store, has extra branches that shouldn't be there, and is written in really stylistically weird c++.
Maybe an llm that likes a different language more, copying a broken implementation off github? Mostly commenting because the initial replies are "best" and "lol", though I sympathise with one of those.
loeg 20 hours ago [-]
> It's obviously, trivially broken. Stores the index before storing the value, so the other thread reads nonsense whenever the race goes against it.
Are we reading the same code? The stores are clearly after value accesses.
> Also doesn't have fences on the store
?? It uses acquire/release semantics seemingly correctly. Explicit fences are not required.

    buffer_[head] = value;
    head_.store(next_head, std::memory_order_release);
    return true;
There's no relationship between the two written variables. Stores to the two are independent and can be reordered. The aq/rel applies to the index, not to the unrelated non-atomic buffer located near the index.
loeg 19 hours ago [-]
> There's no relationship between the two written variables. Stores to the two are independent and can be reordered. The aq/rel applies to the index, not to the unrelated non-atomic buffer located near the index.
No, this is incorrect. If you think there's no relationship, you don't understand "release" semantics. https://en.cppreference.com/w/cpp/atomic/memory_order.html
> A store operation with this memory order performs the release operation: no reads or writes in the current thread can be reordered after this store. All writes in the current thread are visible in other threads that acquire the same atomic variable (see Release-Acquire ordering below) and writes that carry a dependency into the atomic variable become visible in other threads that consume the same atomic (see Release-Consume ordering below).
blacklion 19 hours ago [-]
A write with release semantics cannot be reordered with any other writes, dependent or not.
Relaxed atomic writes can be reordered in any way.
loeg 19 hours ago [-]
> write with release semantic cannot be reordered with any other writes, dependent or not.
To quibble a little bit: later program-order writes CAN be reordered before release writes. But earlier program-order writes may not be reordered after release writes.
> Relaxed atomic writes can be reordered in any way.
To quibble a little bit: they can't be reordered with other operations on the same variable.
blacklion 18 hours ago [-]
Yep, you are right, more precise, and precision is very important in this topic.
I stand corrected.
hrmtst93837 16 hours ago [-]
That's backwards: in C++, a release store to head_ and an acquire load of that same atomic do order the prior buffer_ write, even though the data and index live in different locations, so the consumer that sees the new head can't legally see an older value for that slot unless something else is racing on it separately. If this is broken, the bug is elsewhere.
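For reference, a minimal sketch of the release/acquire pairing being discussed. The names (buffer_, head_, kCapacity) mirror the quoted snippet but are assumptions, not the article's exact class:

```cpp
#include <atomic>
#include <cstddef>

constexpr std::size_t kCapacity = 1024;
int buffer_[kCapacity];
std::atomic<std::size_t> head_{0};

// Producer side: write the slot, then publish it.
void publish(std::size_t head, int value) {
    buffer_[head] = value;                             // (1) plain store
    head_.store(head + 1, std::memory_order_release);  // (2) release store
    // The release store forbids (1) from being reordered after (2): any
    // thread that acquire-loads head_ and observes head + 1 also observes
    // the write in (1).
}

// Consumer side: only touch a slot the acquire load has shown is published.
bool try_read(std::size_t tail, int& out) {
    if (head_.load(std::memory_order_acquire) == tail)
        return false;  // nothing new published yet
    out = buffer_[tail];  // safe: happens-after the matching release store
    return true;
}
```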
dalvrosa 20 hours ago [-]
Sorry, but that's not actually true. There are no data races, the atomics prevent that (note that there are only one consumer and one producer)
Regarding the style, it follows the "almost always auto" idea from Herb Sutter
secondcoming 20 hours ago [-]
If you enforce that the buffer size is a power of 2, you can just use a mask to do the

    if (next_head == buffer.size())
        next_head = 0;

part.
JonChesterfield 20 hours ago [-]
If it's a power of two, you don't need the branch at all. Let the unsigned index wrap.
loeg 19 hours ago [-]
You ultimately need a mask to access the correct slot in the ring. But it's true that you can leave unmasked values in your reader/writer indices.
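A sketch of the free-running-index trick (names and the capacity are illustrative):

```cpp
#include <cstddef>
#include <cstdint>

// head and tail count pushes/pops forever and are only masked when touching
// the ring. With a power-of-two capacity, the mask, the full/empty tests,
// and unsigned wraparound all stay consistent.
constexpr std::uint64_t kN = 8;  // capacity, must be a power of two
static_assert((kN & (kN - 1)) == 0, "capacity must be a power of two");

std::uint64_t head = 0;  // total pushes ever (never masked, allowed to wrap)
std::uint64_t tail = 0;  // total pops ever

std::size_t slot_of(std::uint64_t index) {
    return static_cast<std::size_t>(index & (kN - 1));  // mask only on access
}
bool full()  { return head - tail == kN; }  // correct even across wraparound
bool empty() { return head == tail; }
```

Unsigned subtraction keeps head - tail correct even after the counters wrap, which is why the branch can be dropped entirely.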
dalvrosa 19 hours ago [-]
Interesting, I've never heard of anybody using this. Maybe a bit unreadable? But yeah, it should work :)