
~mht/cmr

32abf2d5 — Martin Hafskjold Thoresen 4 years ago master
Add license
f119b37b — Martin Hafskjold Thoresen 6 years ago
Rewrite of the README.

Should add perf plots right in here, and fill out some of the sections.
f7a9c8fc — Martin Hafskjold Thoresen 6 years ago
Remove wrong data from lngrad 80 raw
dc00ad67 — Martin Hafskjold Thoresen 6 years ago
Correct naming of dasquad benches
44f8d0f3 — Martin Hafskjold Thoresen 6 years ago
dasquad graphs as well
108fa188 — Martin Thoresen 6 years ago
Prepend raw files with hostname
c3640b46 — Martin Hafskjold Thoresen 6 years ago
Benchmarking stuff.

Just check in raw data, generated PDFs, scripts, everything >:)
145f4bd8 — Martin Hafskjold Thoresen 6 years ago
I should probably start double-checking git before pushing
bd1c5beb — Martin Hafskjold Thoresen 6 years ago
Add README.md for the data stuff
629f6cb4 — Martin Hafskjold Thoresen 6 years ago
Add the generated graph as well
05920ea3 — Martin Hafskjold Thoresen 6 years ago
Add plot generating scripts and some data
24763305 — Martin Hafskjold Thoresen 6 years ago
It turns out that `free` also does internal locking

Who knew :)
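In case the implication is unclear, here is a rough sketch of the consequence
(the wrapper type and the idea that this is how it was handled are my
assumptions, not the actual CMR code): if `free` takes allocator-internal
locks too, then `dealloc` needs the same "inside the allocator" guard that
`alloc` gets.

```rust
use std::alloc::{GlobalAlloc, Layout, System};

// Hypothetical wrapper over the system allocator; the guard itself is elided.
struct GuardedAlloc;

unsafe impl GlobalAlloc for GuardedAlloc {
    unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
        // set the per-thread "in allocator" flag here (elided)
        System.alloc(layout)
    }

    unsafe fn dealloc(&self, ptr: *mut u8, layout: Layout) {
        // `free` may block on allocator-internal locks as well, so it needs
        // the same protection (elided) before calling through.
        System.dealloc(ptr, layout)
    }
}

#[global_allocator]
static GLOBAL: GuardedAlloc = GuardedAlloc;

fn main() {
    let v = vec![1u8, 2, 3]; // goes through the guarded alloc/dealloc
    assert_eq!(v.len(), 3);
}
```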
ca4e4182 — Martin Hafskjold Thoresen 6 years ago
Fix the alloc lock. Closes #10

It's crazy that this ever worked: ever since we added the padding to the
per-thread bool signaling that the thread is allocating, the lock has been
broken. Instead of writing to the bool, threads would write to the first byte
of the padding rather than to the actual `AtomicBool` in the padded struct.
The locking would correctly look at the `AtomicBool`, which would now always
be `false`.
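A minimal sketch of that kind of layout mix-up, with hypothetical struct and
field names rather than the actual CMR code: a write aimed at the struct's
base address lands in the padding, while the lock reads the real `AtomicBool`
field.

```rust
use std::sync::atomic::{AtomicBool, Ordering};

// The flag lives inside a cache-line padded struct, and the padding happens
// to come before the `AtomicBool` in memory.
#[repr(C)]
struct PaddedFlag {
    _pad: [u8; 63],   // padding against false sharing
    flag: AtomicBool, // the actual "thread is allocating" flag
}

fn main() {
    let p = PaddedFlag { _pad: [0; 63], flag: AtomicBool::new(false) };

    // Buggy assumption: "the flag is at the start of the struct".
    let base = &p as *const PaddedFlag as usize;
    let field = &p.flag as *const AtomicBool as usize;
    assert_ne!(base, field); // a write through `base` lands in `_pad[0]`

    // What the lock reads is the real field, which never changed.
    assert!(!p.flag.load(Ordering::SeqCst));
}
```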
a964674a — Martin Hafskjold Thoresen 6 years ago
Upgrade compiler version
7c61834a — Martin Hafskjold Thoresen 6 years ago
Write comments.
5b585576 — Martin Hafskjold Thoresen 6 years ago
Add fences around the allocator wrappers

I don't see why this should be necessary, but threads get stuck deep inside
jemalloc while they are being signaled, which is exactly what the alloc wrappers
and lock things are supposed to prevent.

The disappearance of the deadlock isn't 100% confirmed yet, but I'm committing
in order to run it on the server (where it was more prevalent than on my laptop).
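For illustration, a rough sketch of the wrapper-plus-fences idea, assuming a
per-thread `AtomicBool` that the signaling thread can observe (the names and
shape here are made up, not the actual CMR wrappers):

```rust
use std::sync::atomic::{fence, AtomicBool, Ordering};

// The flag marks "currently inside the allocator"; the SeqCst fences keep the
// flag updates from being reordered with the allocation itself.
fn guarded_alloc<T>(allocating: &AtomicBool, do_alloc: impl FnOnce() -> T) -> T {
    allocating.store(true, Ordering::SeqCst);
    fence(Ordering::SeqCst); // publish "in allocator" before entering jemalloc
    let out = do_alloc();
    fence(Ordering::SeqCst); // finish the allocation before clearing the flag
    allocating.store(false, Ordering::SeqCst);
    out
}

fn main() {
    let flag = AtomicBool::new(false);
    let v: Vec<u8> = guarded_alloc(&flag, || Vec::with_capacity(64));
    assert!(v.capacity() >= 64);
}
```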
7d2b8249 — Martin Hafskjold Thoresen 6 years ago
Set the THRESHOLD value to something closer to the average wrt. perf.
25287db8 — Martin Hafskjold Thoresen 6 years ago
Fix the bug!

The bug was that in `remove` we handed the node to crossbeam before actually
removing it from the list. This was fine in CMR, since we did reachability
from the head.

(Nodes could be marked as deleted in the list without being physically removed.)
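A simplified sketch of the fixed ordering, written against the current
crossbeam-epoch API with a hypothetical list node (not the actual CMR or
skiplist code): unlink first, defer destruction second.

```rust
use crossbeam_epoch::{self as epoch, Atomic, Owned, Shared};
use std::sync::atomic::Ordering;

// Hypothetical list node; the point is only the ordering of unlink vs. defer.
struct Node {
    value: u64,
    next: Atomic<Node>,
}

// Physically remove `cur`, which `pred.next` currently points to.
unsafe fn remove(pred: &Node, cur: Shared<'_, Node>, guard: &epoch::Guard) {
    let succ = cur.deref().next.load(Ordering::Acquire, guard);

    // 1) First make the node unreachable from the list...
    let _ = pred
        .next
        .compare_exchange(cur, succ, Ordering::AcqRel, Ordering::Acquire, guard);

    // 2) ...and only then hand it to the epoch GC. Deferring before the
    //    unlink lets memory be reclaimed while other threads can still
    //    reach the node.
    guard.defer_destroy(cur);
}

fn main() {
    let guard = &epoch::pin();
    let pred = Node { value: 0, next: Atomic::null() };
    pred.next.store(
        Owned::new(Node { value: 1, next: Atomic::null() }),
        Ordering::Release,
    );
    let cur = pred.next.load(Ordering::Acquire, guard);
    unsafe { remove(&pred, cur, guard) };
}
```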
37c55e55 — Martin Hafskjold Thoresen 6 years ago
Add some comments and debug stuff

We're still looping somehow. With only insert or only delete in the 80/10/10
benchmark we're fine, so there is probably some interleaving of those two
operations that triggers the bug. In addition, the looping stopped after the
`defer_unchecked` line was removed from `delete`, which suggests that some
address is being reused and that this causes the bug. But that would mean we
are reading freed memory, which makes it strange that the looping is so
consistent.

Maybe try to work backwards: there is a loop. How could a loop be formed in
`delete` or `insert`? Which invariants do we have in the list? Etc.
03d1605b — Martin Hafskjold Thoresen 6 years ago
Port over list tests from data-structures to crossbeam-skiporder

Tests are running fine? We're still looping in the 80/10/10 benchmark
using cbo::hashmap