Looking for something else in the code, I happened to notice that temporary indexes in jSuneido were using a merge tree. Was that really the best data structure to use? Presumably it was better than what I used prior to that, but it didn't seem optimal.
A merge tree (at least my flavor) uses levels that double in size. Some levels may be empty. Each level is kept independently sorted. Items are inserted at the root and when the first levels fill up, they are merged into the next bigger level.
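The level structure can be sketched as follows. This is a minimal illustration of the idea, not jSuneido's actual code: level i is either empty or holds a sorted array of 2^i items, and an insert cascades upward, merging full levels as it goes.

```java
import java.util.Arrays;

// Minimal merge tree sketch: levels[i] is null or a sorted array of 2^i items.
public class MergeTree {
    private final int[][] levels = new int[32][];

    public void add(int x) {
        int[] merged = { x };
        for (int i = 0; ; i++) {
            if (levels[i] == null) {
                levels[i] = merged; // found an empty level, done
                return;
            }
            merged = merge(levels[i], merged); // merge and keep cascading
            levels[i] = null;
        }
    }

    // standard two-way merge of sorted arrays
    private static int[] merge(int[] a, int[] b) {
        int[] out = new int[a.length + b.length];
        int i = 0, j = 0, k = 0;
        while (i < a.length && j < b.length)
            out[k++] = a[i] <= b[j] ? a[i++] : b[j++];
        while (i < a.length) out[k++] = a[i++];
        while (j < b.length) out[k++] = b[j++];
        return out;
    }

    // merge all the levels into one sorted array (e.g. for iteration)
    public int[] toSortedArray() {
        int[] all = new int[0];
        for (int[] level : levels)
            if (level != null)
                all = merge(all, level);
        return all;
    }
}
```

Note how the cascading merges in `add` are where the repeated copying comes from: an item can be merged again each time its level fills up.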
Merge trees have a lot of good features:

- Fast (amortized) insertion
- Cache oblivious (i.e. don't need to be tuned for specific hardware)
- Cache friendly - mostly sequential access
- Optimal memory space (e.g. as opposed to array size doubling)
But they keep the data ordered all the time. That's necessary if you're interleaving reads and writes, but for my usage I insert all the data before reading any of it. And in order to keep the data sorted they do quite a bit of extra work. (e.g. a million items = 20 levels, which means on average each item is copied/merged about 19 times) And merge trees are write optimized; they're not as good at lookups or iteration.
Maybe a simple list with an explicit sort step would be better in this situation? But the usual array doubling list does a lot of allocation and copying, and wastes 25% of the space on average. And allocating large contiguous chunks of memory can be hard on memory management. It seemed like a better alternative would be a list of fixed-size blocks. (In the diagram the blocks are only two elements; in practice 4096 elements per block was a good size.) Iteration is simple, and lookups can do a simple binary search.
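A block list can be sketched like this. Again a simplified illustration rather than the actual implementation; the names are mine:

```java
import java.util.ArrayList;

// A list of fixed-size blocks: no huge contiguous allocations, no 25%
// average waste (at most one partially filled block, regardless of size).
public class BlockList {
    static final int BLOCKSIZE = 4096; // the size that worked well in practice
    private final ArrayList<int[]> blocks = new ArrayList<>();
    private int used = 0; // number of items in the last block

    public void add(int x) {
        if (blocks.isEmpty() || used == BLOCKSIZE) {
            blocks.add(new int[BLOCKSIZE]); // only ever allocate fixed-size blocks
            used = 0;
        }
        blocks.get(blocks.size() - 1)[used++] = x;
    }

    public int size() {
        return blocks.isEmpty() ? 0 : (blocks.size() - 1) * BLOCKSIZE + used;
    }

    public int get(int i) { // simple two-level indexing
        return blocks.get(i / BLOCKSIZE)[i % BLOCKSIZE];
    }

    // once the list is sorted, a lookup is an ordinary binary search:
    // the index of the first element >= x
    public int lowerBound(int x) {
        int lo = 0, hi = size();
        while (lo < hi) {
            int mid = (lo + hi) >>> 1;
            if (get(mid) < x) lo = mid + 1; else hi = mid;
        }
        return lo;
    }
}
```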
This has most of the features of a merge tree - fast insertion, cache oblivious/friendly, and low space overhead. In addition, it does less copying, never allocates huge blocks, and has faster iteration and lookups. And to top it off, the code is simpler :-)
As usual, I debated whether to go off on this tangent. But I was in between tasks, and I figured it wouldn't take long to implement enough to benchmark and decide whether to proceed.
To sort my block list I decided to use a regular sort on each block, and then merge the blocks. By using a series of two-way merges and recycling blocks, I could limit the extra space needed for merging to two blocks.
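The idea can be sketched like this, with two simplifying assumptions that the real code wouldn't need: every block is full, and `Integer.MAX_VALUE` doesn't appear in the data (it's used as an end-of-run sentinel here). Exhausted source blocks go onto a free list and get recycled as output blocks, which is what keeps the extra space bounded.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class BlockSort {
    // Sort each block individually, then merge sorted runs two at a time.
    static List<int[]> sort(List<int[]> blocks, int n) {
        for (int[] b : blocks)
            Arrays.sort(b);
        ArrayDeque<List<int[]>> runs = new ArrayDeque<>();
        for (int[] b : blocks) {
            List<int[]> run = new ArrayList<>();
            run.add(b); // each run starts as one sorted block
            runs.add(run);
        }
        ArrayDeque<int[]> free = new ArrayDeque<>(); // recycled blocks
        while (runs.size() > 1)
            runs.add(merge(runs.poll(), runs.poll(), n, free));
        return runs.isEmpty() ? new ArrayList<>() : runs.poll();
    }

    // Two-way merge of two sorted runs of full n-element blocks.
    static List<int[]> merge(List<int[]> a, List<int[]> b, int n, ArrayDeque<int[]> free) {
        List<int[]> out = new ArrayList<>();
        int alen = a.size() * n, blen = b.size() * n;
        int i = 0, j = 0, d = n;
        int[] dest = null;
        while (i < alen || j < blen) {
            if (d == n) { // output block full, grab a recycled (or new) one
                dest = free.isEmpty() ? new int[n] : free.poll();
                out.add(dest);
                d = 0;
            }
            int av = i < alen ? a.get(i / n)[i % n] : Integer.MAX_VALUE;
            int bv = j < blen ? b.get(j / n)[j % n] : Integer.MAX_VALUE;
            if (av <= bv) {
                dest[d++] = av;
                if (++i % n == 0) free.add(a.get(i / n - 1)); // source block consumed
            } else {
                dest[d++] = bv;
                if (++j % n == 0) free.add(b.get(j / n - 1));
            }
        }
        return out;
    }
}
```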
Although the percentage space overhead is high for small lists, it's small in absolute terms. And for small lists we have a single block and a simple sort with no merging required (and therefore no extra temporary space for merging).
I implemented this in jSuneido since that's what we're using in production. But Java doesn't have an integer array sort that takes a custom comparator. (There are ways to do it, but not efficiently.) I ended up borrowing the sort from fastutil (which appears to have similar origins to the Java sort). I've been mostly programming in Go lately, so it was a bit of a shift to be back in Java.
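For illustration, the standard-library workaround looks something like this: box the `int[]` to `Integer[]` so `Arrays.sort` can take a `Comparator`. It works, but it allocates an object per element, which is exactly the inefficiency mentioned above (libraries like fastutil instead provide primitive-array sorts that take a primitive comparator, avoiding the boxing).

```java
import java.util.Arrays;
import java.util.Comparator;

public class SortDemo {
    // Sort an int[] in a custom (here, descending) order via boxing.
    // One Integer object per element - fine for small arrays, costly for
    // the millions of keys a temporary index might hold.
    static Integer[] sortDescending(int[] keys) {
        Integer[] boxed = Arrays.stream(keys).boxed().toArray(Integer[]::new);
        Arrays.sort(boxed, Comparator.reverseOrder());
        return boxed;
    }
}
```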
My initial intuition proved correct and overall it was about twice as fast. Of course, this is just one small part of a large system so I'm not sure how much speedup we'll see in actual use. Still, I figure it was probably worth the day and a half it took me.