
Tuesday, May 09, 2006

Rebuttal

So, I sent a link to the benchmark to my friend at Blizzard. Surprisingly, it provoked a rather lengthy rebuttal from the "lead of our technology team" (to whom my friend had forwarded my report). I'm not sure if I'm allowed to post the original e-mail, so I'll just respond to the points in it.

It's true that it's an artificial benchmark. I was looking for a simple and elegant method that would have some resemblance to real-world data. I actually did consider writing a program to record every block of memory something like Warcraft 3 allocates, and then use that as input for my tests. But ultimately laziness got the better of me. Maybe sometime I'll try that.

I honestly hadn't considered the possibility that a memory allocator would be optimized for free time rather than allocation time; I expected free to take an almost negligible amount of time, which is why I only benchmarked allocations (I assumed allocation would be the bottleneck). I might have to do some free benchmarks as well, then.

I'm not quite certain what he meant by SMem being intended for use with multiple heaps (I can think of two possible meanings). I'm aware that SMem also supports the creation of private heaps, although I hadn't seen that used a great deal in my reverse-engineering (though my work is getting pretty dated, and I don't know what techniques they use currently). If he's referring to slab allocation, that's great; that's the way to optimize allocation performance (which is why I added it to LibQ). This would, of course, produce drastically different results than the benchmark did; it would also produce different results for HeapAlloc and Hoard (at least, if Hoard can do multiple heaps).
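For anyone unfamiliar with slab allocation, here's a minimal sketch of the general idea (just an illustration, not how SMem or LibQ actually implements it): carve one big block into fixed-size slots and hand them out from a free list, so both allocation and free are a single pointer push or pop.

    /* Minimal slab-allocation sketch: fixed-size slots served from a free list.
     * This illustrates the general technique only; it is not SMem's or LibQ's code. */
    #include <stdlib.h>

    typedef struct SlabNode { struct SlabNode *next; } SlabNode;

    typedef struct {
        void     *memory;    /* the backing block */
        SlabNode *free_list; /* singly-linked list of free slots */
    } Slab;

    /* Create a slab of 'count' slots, each 'slot_size' bytes (must be >= sizeof(SlabNode)). */
    static int slab_init(Slab *slab, size_t slot_size, size_t count)
    {
        char *base = malloc(slot_size * count);
        if (!base) return 0;
        slab->memory = base;
        slab->free_list = NULL;
        for (size_t i = 0; i < count; i++) {   /* thread every slot onto the free list */
            SlabNode *node = (SlabNode *)(base + i * slot_size);
            node->next = slab->free_list;
            slab->free_list = node;
        }
        return 1;
    }

    /* Allocation and free are both O(1) -- no size lookup, no searching. */
    static void *slab_alloc(Slab *slab)
    {
        SlabNode *node = slab->free_list;
        if (node) slab->free_list = node->next;
        return node;   /* NULL if the slab is exhausted */
    }

    static void slab_free(Slab *slab, void *ptr)
    {
        SlabNode *node = (SlabNode *)ptr;
        node->next = slab->free_list;
        slab->free_list = node;
    }

Because every slot is the same size, there's no fragmentation within the slab and no header per block, which is exactly why it beats a general-purpose allocator for the sizes it serves.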

The other possibility is that he's referring to the global Storm heaps. For global allocations, Storm uses a set of 256 different heaps, each protected by a separate mutex. Which heap is used is based on the hash of a tag string for each allocation (typically this is either __FILE__ or the decorated name of the structure being allocated). I hadn't really thought that this was to promote similarity of data in each heap; I was thinking it was more to ensure that any two threads would virtually never lock the same heap at once. But if that's the case, it's conceivable that it could influence the results, as there might not be an even distribution of sizes in each heap, given the huge number of heaps.
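To make that concrete, here's roughly what the tag-hashed global-heap scheme looks like. This is my own reconstruction, not Storm's actual code: the hash function, the names, and the use of Win32 heaps with per-heap critical sections are all stand-ins; only the 256-heap, hash-on-tag structure comes from the description above.

    /* Sketch of tag-hashed global heaps: hash the tag, pick one of 256 heaps,
     * and lock only that heap, so threads allocating with different tags
     * almost never contend.  Reconstruction for illustration only. */
    #include <windows.h>
    #include <stddef.h>

    #define HEAP_COUNT 256

    static HANDLE           g_heaps[HEAP_COUNT];
    static CRITICAL_SECTION g_locks[HEAP_COUNT];

    /* Call once at startup. */
    static void tagged_heaps_init(void)
    {
        for (int i = 0; i < HEAP_COUNT; i++) {
            /* HEAP_NO_SERIALIZE because we do our own per-heap locking. */
            g_heaps[i] = HeapCreate(HEAP_NO_SERIALIZE, 0, 0);
            InitializeCriticalSection(&g_locks[i]);
        }
    }

    /* Simple FNV-1a string hash; whatever hash Storm really uses, I don't know. */
    static unsigned hash_tag(const char *tag)
    {
        unsigned h = 2166136261u;
        while (*tag) { h ^= (unsigned char)*tag++; h *= 16777619u; }
        return h;
    }

    /* Call sites would typically pass __FILE__ or a type name as the tag. */
    static void *tagged_alloc(size_t size, const char *tag)
    {
        unsigned index = hash_tag(tag) % HEAP_COUNT;
        EnterCriticalSection(&g_locks[index]);
        void *ptr = HeapAlloc(g_heaps[index], 0, size);
        LeaveCriticalSection(&g_locks[index]);
        return ptr;
    }

Note that in a scheme like this every allocation from a given call site lands in the same heap, which is why the sizes in any one heap could end up far from evenly distributed.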

The average block size was 256 bytes. The allocations were randomly chosen from 1 to 512 bytes, with equal probability. While equal probability may not be realistic, it might have been a good thing, given that I couldn't have imagined the bizarre shape of the SMemAlloc "curve" on the graph :P The reason for using numbers this small was the assumption that larger allocations would be fairly infrequent, and so less important performance-wise (with very large blocks getting allocated directly from the OS). I was actually worried that I might skew the data in unrealistic ways if I made the maximum size any larger.
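For reference, the benchmark loop looked something like this in shape (this is not the original code: the allocation count is an arbitrary number I picked for the sketch, and malloc stands in for whichever allocator was under test):

    /* Shape of the allocation benchmark: sizes drawn uniformly from 1..512 bytes,
     * so the average request is roughly 256 bytes.  Only the allocation pass
     * was timed; the free pass at the end wasn't part of the measurement. */
    #include <stdlib.h>
    #include <stdio.h>

    #define NUM_ALLOCS 1000000   /* arbitrary count for this sketch */
    #define MAX_SIZE   512

    int main(void)
    {
        static void *blocks[NUM_ALLOCS];

        srand(12345);  /* fixed seed so every allocator sees the same size sequence */
        for (int i = 0; i < NUM_ALLOCS; i++) {
            size_t size = (size_t)(rand() % MAX_SIZE) + 1;  /* uniform 1..512 */
            blocks[i] = malloc(size);
        }

        for (int i = 0; i < NUM_ALLOCS; i++)
            free(blocks[i]);

        printf("done\n");
        return 0;
    }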

This was exclusively a single-threaded benchmark by design. There were two reasons for this, one being obvious: I don't have a multiprocessor system :P The other was that multiprocessor benchmarks are fairly common (e.g. on the Hoard site), and so didn't really justify a new benchmark (remember that finding problems with Storm wasn't the original goal of the benchmark). That, and the fact that multithreaded performance is relatively predictable: vanilla HeapAlloc is going to do terribly (due to having a single heap mutex), while Storm is going to be very fast (due to more or less never having allocation collisions). In fact, the allocator I'm working on is highly scalable (completely lock-free, and actually somewhat resembling the Windows low-fragmentation heap), but I also expected it to be faster than typical allocators for individual allocations (hence the survey of the competition).
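As an aside, here's about the simplest possible illustration of what "lock-free" means in this context: a bump allocator whose only shared state is advanced with an atomic fetch-and-add, so no thread ever waits on another. It's nothing like the actual allocator I'm working on (it can't even free individual blocks); it just shows the principle.

    /* Lock-free bump allocator sketch: the only shared state is an offset
     * advanced atomically, so concurrent allocations never block each other.
     * Illustration of the principle only -- individual blocks can't be freed. */
    #include <stdatomic.h>
    #include <stddef.h>

    #define ARENA_SIZE (16u * 1024u * 1024u)

    static char          g_arena[ARENA_SIZE];
    static atomic_size_t g_offset = 0;   /* next free byte in the arena */

    static void *lockfree_bump_alloc(size_t size)
    {
        size = (size + 15) & ~(size_t)15;                /* round up to 16-byte alignment */
        size_t old = atomic_fetch_add(&g_offset, size);  /* claim the range atomically */
        if (old + size > ARENA_SIZE)
            return NULL;                                 /* arena exhausted */
        return g_arena + old;
    }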

Lastly, speaking of the Y axis of the graph, you might find this graph interesting, in which I uncapped the Y axis (I had manually capped it to give the best view of the data) :P

2 comments:

Anonymous said...

Hmm I love the idea behind this website, very unique.

Anonymous said...

Very pretty site! Keep working. thnx!