NonBlocking HashTable Source Code

I am pleased to announce, after a loooong delay, that the source code to my NonBlocking Hash Table is available as open source on SourceForge:

http://sourceforge.net/projects/high-scale-lib
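For anyone who just wants to try it, here's a minimal usage sketch. I'm assuming the class is named NonBlockingHashMap, lives in the org.cliffc.high_scale_lib package, and behaves as a drop-in ConcurrentMap (check the SourceForge download for the actual package layout):

```java
import java.util.concurrent.ConcurrentMap;
// Package name assumed; adjust to whatever the release jar actually uses.
import org.cliffc.high_scale_lib.NonBlockingHashMap;

public class NBHMDemo {
  public static void main(String[] args) {
    // Used here exactly like a ConcurrentHashMap: same ConcurrentMap contract,
    // but the implementation is lock-free.
    ConcurrentMap<String, Integer> hits = new NonBlockingHashMap<String, Integer>();
    hits.putIfAbsent("/index.html", 0);
    hits.replace("/index.html", 0, 1);
    System.out.println(hits.get("/index.html"));  // prints 1
  }
}
```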

I’ll be adding to this library over time when I’m not so busy!  Right now I’m porting Java6 to Azul, reviewing OOPSLA papers (19 papers, about 15 pages each, mostly thick academic stuff), and making JavaOne slides.  In fact, I’ll be talking about the NonBlocking Hash Table at JavaOne (slides) this year, along with a couple of other talks.

Here’s an interesting (to me) discussion about tiered compilation in HotSpot.  Interesting to me because at Azul we’ve basically forked from the main HotSpot source base on this issue.  Our version of tiered compilation is alive and well, with a tidy performance win pretty much across the board.  Of course, we don’t inline true virtual calls (i.e., megamorphic calls, calls which actually target more than one method at runtime, as opposed to those that can be statically predicted), because our hardware lets us do such calls fairly cheaply.  Inlining the megamorphic call “nails down” the decision to do the Java-level call via the most expensive mechanism (albeit with compiler-aided scheduling, which will help Sparc & X86 but not Azul), and nails it down at “server” compile time.
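To make the terms concrete, here's a contrived sketch (the names are mine, not HotSpot's) of a single virtual call site that is monomorphic or megamorphic purely depending on which receiver classes actually reach it:

```java
public class CallSites {
  interface Shape { double area(); }
  static class Circle implements Shape { public double area() { return 3.14159; } }
  static class Square implements Shape { public double area() { return 1.0; } }
  static class Tri    implements Shape { public double area() { return 0.5; } }

  // One virtual call site: s.area().  Mono- vs megamorphic is a dynamic
  // property of which classes have flowed through here so far.
  static double sum(Shape[] shapes) {
    double total = 0;
    for (Shape s : shapes)
      total += s.area();
    return total;
  }

  public static void main(String[] args) {
    // Only Circles so far: the site is monomorphic, and a JIT can safely
    // devirtualize and inline Circle.area() behind a cheap class check.
    sum(new Shape[]{ new Circle(), new Circle(), new Circle() });

    // Now three receiver classes hit the same site: it has gone megamorphic,
    // and inlining any single target "nails down" a guess that is often wrong.
    sum(new Shape[]{ new Circle(), new Square(), new Tri() });
  }
}
```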

Since Azul’s tiered compilation does not nail down the decision to do such calls “the hard way”, if a call site turns out to be really monomorphic we get to make the call via an inline-cache, i.e., really cheaply.

Cliff

PS: I struck out on Wikipedia today, failing to find entries for: megamorphic calls, inline caches, tiered JIT compilation (and several variations on that theme), as well as entries for IBM’s J9 JVM (which I know has tiered compilation).  How many readers of this blog know what an inline-cache is?  (hint: it’s a way to make >95% of virtual Java calls go nearly as fast as plain static calls).
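For those who haven't met the term, here is a purely illustrative sketch (mine, not HotSpot's or Azul's actual code) of the idea, written in plain Java as an analogy for the class-check-plus-direct-call stub a JIT patches into the machine code at a virtual call site:

```java
public class InlineCacheSketch {
  interface Shape { double area(); }
  static class Circle implements Shape { public double area() { return 3.14159; } }
  static class Square implements Shape { public double area() { return 1.0; } }

  static double callArea(Shape s) {
    // The call site was "patched" for the first receiver class it saw (Circle).
    // One cheap class check guards what amounts to a direct, statically bound call.
    if (s.getClass() == Circle.class)
      return ((Circle) s).area();   // cache hit: nearly as fast as a static call
    // Cache miss: fall back to the generic virtual dispatch.  A real inline cache
    // may re-patch itself here, or give up and go "megamorphic" (vtable dispatch).
    return s.area();
  }

  public static void main(String[] args) {
    System.out.println(callArea(new Circle()));  // the >95% common case: a hit
    System.out.println(callArea(new Square()));  // the rare miss
  }
}
```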