Leapfrog: Piggybacking

Piggybacking Lost Lists on the Won Bitmaps

Saving Seek Operations

As a refinement that helps speed up the generation of very lengthy end-games, where for very many generations the lost(N) sets are very small, we piggyback them on the won bitmaps. This way, no separate seeks are needed to read and write the lost(N) info. The drawback is that we cannot select the specific lost(N) we are interested in, but have to read the info for all N up to the one we are interested in, which takes extra transfer time. So we only use this for lost(N) sets that are really small. A highly populated lost(N) takes about 1MB per chunk (10% of the positions, represented by 5 bits each), but for cycles that uncover only 0.1%-0.01% of the lost positions, represented by 1.5-2 bytes on average, this drops to 24KB-3.2KB. So we could store 8-60 cycles of lost(N) info in the spacers. Even when this is not enough to store all cycles of very low density, it can still bring substantial savings: as soon as the area starts overflowing, we can copy all the lost(N) info in there (which was read anyway) to the regular lost(N) area of the tablebase, freeing the piggyback area for the next so many cycles. The piggyback area is thus used to accumulate a number of cycles at very small cost.
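The sizing arithmetic above can be sketched as follows. All concrete figures here are assumptions derived from the numbers in the text: a chunk of 16M positions (so that 10% at 5 bits each gives ~1MB), and a ~200KB spacer area per chunk.

```python
# Sizing sketch for the piggyback scheme. The chunk size and spacer
# capacity are assumptions back-calculated from the figures in the text.

CHUNK_POSITIONS = 16 * 1024 * 1024   # assumed positions per chunk
SPACER_BYTES    = 200 * 1024         # assumed piggyback (spacer) capacity

def lost_list_bytes(density, bytes_per_entry):
    """Size of one cycle's lost(N) list at the given fraction of positions."""
    return CHUNK_POSITIONS * density * bytes_per_entry

def cycles_that_fit(density, bytes_per_entry):
    """How many cycles of lost(N) info the spacer area can accumulate."""
    return int(SPACER_BYTES // lost_list_bytes(density, bytes_per_entry))

# Dense cycle: 10% of positions at 5 bits (0.625 bytes) each -> ~1MB.
print(lost_list_bytes(0.10, 5 / 8) / 1024)     # ~1024 KB
# Sparse cycles: 0.1% at 1.5 bytes -> ~24KB; 0.01% at 2 bytes -> ~3.2KB,
# giving roughly the 8-60 cycles quoted in the text.
print(cycles_that_fit(0.001, 1.5))
print(cycles_that_fit(0.0001, 2.0))
```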

In fact the 200KB of lost(N) info piggybacked onto a 2MB won bitmap can be split as 100KB on either side of the bitmap. This can save some extra transfer time by not reading part of the 'outer' spacer areas, moving the data in there around such that the most recent lost(N) lists (the only ones that are needed) always immediately border the won bitmap.
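A minimal sketch of this split layout, under the stated assumptions (2MB bitmap, 100KB spacer half on each side; the function name and interface are illustrative, not from the original): because the newest lists always border the bitmap, one contiguous read of the bitmap plus only the occupied spacer bytes suffices.

```python
# Illustrative model of the split piggyback layout: lost(N) data in the
# two spacer halves grows outward from the bitmap edges, so a single
# contiguous read covers the bitmap and exactly the occupied bytes.

BITMAP_BYTES = 2 * 1024 * 1024   # won bitmap per chunk (from the text)
HALF_BYTES   = 100 * 1024        # spacer half on each side (from the text)

def read_range(used_before, used_after, bitmap_offset):
    """Byte range to read: the bitmap plus the occupied spacer parts.

    used_before/used_after: bytes of recent lost(N) lists stored in the
    spacer halves, kept adjacent to the bitmap's low/high edge.
    """
    start = bitmap_offset - min(used_before, HALF_BYTES)
    end   = bitmap_offset + BITMAP_BYTES + min(used_after, HALF_BYTES)
    return start, end

# Empty spacers: read just the 2MB bitmap; full spacers: 2MB + 200KB.
s, e = read_range(0, 0, 10 * 1024 * 1024)
print((e - s) // 1024)                           # 2048 KB
s, e = read_range(HALF_BYTES, HALF_BYTES, 10 * 1024 * 1024)
print((e - s) // 1024)                           # 2248 KB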

For a 'sparse cycle', the required amount of disk I/O for one leap-frog cycle is thus 518 x 32 reads and writes (as there are 32 pairs of white-King positions), each of slightly more than 4.2MB. At a transfer rate of 100 MB/sec and 7ms seek time, this takes 49ms per read or write per chunk-pair. For a pawnless 7-men this would amount to 1624 sec of disk I/O per cycle.
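The estimate above can be checked with a short calculation, using only figures stated in the text (~4.2MB per access, 100 MB/sec transfer, 7ms seek, 518 accesses per direction over 32 chunk-pairs, each both read and written):

```python
# Worked check of the disk-I/O estimate; all figures are from the text.

transfer_mb   = 4.2     # MB moved per read or write (bitmap + piggyback)
rate_mb_per_s = 100.0   # sustained transfer rate
seek_s        = 0.007   # average seek time

per_access_s = transfer_mb / rate_mb_per_s + seek_s
print(round(per_access_s * 1000))       # ~49 ms per read or write

accesses = 518 * 32 * 2                 # reads plus writes, 32 chunk-pairs
print(round(per_access_s * accesses))   # ~1624 s of disk I/O per cycle
```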