Deep Futility Pruning

Stand-Pat Cutoff

Futility pruning is based on the fact that at some point in the search a player can cause a beta cutoff by simply doing nothing ('standing pat'). In that case the current evaluation is taken as the score. It is as if the side to move one-sidedly declares the node to be a quiet one (so that the evaluation actually means anything). Usually standing pat is only allowed in the Quiescence Search, to which by definition we will assign search depth d=0.
Note that not all QS algorithms allow such a one-sided declaration of quiescence. Many engines do not allow standing pat while in check, for example. The assumption implicit in allowing stand pat in any position is that having the move is a sure cure against any possible threat. In the vast majority of positions this is indeed the case. Valuable pieces such as the Queen are so mobile that they can easily evade threats by withdrawing to a safe square. Pieces with low mobility, such as Pawns, can usually be defended easily by other pieces, since they are not very valuable.
Notable exceptions are forks, pins and trapped pieces. The latter two represent threats against pieces of impaired mobility, so that the correlation between value and ease of escape is broken. Checks also belong in this category, as the high value of the King derives from the rules for checkmate, and not from its intrinsic power as a piece. The King can thus be considered a piece of permanently impaired mobility, and checks can be waved away with less impunity than other threats.
Worrying about all threats is counter-productive: almost always you would merely confirm the obvious, namely that the threatened piece can be saved. The effort invested in screening every move for whether it poses a threat, and every threat for whether it can be met, is thus mostly wasted. And wasted time reduces your average search depth, defeating the purpose of finding more with less effort. It would be nice if there were easy ways to recognize severe threats other than blindly searching for solutions (and failing to find one), but even then it might waste a lot of time to subject all moves to such a test, only to find out that most do not pose a severe threat. We will therefore suppose in the following analysis that threats in QS are completely ignored, and standing pat is always allowed.
That we allow stand pat even when in check, without even testing for checkmate, might seem strange. As we will see, this approach will allow us to prune many nodes, at depths up to 6, in a theoretically sound way. Not allowing stand pat on all occasions would severely limit this ability: many prunings would only be sound if at the same time we could prove that we cannot give a check in two or three moves. In practice this is not feasible, and we would lose the pruning. So by not extending checks in leaf nodes, we are able to prune other branches, in particular those that do not give check at d=2 or d=3, much more aggressively. The resulting tree still conforms to our intuition that checks should somehow be favored in terms of search depth. The approach is to bring them into view by increasing the average search depth through increased sound pruning, rather than by extending the checks themselves.
At d=0 the side to move will stand pat if Eval >= Beta, without the need to generate or consider any moves.
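In code this is just a comparison at the top of QS. The following is a minimal sketch, assuming a fail-soft negamax framework and a hypothetical Eval() that scores the position from the side to move's point of view:

int QSearch(int alpha, int beta)
{
    int score = Eval();   /* static evaluation of the current position */
    if(score >= beta)
        return score;     /* stand pat: fail high without generating any moves */
    if(score > alpha)
        alpha = score;    /* the evaluation itself can raise alpha */
    /* ... generate, make and search captures as usual ... */
    return alpha;
}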
If we could foresee this in the parent node, we would not have to search the move leading to this QS node at all, for it will certainly fail low. Unfortunately, in the parent node the evaluation after the move is not yet available. We assume that an exact evaluation can only be obtained after actually making the move on the board, in the QS node itself. Doing the evaluation is in general much more work than making the call to QSearch anyway, so there is little to be gained by pruning in the parent if the decision to prune requires a full evaluation.
In the parent we can still prune based on an estimate of the evaluation after the move. The material component of the new evaluation is trivial to obtain as the piece value of the victim, and for the positional components that are difficult to evaluate we can use an upper bound on the difference with the current evaluation as a safety margin. We then have an estimate NewLazyEval = CurEval + PVal[Victim] + PosMargin that is trivial to compute, and that gives an upper bound on the evaluation after the move. If NewLazyEval <= Alpha (at d <= 1) we can ignore the move, as it will certainly be refuted by a stand pat. Provided PosMargin is truly an upper bound on the positional component of the evaluation change, this pruning is theoretically sound, i.e. we will never miss anything by applying it. The penalty for including the positional margin is that we cannot prune quite as often: some cases that are a close call slip through, and have to be judged by the full evaluation. But since applying the criterion is extremely cheap, anything that can be pruned (saving us the work of a Make/UnMake plus evaluation) is a gain.
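In code the criterion is a one-line test inside the move loop of a node at d <= 1. A sketch, where curEval, PVal[], Victim() and PosMargin are placeholder names for whatever the engine provides:

/* inside the move loop, at d <= 1 */
if(curEval + PVal[Victim(move)] + PosMargin <= alpha)
    continue;   /* NewLazyEval <= Alpha: the reply will stand pat, skip the move */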
Futility pruning does even more than that. Note that the condition to prune a move ("specific futility pruning") can be re-written as:
PVal[Victim] <= Alpha - CurEval - PosMargin = FutilityLimit.
If the captures are generated or sorted in order of descending victim value (MVV ordering), the first move we encounter that can be pruned automatically leads to discarding all remaining moves as well. We then have an alpha cutoff, very similar to the usual beta cutoff, and we don't have to subject every move to the test. In particular, if FutilityLimit >= 0, all non-captures become futile. These are usually the bulk (~90%) of the moves, and in many move-generation schemes we could save appreciable time by only generating captures, instead of all moves, whenever FutilityLimit becomes positive.
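With MVV ordering the per-move test thus degenerates into a single break, and the non-captures need only be generated when the limit is negative. A sketch, under the same placeholder names as before:

int futilityLimit = alpha - curEval - PosMargin;
for(i = 0; i < nrOfCaptures; i++) {
    if(PVal[Victim(captures[i])] <= futilityLimit)
        break;   /* alpha cutoff: this and all later captures are futile */
    /* ... Make, search, UnMake captures[i] ... */
}
if(futilityLimit < 0) {
    /* only now are non-captures not automatically futile;
       generate and search them too (at d = 1; QS searches captures only) */
}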
If we are aware of the value of the highest piece the opponent has on the board (e.g. from the piece list, or from a separately kept material-composition variable), it can happen that we already know all moves will be futile even before generating any. Such "general futility" occurs if
PVal[FirstPiece(Opponent)] <= FutilityLimit.
In this case we can take the alpha cutoff before move generation, as no move can possibly capture what isn't there. This of course only applies to nodes subject to specific futility pruning, i.e. at d = 0 and d = 1.
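As a sketch, assuming a FirstPiece() that delivers the opponent's most valuable piece type from the piece list or material-composition variable:

/* before move generation, at d <= 1 */
if(PVal[FirstPiece(Opponent)] <= futilityLimit)
    return curEval + PVal[FirstPiece(Opponent)] + PosMargin;   /* upper bound <= Alpha: fail low */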
The general futility is very similar to the stand-pat cutoff; it is in fact its fail-low counterpart. It therefore also has similar consequences in the parent node: the move leading to the position experiencing general futility will be a guaranteed cut-move, whereas the move leading to a stand-pat cutoff is futile. The condition for general futility is
CurEval <= Alpha - PVal[FirstPiece(Opponent)] - PosMargin.
From the parent node this reads as
NewEval >= Beta + PVal[FirstPiece(SideToMove)] + PosMargin.
The evaluation after the move can be estimated from the current evaluation in the same way as with futility pruning, except that in this case we should use a lower bound to the positional component of the move (PosMargin2, usually a negative quantity). This finally gives the condition for 'over-kill' or 'super-cutoff' as
PVal[Victim] >= Beta - CurEval + PVal[FirstPiece(SideToMove)] + PosMargin - PosMargin2 = OverKillLimit.
At d <= 2 we can take such a super-cutoff on a move with a high victim without actually searching the move. Again, if the moves are MVV-sorted, the first move will cause such a super-cutoff, if it is to occur at all.
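A sketch of this test, again with placeholder names, where PosMargin2 is the (usually negative) lower bound to the positional gain of a move:

/* at d <= 2, before searching the MVV-sorted captures */
int overKillLimit = beta - curEval + PVal[FirstPiece(SideToMove)]
                  + PosMargin - PosMargin2;
if(nrOfCaptures > 0 && PVal[Victim(captures[0])] >= overKillLimit)
    return beta;   /* super-cutoff: a guaranteed cut-move, no search needed */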
What applies to any move also applies to the null move (which of course has no victim). Thus, if CurEval is so far above beta that OverKillLimit < 0 (not even the loss of our highest piece would make a dent), we can null-move prune without actually doing a null-move search (where the reply would be searched at d = 0 or d = 1), since we already know that this search must fail high. We can thus take the super-cutoff based on evaluation, present material and positional margins alone. (Note that these margins might also be variables, depending on game stage.)
Unlike the super-cutoff on ordinary moves, which can only be applied in nodes up to d = 2, such a cutoff on the null move can be taken up to d = 2+R, as up to this depth the reply would be searched with d <= 1. The super stand-pat can thus be applied at quite large depth! The exact condition reads
CurEval >= Beta + PVal[FirstPiece(SideToMove)] + PosMargin,
as the null move also makes no positional contribution to the evaluation (so that PosMargin2 disappears). Like the normal stand pat, this one also leads to a futility consideration in the parent: it is useless to consider moves that do not raise the evaluation enough to prevent a super stand-pat. In the parent the negamaxed condition reads
NewEval <= Alpha - PVal[FirstPiece(Opponent)] - PosMargin.
With the usual lazy estimate of the evaluation after the move this becomes
PVal[Victim] <= Alpha - CurEval - PVal[FirstPiece(Opponent)] - 2*PosMargin = DeepFutilityLimit.
In principle FirstPiece() should be evaluated after the move, as the highest remaining piece. Also, the 2*PosMargin for the positional value might over-estimate the positional gain of two moves, if large terms that can only be scored once (e.g. a castling bonus) contribute to this margin. This is the condition for specific deep futility pruning, decided on a move-by-move basis.
There is one caveat, though: in Chess the highest piece is the King. It is always present, and has infinite value. Substituting infinity into the formulas makes them meaningless: when a King is involved, nothing can ever be futile! With strictly legal move generation, however, we can trust that we cannot capture the opponent's King in the current position. But we might be able to capture it on our next move if we check it now (i.e. a check now might turn out to be a checkmate when the requested further search is done). So at d >= 2, checks are never futile. With that realized, FirstPiece() can be taken to range over all pieces except the King.
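The following sketch combines the super stand-pat with the specific deep-futility test. The depth limits follow the pseudo-code at the end of this page, and GivesCheck() is a hypothetical helper:

/* super stand-pat: the null-move search is certain to fail high */
if(d <= 2+R && curEval >= beta + PVal[FirstPiece(SideToMove)] + PosMargin)
    return beta;

/* specific deep futility, inside the move loop at d = 2 or 3
   (d <= 1 is already covered by normal futility pruning) */
int deepFutilityLimit = alpha - curEval - PVal[FirstPiece(Opponent)]
                      - 2*PosMargin;
if(d <= 3 && PVal[Victim(move)] <= deepFutilityLimit && !GivesCheck(move))
    continue;   /* prune, but never prune checking moves */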
Again, from the opponent's piece composition we might be able to see at a glance that all our moves will be futile at this depth. This happens if even capturing his two highest pieces, plus margins, won't make it to alpha. When this happens, a general deep futility occurs, similar to the ordinary general futility at d = 1. Just as there, we would not even have to generate moves, except for the checks. But we might have a fast dedicated move generator for those, so that it still saves us a lot of time. The condition for general deep futility is
CurEval <= Alpha - PVal[FirstPiece(Opponent)] - PVal[SecondPiece(Opponent)] - 2*PosMargin.
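A sketch of this test, with GenerateChecks() standing in for whatever dedicated check generator the engine has:

/* before move generation, at d <= 3+R */
if(curEval <= alpha - PVal[FirstPiece(Opponent)]
                    - PVal[SecondPiece(Opponent)] - 2*PosMargin) {
    GenerateChecks();   /* all captures and quiet non-checks are futile */
    /* search only these; if there are none, the node becomes a leaf */
}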
Note that testing for general deep futility solves the problem of having to account for the fact that your capture might change the opponent's piece make-up when testing a move for specific deep futility. We can simply set the futility limit using the highest piece of the opponent. In the cases where we get to testing individual moves, this limit will be below the value of the next-highest piece, as the absence of general deep futility guarantees that the two together are worth at least what we need. Moves that capture the highest piece (and thus might have required an adjustment of the futility limit) will therefore certainly not be considered futile. This is correct, as the cases where capture of the highest piece would be non-futile only through double-counting it are exactly those recognized as general deep futility.
Because of the need to search checks it is not possible to completely prune the node experiencing general deep futility already in the parent, as was done with the super-cutoff for ordinary general futility. It would be too difficult to foresee whether checks are possible after a certain move, without first making it. It is possible, though, to first test for general deep futility based on a lazy estimate of the eval in the futile node itself, saving us the trouble of doing a full eval, and to only then start generating the checks. (There might be none, in which case the pruning makes this a leaf node anyway, and we would not even have to compute differential evaluation terms for the benefit of the sub-tree.)
The total value of the two highest pieces might be so large that there aren't many nodes in the tree where we are that much out of window. This limits the usefulness of deep futility pruning. The problem is especially severe if there are Queens present, as these have the value of 2-3 pieces all by themselves. But if one is willing to sacrifice some theoretical soundness, the following still seems very safe:
The limit for general futility at 1 < d <= 3+R, i.e. with two moves to go (the value of the opponent's two best pieces), can easily be quite large. Suppose he has Q+R+N. General futility only occurs if we (eval-wise) are more than Q+R behind (i.e. below alpha), because if not, we might get even by capturing Q+R with our remaining two moves before the opponent's stand pat.
But if we can't capture his Queen immediately, we are very unlikely to be able to capture it with our second move: the opponent is not limited to a null move in his reply, so even if we attack the Queen with our first move, or leave it under attack, he will use his second move to save it. This he will almost certainly be able to do, since a Queen is very resourceful. The exception is when he is in check at the same time, but we already decided that checks are never futile, so these will be searched anyway, regardless of whether they harass the Queen or not. Among the non-checks, only the cases where the Queen is pinned on the King spring to mind as unsolvable, and these are easily discovered as failed checking attempts (where the Queen turned out to block the ray between prospective checker and King).
So if there is no general deep futility and the opponent has a Queen, we do have to generate moves to try all Queen captures (a SEE of the opponent's Queen square might substitute, though). But otherwise we can declare a kind of weak general deep futility based on the sum of the two highest opponent pieces other than the Queen (so R+N in the example). This saves us from wasting time on any captures of these lesser pieces, in the false hope that the Queen will fall into our lap on our second move. The condition is:
CurEval <= Alpha - PVal[SecondPiece(Opponent)] - PVal[ThirdPiece(Opponent)] - 2*PosMargin + PosMargin2.
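As a sketch (heuristic rather than theoretically sound; GenerateQueenCaptures() is a hypothetical helper):

/* at d <= 3, when true general deep futility did not apply */
if(FirstPiece(Opponent) == QUEEN &&
   curEval <= alpha - PVal[SecondPiece(Opponent)]
                    - PVal[ThirdPiece(Opponent)] - 2*PosMargin + PosMargin2) {
    GenerateChecks();          /* including attempts that reveal Queen pins */
    GenerateQueenCaptures();   /* only a Queen capture can bridge the gap */
}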
If even this weak general futility condition is not met, we have to set the limit for specific futility (which we will then apply to all moves after capture generation) as the futility gap minus the opponent's second piece, rather than his Queen. So in the example, if the futility gap was 7 Pawns, Q+R could bridge it (so no general deep futility), and R+N could bridge it as well (so the gap is not large enough to declare weak general deep futility and consider only Queen captures next to the checks). But the futility limit for the other moves can be set at the capture of 7-5 = 2 Pawns, i.e. captures of at least a minor Piece would qualify, and we can hope for the Rook later (while the opponent might be busy saving his Queen, you never know...).
The remaining moves would be futile at d=2 or d=3. Unfortunately not up to d=3+R, as the guarantee that the opponent's null move will fail high exists only for true general deep futility. In the weak-futility case we cannot exclude that the null move will fail low because his Queen is under attack, so that he will have to play another move to limit the damage to R+N.
In principle, similar reasoning could be applied to other situations where there is a single highest piece. But even for a Rook it is much less certain than for a Queen that it can always move to safety. Furthermore, a Rook does not exceed the other pieces in value as spectacularly as a Queen does, so excluding the Rook while minor Pieces are still present doesn't cover many extra cases. That, plus the fact that there are usually two Rooks (so that unsolvable double threats against both of them might still lead to forced loss of one on the second move), makes a good case for reserving this kind of reasoning for the Queen only.
The following pseudo-code indicates how deep futility can be used to prune moves and to save time on move generation.
/* Normal futility */
FutilityGap = Alpha - Eval - PosMargin;
if(d<=1 && FutilityGap > 0) {
    if(PVal[FirstPiece[xcolor]] < FutilityGap) FAIL_LOW;
    GENERATE_CAPTURES;
    FutilityLimit = FutilityGap;
    DISCARD MOVES WITH PVal[Victim] < FutilityLimit;
} else if(d==0) GENERATE_CAPTURES;
else
#ifndef DEEPFUTILITY
    GENERATE_ALL_MOVES;
#else
/* deep futility */
{
    FutilityGap = Alpha - Eval - DeepMargin;
    if(d<=3+R && PVal[FirstPiece[xcolor]] + PVal[SecondPiece[xcolor]] < FutilityGap)
        GENERATE_CHECKS;
    else if(d<=3 && FutilityGap > 0) {
        GENERATE_CHECKS(incl. Q pins);
        if(FirstPiece[xcolor] == QUEEN) {
            if(PVal[SecondPiece[xcolor]] + PVal[ThirdPiece[xcolor]] < FutilityGap) {
                GENERATE_QUEEN_CAPTURES;
            } else {
                FutilityLimit = FutilityGap - PVal[SecondPiece[xcolor]];
                if(FutilityLimit > 0) {
                    GENERATE_CAPTURES;
                    DISCARD MOVES WITH PVal[Victim] < FutilityLimit && not_check;
                } else GENERATE_ALL_MOVES;
            }
        } else {
            FutilityLimit = FutilityGap - PVal[FirstPiece[xcolor]];
            if(FutilityLimit > 0) {
                GENERATE_CAPTURES;
                DISCARD MOVES WITH PVal[Victim] < FutilityLimit && not_check;
            } else GENERATE_ALL_MOVES;
        }
    } else GENERATE_ALL_MOVES;
}
/* In practice discarding the moves will be done in the Make/UnMake loop
   as an alpha cutoff */
#endif