Expected efficiency gain of alpha-beta pruning optimizations: TT, MTD(f)?

What is the expected performance gain when adding transposition tables, and then MTD(f), to a pure alpha-beta pruning search for chess?

In my plain alpha-beta search, I use the following move ordering:

  • principal variation (from the previous iteration of iterative deepening)
  • best move from the transposition table (if any)
  • captures, ordered by MVV-LVA: most valuable victim first, then least valuable attacker
  • killer moves
  • history heuristic
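To make the ordering concrete, here is a minimal sketch of how those priorities could be turned into a single sort key. The `Move` class, the tier constants, and the `order_moves` signature are all hypothetical illustrations, not the asker's actual code:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Move:
    """Hypothetical move record; piece values in centipawns (pawn=100 ... queen=900)."""
    name: str
    is_capture: bool = False
    victim_value: int = 0    # value of the captured piece
    attacker_value: int = 0  # value of the moving piece

def order_moves(moves, pv_move=None, tt_move=None, killers=(), history=None):
    history = history or {}
    def score(m):
        if m == pv_move:
            return 1_000_000                 # principal variation first
        if m == tt_move:
            return 900_000                   # then the TT best move
        if m.is_capture:                     # MVV-LVA: big victim, cheap attacker
            return 800_000 + 10 * m.victim_value - m.attacker_value
        if m in killers:
            return 700_000                   # killer moves
        return history.get(m.name, 0)        # history heuristic for quiet moves
    return sorted(moves, key=score, reverse=True)
```

The fixed score tiers guarantee that every PV move outranks every TT move, every capture outranks every killer, and so on, while MVV-LVA and the history counters break ties within a tier.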

With the above setup, I get the following average results at depth 9:

  • pure alpha-beta: X nodes visited
  • alpha-beta + TT: 0.5 × X nodes visited
  • alpha-beta + TT + MTD(f): 0.25 × X nodes visited

I was expecting better results, and I want to make sure this is as good as it gets, or whether there is something wrong with my implementation.

I evaluate positions to the nearest 0.1 pawn.
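For reference when checking an implementation, the MTD(f) driver itself is short: it converges on the minimax value through repeated zero-window alpha-beta probes, and its efficiency depends almost entirely on the transposition table reusing work between probes (a coarser evaluation granularity, such as 0.1 pawn, also means fewer probes). Below is a correctness-only sketch over a toy tree with no TT, so it illustrates the control flow rather than a fast engine search:

```python
INF = 10**9

def alphabeta(node, alpha, beta, maximizing=True):
    """Fail-soft alpha-beta on a toy tree: leaves are ints, interior nodes are lists."""
    if isinstance(node, int):
        return node
    if maximizing:
        best = -INF
        for child in node:
            best = max(best, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, best)
            if alpha >= beta:      # beta cutoff
                break
        return best
    best = INF
    for child in node:
        best = min(best, alphabeta(child, alpha, beta, True))
        beta = min(beta, best)
        if alpha >= beta:          # alpha cutoff
            break
    return best

def mtdf(root, first_guess=0):
    """Narrow [lower, upper] around the minimax value with zero-window probes."""
    g = first_guess
    lower, upper = -INF, INF
    while lower < upper:
        beta = g + 1 if g == lower else g
        g = alphabeta(root, beta - 1, beta)  # zero-window search at beta
        if g < beta:
            upper = g                        # search failed low
        else:
            lower = g                        # search failed high
    return g
```

If a real implementation with a TT visits notably more nodes than plain alpha-beta + TT, the usual suspects are TT entries that don't store both bound types (upper and lower) or a first guess far from the true score, which inflates the number of probes.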
