The idea behind Warshall's algorithm and how to improve it
Warshall's and Floyd's algorithms are based on essentially the same idea: exploit a relationship between a problem and its simpler rather than smaller version. Warshall and Floyd published their algorithms without mentioning dynamic programming. Nevertheless, the algorithms certainly have a dynamic programming flavor and have come to be considered applications of this technique.
ALGORITHM Warshall(A[1..n, 1..n])
//Implements Warshall's algorithm for computing the transitive closure
//Input: The adjacency matrix A of a digraph with n vertices
//Output: The transitive closure of the digraph
R^(0) ← A
for k ← 1 to n do
    for i ← 1 to n do
        for j ← 1 to n do
            R^(k)[i, j] ← R^(k−1)[i, j] or (R^(k−1)[i, k] and R^(k−1)[k, j])
return R^(n)
We can speed up the above implementation of Warshall's algorithm for some inputs by restructuring its innermost loop.
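For reference, here is a direct Python translation of the pseudocode as I read it (my own sketch; the function name warshall and the 0/1 list-of-lists matrix representation are my choices, not the book's):

from copy import deepcopy

def warshall(adjacency):
    """Transitive closure of a digraph given by a 0/1 adjacency matrix."""
    n = len(adjacency)
    prev = deepcopy(adjacency)            # R^(0) = A
    for k in range(n):
        cur = deepcopy(prev)              # build R^(k) from R^(k-1)
        for i in range(n):
            for j in range(n):
                # 1 if i already reached j, or i reaches j through vertex k
                cur[i][j] = prev[i][j] or (prev[i][k] and prev[k][j])
        prev = cur
    return prev                           # R^(n)

# Small example: edges 0->1, 1->3, 3->0, 3->2
a = [[0, 1, 0, 0],
     [0, 0, 0, 1],
     [0, 0, 0, 0],
     [1, 0, 1, 0]]
print(warshall(a))  # [[1, 1, 1, 1], [1, 1, 1, 1], [0, 0, 0, 0], [1, 1, 1, 1]]

In practice the same closure is usually computed in place with a single matrix, which avoids the copies; I kept two matrices here only so the code mirrors the pseudocode line by line.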
My questions about the text above are as follows:
- What does the author mean by the idea "exploit a relationship between a problem and its simpler rather than smaller version"? Please elaborate.
- How can we speed up the above implementation, as the author mentions?
The formula in the algorithm above means that the shortest-path problem (which can be considered a generalization of the transitive-closure problem) has an optimal substructure property; however, there is no formal description of this property (in the sense of a mathematical definition). An optimal substructure property is necessary for a problem to be solvable by dynamic programming.
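To make the "generalization" concrete: Floyd's algorithm for all-pairs shortest paths runs the same triple loop with the boolean operations replaced by (min, +), i.e. D^(k)[i, j] = min(D^(k−1)[i, j], D^(k−1)[i, k] + D^(k−1)[k, j]). A shortest path from i to j using only the first k vertices as intermediates either avoids vertex k or passes through it, and the better of the two sub-solutions is kept; that is exactly the optimal substructure being appealed to. A minimal sketch under my own conventions (math.inf marks a missing edge, in-place update, no negative cycles assumed):

from math import inf

def floyd(weights):
    """All-pairs shortest path distances; weights[i][j] is the edge weight or inf."""
    n = len(weights)
    d = [row[:] for row in weights]   # D^(0); assumes weights[i][i] == 0
    for k in range(n):
        for i in range(n):
            for j in range(n):
                # Same shape as Warshall's formula: "or" becomes min, "and" becomes +
                d[i][j] = min(d[i][j], d[i][k] + d[k][j])
    return d

Warshall's boolean version falls out as the special case where the only information kept about a path is whether it exists at all.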