Class AdjustedSum

All Implemented Interfaces:
Heuristic, PlanningGraphHeuristic, StateHeuristic, Serializable

public final class AdjustedSum extends RelaxedGraphHeuristic
This class implements the adjusted sum heuristic. This heuristic improves the sum heuristic by accounting for both negative and positive interactions among propositions. Since fully accounting for either type of interaction can be as hard as the planning problem itself, we circumvent this difficulty by using a phased approach: we ignore one type of subgoal interaction in order to account for the other, and then combine the two estimates.

Let cost(p) denote the cost of achieving a proposition p according to the sum heuristic. Note that it is simple to embed the sum heuristic values into the planning graph: we maintain a cost value for each new proposition. We are now interested in estimating the cost cost(S) of achieving a set S = {p1, p2, ..., pn}. Suppose lev(p1) <= lev(p2) <= ... <= lev(pn), where lev(p) is the level at which the proposition p appears for the first time in the planning graph. Under the assumption that all propositions are independent, we have:

 cost(S) := cost(S - p1) + cost(p1)
 

Since lev(p1) <= lev(S - p1), proposition p1 is possibly achieved before the set S - p1. Now, we assume that there are no positive interactions among the propositions, but there may be negative interactions. Therefore, upon achieving S - p1, subgoal p1 may have been deleted and needs to be achieved again. This information can be extracted from the planning graph as follows. According to the planning graph, the sets S - p1 and S are possibly achieved at level lev(S - p1) and level lev(S), respectively. If lev(S - p1) != lev(S), there may be some negative interaction between achieving S - p1 and achieving p1 (because the planning graph has to expand up to lev(S) to achieve both S - p1 and p1). To take this negative interaction into account, we assign:

 cost(S) := cost(S - p1) + cost(p1) + (lev(S) - lev(S - p1))
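
As a quick illustration with hypothetical numbers: take S = {p1, p2}, so S - p1 = {p2}, and suppose cost(p1) = 1, cost(p2) = 2, lev(p1) = 2, lev(S - p1) = lev(p2) = 3, and lev(S) = 5. Then:

 cost(S) := 2 + 1 + (5 - 3) = 5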
 
Applying this recurrence to S - p1, then to S - p1 - p2, and so on, we derive:
 cost(S) := sum(cost(pi) for all pi in S) + lev(S) - lev(pn)
 
Note that lev(pn) = max(lev(pi) for all pi in S), given our ordering of the propositions. Thus, the estimate above is composed of the sum heuristic function hsum(S) = sum(cost(pi) for all pi in S) and an additional cost lev(S) - max(lev(pi) for all pi in S). This difference is called the interaction degree among the propositions in the set S.

Definition (Interaction degree). The interaction degree among propositions in a set S is:

 delta(S) := lev(S) - max(lev(p) for all p in S)
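
With the same hypothetical numbers as in the example above (lev(p1) = 2, lev(p2) = 3, and lev(S) = 5 for S = {p1, p2}):

 delta(S) := 5 - max(2, 3) = 2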
 

It should be noted that the notion of binary interaction degree is only a special case of the above definition, obtained for a set of two propositions. In addition, when there is no negative interaction among the subgoals, delta(S) = 0, as expected. We obtain the following heuristic:

 hadjsum(S) := sum(cost(pi) for all pi in S) + delta(S)
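
The computation can be summarized by the sketch below. This is not the library's implementation: the maps cost and lev and the value levOfS are hypothetical stand-ins for quantities that are, in practice, read off the relaxed planning graph.

 import java.util.Map;
 import java.util.Set;

 final class AdjustedSumSketch {

     // hadjsum(S) = sum of cost(p) over all p in S, plus the interaction
     // degree delta(S) = levOfS - max(lev(p) for all p in S).
     static int adjustedSum(Set<String> s, Map<String, Integer> cost,
                            Map<String, Integer> lev, int levOfS) {
         int sum = 0;     // sum heuristic part: sum of individual costs
         int maxLev = 0;  // max(lev(p)) over all p in S
         for (String p : s) {
             sum += cost.get(p);
             maxLev = Math.max(maxLev, lev.get(p));
         }
         return sum + (levOfS - maxLev);
     }

     public static void main(String[] args) {
         // Hypothetical numbers from the running example.
         Set<String> s = Set.of("p1", "p2");
         Map<String, Integer> cost = Map.of("p1", 1, "p2", 2);
         Map<String, Integer> lev = Map.of("p1", 2, "p2", 3);
         System.out.println(adjustedSum(s, cost, lev, 5)); // (1 + 2) + (5 - 3) = 5
     }
 }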
 
Warning: The adjusted sum heuristic is not admissible.
See Also:
Sum, Max, Serialized Form
  • Constructor Details

    • AdjustedSum

      public AdjustedSum(Problem problem)
      Creates a new AdjustedSum heuristic for a specified planning problem.
      Parameters:
      problem - the planning problem.
      Throws:
      NullPointerException - if problem == null.
  • Method Details

    • estimate

      public int estimate(State state, Condition goal)
Returns the estimated distance from the specified state to the goal. If the returned value is Integer.MAX_VALUE, the goal is unreachable from the specified state.
      Parameters:
      state - the state from which the distance to the goal must be estimated.
      goal - the goal expression.
      Returns:
the estimated distance to the goal from the specified state.
    • estimate

      public double estimate(Node node, Condition goal)
Returns the estimated distance from the specified node to the goal. If the returned value is Double.MAX_VALUE, the goal is unreachable from the specified node.
      Parameters:
node - the node from which the distance to the goal must be estimated.
      goal - the goal expression.
      Returns:
the estimated distance to the goal from the specified node.
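
A minimal usage sketch, assuming a Problem, a State, and a goal Condition have already been built elsewhere (the parsing/setup code and the exact package imports depend on the library version in use):

 // Hedged sketch: `problem`, `state`, and `goal` are assumed to come
 // from the library's usual problem-construction pipeline.
 static int estimateWithAdjustedSum(Problem problem, State state, Condition goal) {
     // The constructor throws NullPointerException if problem == null.
     AdjustedSum heuristic = new AdjustedSum(problem);
     int h = heuristic.estimate(state, goal);
     if (h == Integer.MAX_VALUE) {
         // The goal is unreachable from the given state.
         System.out.println("Goal unreachable from this state.");
     }
     return h;
 }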