Class AdjustedSum
- All Implemented Interfaces:
Heuristic, PlanningGraphHeuristic, StateHeuristic, Serializable
Let cost(p) denote the cost of achieving a proposition p according to the sum heuristic. Note that it is simple to embed the sum heuristic value into the planning graph: we maintain a cost value for each new proposition. We are now interested in estimating the cost cost(S) of achieving a set S = {p1, p2, ..., pn}. Suppose lev(p1) <= lev(p2) <= ... <= lev(pn), where lev(p) is the level at which the proposition p appears for the first time in the planning graph. Under the assumption that all propositions are independent, we have

cost(S) := cost(S - p1) + cost(p1)
Since lev(p1) <= lev(S - p1), proposition p1 is possibly achieved before the set S - p1. Now, we assume that there are no positive interactions among the propositions, but there may be negative interactions. Therefore, upon achieving S - p1, subgoal p1 may have been deleted and may need to be achieved again. This information can be extracted from the planning graph as follows. According to the planning graph, the sets S - p1 and S are possibly achieved at level lev(S - p1) and level lev(S), respectively. If lev(S - p1) != lev(S), there may be some interaction between achieving S - p1 and achieving p1 (because the planning graph has to expand up to lev(S) to achieve both S - p1 and p1). To take this negative interaction into account, we assign:

cost(S) := cost(S - p1) + cost(p1) + (lev(S) - lev(S - p1))

Applying this formula to S - p1, S - p1 - p2, and so on, we derive:

cost(S) := sum(cost(pi) for all pi in S) + lev(S) - lev(pn)

Note that lev(pn) = max(lev(pi) for all pi in S) by our ordering. Thus the estimate above is composed of the sum heuristic function hsum(S) := sum(cost(pi) for all pi in S) and an additional cost lev(S) - max(lev(pi) for all pi in S). This difference is called the interaction degree among the propositions in the set S.
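The telescoping step in the derivation above can be checked numerically. The sketch below is plain Java with made-up cost and level values (it is not part of the PDDL4J API): it unrolls the recursive assignment over a three-proposition set and compares the result with the closed form.

```java
public class TelescopeCheck {

    // Unrolls cost(S) = cost(S - p1) + cost(p1) + (lev(S) - lev(S - p1))
    // for S = {p1, p2, p3}; levS23 is the (hypothetical) level of {p2, p3},
    // and levS is the level of the full set S.
    static int unrolled(int[] cost, int[] lev, int levS23, int levS) {
        int cost3 = cost[2];                               // cost({p3}) = cost(p3)
        int cost23 = cost3 + cost[1] + (levS23 - lev[2]);  // add p2
        return cost23 + cost[0] + (levS - levS23);         // add p1
    }

    // Closed form: sum(cost(pi)) + lev(S) - lev(pn).
    static int closedForm(int[] cost, int[] lev, int levS) {
        return cost[0] + cost[1] + cost[2] + levS - lev[2];
    }

    public static void main(String[] args) {
        int[] cost = {1, 2, 3};   // cost(p1), cost(p2), cost(p3)
        int[] lev = {1, 2, 3};    // lev(p1) <= lev(p2) <= lev(p3)
        System.out.println(unrolled(cost, lev, 4, 5));   // 8
        System.out.println(closedForm(cost, lev, 5));    // 8
    }
}
```

Note that the intermediate level lev(S - p1) cancels out when the formula is unrolled, which is why the closed form depends only on lev(S) and lev(pn).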
Definition (Interaction degree). The interaction degree among the propositions in a set S is

delta(S) := lev(S) - max(lev(p) for all p in S)

It should be noted that the notion of binary interaction degree is only a special case of the above definition for a set of two propositions. In addition, when there is no negative interaction among the subgoals, delta(S) = 0, as expected. We obtain the following heuristic:

hadjsum(S) := sum(cost(pi) for all pi in S) + delta(S)

Warning: The adjusted sum heuristic is not admissible.
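As a concrete illustration, the final estimate can be assembled from per-proposition costs and levels. The sketch below is a toy computation, not the PDDL4J implementation; the proposition names, costs, and levels are made up:

```java
import java.util.Map;

public class AdjustedSumSketch {

    // hsum(S): sum of the individual sum-heuristic costs cost(p) for p in S.
    static int hsum(Map<String, Integer> cost, String[] goal) {
        int sum = 0;
        for (String p : goal) {
            sum += cost.get(p);
        }
        return sum;
    }

    // delta(S) = lev(S) - max(lev(p) for p in S): the interaction degree.
    static int delta(Map<String, Integer> lev, int levOfSet, String[] goal) {
        int maxLev = Integer.MIN_VALUE;
        for (String p : goal) {
            maxLev = Math.max(maxLev, lev.get(p));
        }
        return levOfSet - maxLev;
    }

    public static void main(String[] args) {
        // Hypothetical values read off a relaxed planning graph.
        Map<String, Integer> cost = Map.of("p1", 1, "p2", 2, "p3", 3);
        Map<String, Integer> lev = Map.of("p1", 1, "p2", 2, "p3", 3);
        String[] goal = {"p1", "p2", "p3"};

        // Suppose the whole set is first (possibly) achieved at level 5,
        // two levels after its latest member first appears: delta(S) = 2.
        int levOfSet = 5;

        int hadjsum = hsum(cost, goal) + delta(lev, levOfSet, goal);
        System.out.println(hadjsum); // hadjsum = 6 + 2 = 8
    }
}
```

When delta(S) = 0 (the set is achievable at the level where its latest proposition first appears), the estimate reduces to the plain sum heuristic, as stated above.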
- See Also:
Sum, Max, Serialized Form
-
Nested Class Summary
Nested classes/interfaces inherited from interface fr.uga.pddl4j.heuristics.state.StateHeuristic
StateHeuristic.Name
-
Constructor Summary
AdjustedSum(Problem problem)
Creates a new AdjustedSum heuristic for a specified planning problem.
-
Method Summary
Methods inherited from class fr.uga.pddl4j.heuristics.state.RelaxedGraphHeuristic
expandRelaxedPlanningGraph, getMaxValue, getRelaxedPlanValue, getSumValue, isGoalReachable, setGoal
Methods inherited from class fr.uga.pddl4j.heuristics.state.AbstractStateHeuristic
getActions, getGoal, getRevelantFacts, isAdmissible, setAdmissible
Methods inherited from class java.lang.Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
Methods inherited from interface fr.uga.pddl4j.heuristics.state.StateHeuristic
isAdmissible
-
Constructor Details
-
AdjustedSum
Creates a new AdjustedSum heuristic for a specified planning problem.
- Parameters:
problem - the planning problem.
- Throws:
NullPointerException - if problem == null.
-
-
Method Details
-
estimate
Returns the estimated distance to the goal from the specified state. If the return value is Integer.MAX_VALUE, the goal is unreachable from the specified state.
- Parameters:
state - the state from which the distance to the goal must be estimated.
goal - the goal expression.
- Returns:
- the distance to the goal state from the specified state.
-
estimate
Returns the estimated distance to the goal from the specified state. If the return value is Double.MAX_VALUE, the goal is unreachable from the specified state.
- Parameters:
node - the state from which the distance to the goal must be estimated.
goal - the goal expression.
- Returns:
- the distance to the goal state from the specified state.
-