Positive and negative examples can be used to learn any ASP program consisting of normal rules, choice rules, disjunctive rules and hard constraints (up to strong equivalence). As positive and negative examples can only express what should or should not be an answer set of the learned program, they cannot be used to learn weak constraints, which do not affect what is or is not an answer set. Weak constraints create a preference ordering over the answer sets of a program, so in order to learn them we need to give examples of this preference ordering; that is, examples of which answer sets should be preferred to which other answer sets. These ordering examples come in two forms: brave orderings, which express that at least one pair of answer sets satisfying the semantic properties of a pair of positive examples is ordered in a particular way; and cautious orderings, which express that every such pair of answer sets should be ordered in that way.
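In ILASP, both forms of ordering example are written as directives over a pair of example identifiers and a comparison operator. As a schematic preview (eg_1 and eg_2 here are placeholder identifiers; the brave form is demonstrated in the scenario below):
#brave_ordering(eg_1, eg_2, <).
#cautious_ordering(eg_1, eg_2, <).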
Consider a scenario in which a user is planning journeys from one location to another. Every journey consists of several legs, on each of which the user may take one of various modes of transport. The other known attributes of each journey leg are its distance and the crime rating of the area it passes through (ranging from 0, meaning no crime, to 5, meaning extremely high crime). By offering the user various journey options and observing their choices, we can use ILASP to learn the preferences behind those choices. The options a user could take can be represented using context-dependent examples. Four such examples are shown below. Note that the first argument of each example is a unique identifier for the example. This identifier is optional, but is needed when expressing ordering examples.
#pos(eg_a, {}, {}, {
  leg_mode(1, walk).
  leg_crime_rating(1, 2).
  leg_distance(1, 500).
  leg_mode(2, bus).
  leg_crime_rating(2, 4).
  leg_distance(2, 3000).
}).
#pos(eg_b, {}, {}, {
  leg_mode(1, bus).
  leg_crime_rating(1, 2).
  leg_distance(1, 4000).
  leg_mode(2, walk).
  leg_crime_rating(2, 5).
  leg_distance(2, 1000).
}).
#pos(eg_c, {}, {}, {
  leg_mode(1, bus).
  leg_crime_rating(1, 2).
  leg_distance(1, 400).
  leg_mode(2, bus).
  leg_crime_rating(2, 4).
  leg_distance(2, 3000).
}).
#pos(eg_d, {}, {}, {
  leg_mode(1, bus).
  leg_crime_rating(1, 5).
  leg_distance(1, 2000).
  leg_mode(2, bus).
  leg_crime_rating(2, 1).
  leg_distance(2, 2000).
}).
By observing a user’s choices, we might see that the user prefers the journey represented by eg_a to the one represented by eg_b. This can be expressed in ILASP using an ordering example:
#brave_ordering(eg_a, eg_b, <).
This states that at least one answer set of the learned program combined with C_a must be preferred to at least one answer set of the learned program combined with C_b (where C_a and C_b are the contexts of the examples eg_a and eg_b, respectively, and in this simple case the background knowledge is empty). The final argument of the brave ordering is an operator, which says how the answer sets should be ordered. The operator < means strictly preferred. It is also possible to use any of the other binary comparison operators: >, <=, >=, = or !=. For instance, the following example states that the journeys represented by eg_c and eg_d should be equally preferred.
#brave_ordering(eg_c, eg_d, =).
By using several such ordering examples, it is possible to learn weak constraints corresponding to a user’s journey preferences. For example, the learning task journey.las causes ILASP to learn the following set of weak constraints:
:~ leg_mode(L, walk), leg_crime_rating(L, C), C > 3.[1@3, L, C]
:~ leg_mode(L, bus).[1@2, L]
:~ leg_mode(L, walk), leg_distance(L, D).[D@1, L, D]
These weak constraints represent that the user’s top priority is to minimise the number of legs of the journey in which the user must walk through an area with a high crime rating (above 3); their next priority is to minimise the number of legs taken by bus; and finally, their lowest priority is to minimise the total walking distance of their journey.
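To check that these constraints agree with the ordering examples above, we can work out the cost of each example journey at each priority level (lower cost is preferred, with higher levels taking precedence):
  eg_a: 0 at level 3 (no walking leg has a crime rating above 3); 1 at level 2 (one bus leg); 500 at level 1 (the walking distance of leg 1).
  eg_b: 1 at level 3 (leg 2 is a walking leg with crime rating 5); 1 at level 2 (one bus leg); 1000 at level 1.
At the highest level, eg_a costs 0 and eg_b costs 1, so eg_a is strictly preferred, as required by #brave_ordering(eg_a, eg_b, <). Similarly, eg_c and eg_d each cost 0 at level 3, 2 at level 2 (two bus legs) and 0 at level 1, so they are equally preferred, as required by #brave_ordering(eg_c, eg_d, =).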
Note that in the given scenario the learned program combined with each of the given contexts has exactly one answer set, meaning that brave and cautious orderings coincide. When programs have multiple answer sets the distinction is important: cautious orderings are much stronger than brave orderings, expressing that the preference ordering holds over every pair of answer sets that satisfies the semantic properties of the positive examples.
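For instance, a cautious version of the first ordering example above can be written with ILASP’s #cautious_ordering directive (a sketch, assuming it takes the same arguments as #brave_ordering):
#cautious_ordering(eg_a, eg_b, <).
This would require every answer set of the learned program combined with C_a to be strictly preferred to every answer set of the learned program combined with C_b. In this scenario, where each context yields a single answer set, it imposes exactly the same requirement as the brave version.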