Introduction to Artificial Intelligence
Assignment 3 - solutions
Homework assignment - simulators, agents, search and logic
1) For each of the following domains, which type of agent is appropriate
(table lookup, reflex, goal based, or utility based)? EXPLAIN each
choice in about 2 sentences.
a) An agent that solves the 8 puzzle.
We can use any search-based agent, but the correct move depends only
on the current board position, so a reflex agent is OK. In fact, the
state space is small enough (9!/2 = 181,440 reachable states) that
TABLE LOOKUP is possible.
b) An agent that plays chess.
Again, a search-based agent, but in this case the game tree is far too
large for table lookup to be reasonable. A reflex agent is possible,
but goal-based is better. Taking into account the TIMING issue (moves
must be chosen under time limits), you will need a UTILITY-BASED
agent!
c) An agent that can pass the Turing test.
Will probably need to be utility-based. Anything less will not be
able to remember past actions correctly, and there are no clear
goals that can be formalized for this agent.
d) An autonomous robot for exploring Mars.
Also utility-based. If intermittent human intervention is the
mode of operation, goal-based will suffice.
2) We are given a search problem that has a path cost equal to the number
of operators applied on the path to reach the goal (i.e. the path length,
as in, e.g. the 8 puzzle). The branching factor is bounded by b (i.e. is
never more than b). We are given an admissible heuristic h that is
never wrong by more than 1. There is exactly ONE optimal solution. How
many expansion steps are needed in A* to find an optimal solution of
length d, in the worst case, if:
1) There is NO other solution of length less than d+2
2) There are k other solutions of length d+1
You need to PROVE your answer!
As defined above, the heuristic is either exact or underestimates
the number of required steps by 1. Thus:
1) In case 1, the number of expansion steps is exactly d - that is, a node
not on the optimal solution path is never expanded.
Proof: suppose n is a node that is not on the optimal solution path
but has been expanded by A*. Denote the depth of n by g(n). Since h
is below the true remaining distance by at most 1, the true distance
h*(n) is at most h(n)+1, i.e. there is a path from n to the goal of
length <= h(n)+1. There is thus a path to the goal through n of length
l(n) <= g(n)+h(n)+1 = f(n)+1. This path is not the optimal solution,
so l(n) >= d+2, and hence f(n) >= l(n)-1 >= d+1.
But for every node n' on the optimal path, we have
f(n') = g(n')+h(n') <= d, since by admissibility h(n') <= d-g(n').
We thus have f(n') < f(n), which means n could NOT have been expanded
before all the nodes on the optimal path have been expanded (including
the goal!) - a contradiction.
2) In case 2, likewise, every expanded node n that is not on the
optimal solution path must have f(n) <= d. By the argument above, such
a node must lie on a path to a solution of length at most
f(n)+1 <= d+1, i.e. it must lead to one of the k solutions of length
d+1 - nodes leading only to solutions of length >= d+2 are never
expanded. So each such node lies on one of k "false trails", with no
further false trails branching off it. At worst, these trails branch
off at the top level of the tree, each contributing up to d expanded
nodes beyond the root. Thus, the total number of expanded nodes is at
most (k+1)*d.
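Case 1 can also be checked empirically. Below is a minimal Python
sketch (a toy construction of my own, not part of the assignment): a
chain of d steps to the goal, where every chain node also starts a
"false trail" that reaches the goal only after d+3 steps in total, and
the heuristic underestimates the true distance by exactly 1. A*
expands exactly d nodes:

    import heapq

    def astar(start, goal, successors, h):
        # Tree-search A* with unit step costs; returns (cost, expansions).
        frontier = [(h(start), 0, start)]            # entries are (f, g, node)
        expansions = 0
        while frontier:
            f, g, node = heapq.heappop(frontier)
            if node == goal:
                return g, expansions
            expansions += 1
            for child in successors(node):
                heapq.heappush(frontier, (g + 1 + h(child), g + 1, child))

    d = 10
    goal = ('path', d)

    def successors(node):
        if node[0] == 'path' and node[1] < d:
            # next chain node, plus the start of a false trail
            return [('path', node[1] + 1), ('trail', node[1], 0)]
        if node[0] == 'trail':
            _, i, j = node
            # the trail from chain node i reaches the goal after d-i+2 more
            # steps, so the solution through it has length d+3 (>= d+2)
            return [('trail', i, j + 1)] if j < d - i + 1 else [goal]
        return []

    def h(node):
        # exact remaining distance minus 1 (admissible, off by exactly 1)
        if node[0] == 'path':
            return max(d - node[1] - 1, 0)
        _, i, j = node
        return d - i + 1 - j

    print(astar(('path', 0), goal, successors, h))   # (10, 10): d expansions

Grafting k trails that end in solutions of length d+1 onto the root
would let one probe the (k+1)*d bound of case 2 in the same way.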
3) Invent a heuristic function for the cannibals and missionaries problem
that sometimes over-estimates, and show a case where it can lead to a
sub-optimal solution where the path cost is the number of moves (or argue
that such a case does NOT exist DESPITE the over-estimate)!
Assuming that the path cost is the number of boat trips, clearly the
number of people on the wrong shore is not admissible - it overestimates
the number of required trips when e.g. the boat and 2 cannibals are
on the wrong shore (the heuristic returns 2, but 1 trip suffices,
since both cannibals can cross together).
In this case, the inadmissible heuristic does not cause a suboptimal
solution. This can be seen by mapping the entire problem space - it
has 32 states (i, j, l), with 0 <= i, j <= 3 and l either left or right.
Of these, 12 states are on an optimal solution path, 12 are illegal,
and 4 ((3c, 3m, L), (3c, 0m, R) and their symmetric states) are
unreachable. Of the remaining 4, 2 are on alternate optimal solution
paths: (2c 2m R) is an alternate for (1c 3m R), and symmetrically
(1c 1m L) is an alternate for (2c 0m L). The last 2 ((2c 3m R) and its
symmetric state) are dead ends. Since loops will not emerge as optimal
in A*, we still get the optimal solution.
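As a sanity check, this space is small enough to enumerate by machine.
Below is a minimal Python sketch (the encoding is mine, not part of
the assignment: a state (m, c, side) holds the missionaries and
cannibals on the left bank plus the boat's side) that confirms the
counts above: 20 legal states (32 minus the 12 illegal ones), 16 of
them reachable, and an optimal solution of 11 trips, i.e. 12 states:

    from collections import deque

    def legal(m, c):
        # missionaries never outnumbered on either bank (the right bank
        # holds 3-m missionaries and 3-c cannibals)
        return (m == 0 or m >= c) and (3 - m == 0 or 3 - m >= 3 - c)

    def successors(state):
        m, c, side = state
        sign = -1 if side == 'L' else 1      # the boat empties its own bank
        for dm, dc in [(1, 0), (2, 0), (0, 1), (0, 2), (1, 1)]:
            nm, nc = m + sign * dm, c + sign * dc
            if 0 <= nm <= 3 and 0 <= nc <= 3 and legal(nm, nc):
                yield (nm, nc, 'R' if side == 'L' else 'L')

    start, goal = (3, 3, 'L'), (0, 0, 'R')
    dist = {start: 0}
    queue = deque([start])
    while queue:                             # breadth-first search
        s = queue.popleft()
        for t in successors(s):
            if t not in dist:
                dist[t] = dist[s] + 1
                queue.append(t)

    n_legal = 2 * sum(legal(m, c) for m in range(4) for c in range(4))
    print(n_legal, len(dist), dist[goal])    # 20 legal, 16 reachable, 11 trips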
4) Give an example of a 4-ply game tree (branching factor 2)
where alpha-beta pruning saves NOTHING, and another where the savings
are maximal. How many nodes are pruned in each case?
The following tree is already sorted, and thus has optimal pruning
(drawn lying on its side, for representability in ASCII). Visited
nodes are shown as 0, pruned nodes as X.
MAX MIN MAX MIN
0----------0-----0--0-0 11
\ 11 11 \ 11\ \0 12
\ \ \
\ \ 0-0 9
\ \ \X 10
\ \
\ 0--0-0 15
\ >=15 \ \0 16
\ \
\ X-X 13
\ \X 14
0-----0--0-0 3
<=3\ <=3\ \X 4
\ \
\ 0-0 1
\ \X 2
\
X--X-X 7
\ \X 8
\
X-X 5
\X 6
Here 18 of the tree's 31 nodes are visited and 13 are pruned. If the
tree were in REVERSE order, nothing would be pruned - alpha-beta would
need to look at ALL 31 nodes.
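These counts can be verified mechanically. Below is a minimal
alpha-beta sketch (the nested-list encoding of the tree above is mine,
not part of the assignment) that counts visited nodes on the sorted
tree and on its mirror image:

    def alphabeta(node, alpha, beta, maximizing, visited):
        visited[0] += 1                      # count every node actually visited
        if not isinstance(node, list):       # a leaf: return its static value
            return node
        value = float('-inf') if maximizing else float('inf')
        for child in node:
            v = alphabeta(child, alpha, beta, not maximizing, visited)
            if maximizing:
                value = max(value, v)
                alpha = max(alpha, value)
            else:
                value = min(value, v)
                beta = min(beta, value)
            if alpha >= beta:                # cutoff: prune remaining children
                break
        return value

    def mirror(node):
        # reverse child order everywhere (best-case order -> worst-case order)
        return [mirror(c) for c in reversed(node)] if isinstance(node, list) else node

    # the 4-ply tree drawn above (root is MAX), in best-case (sorted) order
    best = [[[[11, 12], [9, 10]], [[15, 16], [13, 14]]],
            [[[3, 4], [1, 2]], [[7, 8], [5, 6]]]]

    for tree in (best, mirror(best)):
        visited = [0]
        value = alphabeta(tree, float('-inf'), float('inf'), True, visited)
        print(value, visited[0], 31 - visited[0])

The first line of output is 11 18 13 (18 of 31 nodes visited, 13
pruned); the second is 11 31 0 (everything visited, nothing pruned).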
5) Use truth tables to show that the following sentences are valid:
a) (P <=> Q) <=> ((~P v Q) ^ (P v ~Q))
P  Q  P<=>Q  ~PvQ  Pv~Q  (~PvQ)^(Pv~Q)
---------------------------------------
F  F    T     T     T          T
F  T    F     T     F          F
T  F    F     F     T          F
T  T    T     T     T          T
The P<=>Q and (~PvQ)^(Pv~Q) columns are identical in every row, so
the biconditional is valid.
b) ~(P v Q) <=> (~P ^ ~Q)
P  Q  PvQ  ~(PvQ)  ~P^~Q
-------------------------
F  F   F      T      T
F  T   T      F      F
T  F   T      F      F
T  T   T      F      F
The ~(PvQ) and ~P^~Q columns agree in every row, so the sentence is
valid.
c) (P => Q) <=> ~(P ^ ~Q)
P  Q  P=>Q  P^~Q  ~(P^~Q)
--------------------------
F  F   T     F       T
F  T   T     F       T
T  F   F     T       F
T  T   T     F       T
The P=>Q and ~(P^~Q) columns agree in every row, so the sentence is
valid.
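Such truth-table checks are easy to mechanize. Here is a short Python
sketch (the helper names are mine) that verifies all three sentences
by enumerating every truth assignment:

    from itertools import product

    def valid(sentence, n_vars):
        # True iff the sentence holds under every truth assignment
        return all(sentence(*v) for v in product([False, True], repeat=n_vars))

    iff = lambda a, b: a == b                  # biconditional
    implies = lambda a, b: (not a) or b        # material implication

    # a) (P <=> Q) <=> ((~P v Q) ^ (P v ~Q))
    print(valid(lambda p, q: iff(iff(p, q), (not p or q) and (p or not q)), 2))
    # b) ~(P v Q) <=> (~P ^ ~Q)
    print(valid(lambda p, q: iff(not (p or q), not p and not q), 2))
    # c) (P => Q) <=> ~(P ^ ~Q)
    print(valid(lambda p, q: iff(implies(p, q), not (p and not q)), 2))

All three print True.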
6) Which of the following are valid, satisfiable, or neither:
a) Good => Good VALID (equivalent to ~Good v Good)
b) Good => Bad SATISFIABLE but not valid (true e.g. when Bad is
true; false when Good is true and Bad is false)
c) Bad => Good SATISFIABLE but not valid (same argument)
d) (Bad => ~Good) => (Good => ~Bad) VALID (the conclusion is the
contrapositive of the premise)
e) (Good => ~Good) SATISFIABLE (true exactly when Good is false)
f) Big v Dumb v (Big => Dumb) VALID (if Big is true the first
disjunct holds; if Big is false the last one does)
g) Good v Bad v Ugly SATISFIABLE but not valid (false when all
three are false)
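The same enumeration settles this question. A small Python sketch
(again with my own helper names) that classifies each sentence:

    from itertools import product

    def classify(sentence, n_vars):
        # 'valid', 'satisfiable' (but not valid), or 'unsatisfiable'
        results = [sentence(*v) for v in product([False, True], repeat=n_vars)]
        if all(results):
            return 'valid'
        return 'satisfiable' if any(results) else 'unsatisfiable'

    implies = lambda a, b: (not a) or b

    print(classify(lambda good: implies(good, good), 1))               # valid
    print(classify(lambda good, bad: implies(good, bad), 2))           # satisfiable
    print(classify(lambda good, bad: implies(bad, good), 2))           # satisfiable
    print(classify(lambda good, bad: implies(implies(bad, not good),
                                             implies(good, not bad)), 2))  # valid
    print(classify(lambda good: implies(good, not good), 1))           # satisfiable
    print(classify(lambda big, dumb: big or dumb or implies(big, dumb), 2))  # valid
    print(classify(lambda g, b, u: g or b or u, 3))                    # satisfiable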
7) Represent the following sentences in FOPC - define the vocabulary first.
a) Not all students who pass algorithms pass compilers.
Use predicate: pass(student, course), constants: algorithms, compilers
~ forall x (pass(x, algorithms) -> pass(x, compilers))
(equivalently: exists x (pass(x, algorithms) ^ ~pass(x, compilers)))
b) Only one student passed Algorithms. (Assuming at least one DID pass!)
Using same predicates and constants:
exists x (pass(x, algorithms) ^ forall y (pass(y, algorithms) -> x=y))
c) Some politicians can fool all of the people all of the time.
Predicates: politician(x), person(x), can_fool(x, y, time), time(t)
exists x (politician(x) ^
          forall y, t ((person(y) ^ time(t)) -> can_fool(x, y, t)))
d) Every bird has 2 wings.
Predicates: has_part(object, part), bird(object), wing(object)
forall x (bird(x) -> exists y, z (has_part(x, y) ^ wing(y) ^
                                  has_part(x, z) ^ wing(z) ^ ~(y=z) ^
                                  forall w ((has_part(x, w) ^ wing(w))
                                            -> (w=y v w=z))))
e) There is a barber who shaves all men in town who do not shave
their barber.
Predicates: barber(x), shaves(x, y), in_town(x)
Function: barber_of(x)
exists x (barber(x) ^
          forall y ((in_town(y) ^ ~shaves(y, barber_of(y))) -> shaves(x, y)))
8) Write FOL sentences that define the effects of the "grab" action in the
wumpus world.
forall x, y, s holding(agent, gold, result(grab, s)) <-
at(agent, x, y, s) ^ at(gold, x, y, s)
9) In situation S0, the robot is in room R1, which contains box B1.
The move action moves the robot. If the robot has the box,
the box moves with the robot.
The pickup action results in the robot holding the box (has_box)
if the box is in the same room as the robot.
a) Write down the axioms for this mini-domain.
b) Can you prove that in_room(B1, R2, result(move, result(pickup, S0)))
with your axioms? If yes, do so. If not, what is missing?
c) Can you prove that in_room(B1, R2, result(move, result(wait,
result(pickup, S0))))
with your axioms? If yes, do so. If not, what is missing?
Predicates are: has_box(situation), in_room(object, room, situation).
Actions are: pickup, move, wait.
a) Axiom 1: picking up the box when in the same room
forall x, s has_box(result(pickup, s)) <-
in_room(B1, x, s) ^ in_room(robot, x, s)
Axiom 2: move results in moving the robot to room R2
forall s in_room(robot, R2, result(move, s))
Axiom 3: move does not affect the robot's holding the box
forall s has_box(result(move, s)) <-> has_box(s)
Axiom 4: if the robot is holding the box, the box is in the
same room as the robot.
forall s, r has_box(s) ^ in_room(robot, r, s) -> in_room(B1, r, s)
For situation S0 we have:
in_room(robot, R1, S0) ^ in_room(B1, R1, S0)
b) Yes, because from the initial conditions and axiom 1 we get
has_box(result(pickup, S0))
using axiom 2 with the above we get:
in_room(robot, R2, result(move, result(pickup, S0)))
and using axiom 3 we get that the robot still has the box
has_box(result(move, result(pickup, S0)))
Now, with axiom 4 and the last 2 facts we get:
in_room(B1, R2, result(move, result(pickup, S0)))
c) The extra action wait has no axioms, so we can say NOTHING about
what is true in the resulting situation. We need to add a frame axiom
stating that wait does not change has_box():
forall s has_box(s) -> has_box(result(wait, s))
With this axiom, the proof goes through as in (b), with the extra
step has_box(result(wait, result(pickup, S0))).
Also, if the action sequence were
pickup, move, wait, then we would also need to say that wait does
not move the robot! In general, we need to have axioms about
what every action does NOT do, as well as what it does - and the
representation in FOL is not very compact as a result! (The FRAME
PROBLEM.)
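The proofs in (b) and (c) can be illustrated with a small Python
sketch (the encoding is mine, not part of the assignment): each axiom
is applied as a forward rule, and only facts the axioms actually
license are carried from one situation to the next. Note that there
are no frame axioms for pickup either, so the robot's room is
"forgotten" after pickup - the proof of (b) does not need it.

    def derivable_facts(actions, wait_frame_axiom=False):
        # facts derivable after applying the actions to S0
        facts = {('in_room', 'robot', 'R1'), ('in_room', 'B1', 'R1')}
        for a in actions:
            new = set()
            if a == 'pickup':
                # Axiom 1: has_box if robot and box were in the same room
                robot = {t[2] for t in facts if t[:2] == ('in_room', 'robot')}
                box = {t[2] for t in facts if t[:2] == ('in_room', 'B1')}
                if robot & box:
                    new.add(('has_box',))
            elif a == 'move':
                new.add(('in_room', 'robot', 'R2'))   # Axiom 2
                if ('has_box',) in facts:             # Axiom 3
                    new.add(('has_box',))
            elif a == 'wait' and wait_frame_axiom:
                if ('has_box',) in facts:             # the added frame axiom
                    new.add(('has_box',))
            # Axiom 4: holding the box puts it in the robot's room
            if ('has_box',) in new:
                for room in {t[2] for t in new if t[:2] == ('in_room', 'robot')}:
                    new.add(('in_room', 'B1', room))
            facts = new
        return facts

    target = ('in_room', 'B1', 'R2')
    print(target in derivable_facts(['pickup', 'move']))                # True
    print(target in derivable_facts(['pickup', 'wait', 'move']))        # False
    print(target in derivable_facts(['pickup', 'wait', 'move'], True))  # True

Without the wait frame axiom the middle query fails, exactly as argued
above; adding it makes the three-action proof go through.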