Justify all answers!
1) Write down the axioms and facts for example below in FOPC:
The taxonomic hierarchy for furniture contains chairs, which
in turn contain stuffed chairs and plastic chairs (which are
disjoint). All chairs have 4 legs. Stuffed chairs have padding.
All chairs with padding are comfortable, unless they are broken.
The item of furniture in my office, which I call
"the contraption", is a stuffed chair.
a) Convert the knowledge base into normal form
b) Show, using a table, how this knowledge base should
   be indexed.
c) Prove or disprove by using resolution (or argue that neither
can be done):
1) The contraption is a chair.
2) The contraption is comfortable.
3) The contraption has padding.
   4) If the contraption is a plastic chair, then it is a dog.
5) The contraption has more than 1 leg.
SHOW ALL STEPS, including the relevant unifiers, etc.
(A unification sketch for cross-checking appears after part (e) below.)
d) Would using only INPUT RESOLUTION change the answers to (c)?
e) Can you prove all the above using only either forward or
backward chaining?
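
As an optional aid for part (c): the resolution steps require most general
unifiers of complementary literals. A minimal Python sketch of syntactic
unification is given below; the term encoding (variables as strings starting
with '?', compound terms as tuples) is an arbitrary illustrative choice, not
part of the exercise.

# Minimal syntactic unification sketch (for checking the unifiers in 1c).
# Terms: variables are strings starting with '?', e.g. '?x';
# compound terms are tuples ('Functor', arg1, ...); constants are plain strings.

def substitute(term, subst):
    """Apply a substitution (dict var -> term) to a term."""
    if isinstance(term, tuple):
        return tuple(substitute(t, subst) for t in term)
    if isinstance(term, str) and term.startswith('?') and term in subst:
        return substitute(subst[term], subst)
    return term

def occurs(var, term, subst):
    """Occurs check: does var appear in term under subst?"""
    term = substitute(term, subst)
    if term == var:
        return True
    if isinstance(term, tuple):
        return any(occurs(var, t, subst) for t in term)
    return False

def unify(x, y, subst=None):
    """Return a most general unifier of x and y extending subst, or None."""
    if subst is None:
        subst = {}
    x, y = substitute(x, subst), substitute(y, subst)
    if x == y:
        return subst
    if isinstance(x, str) and x.startswith('?'):
        if occurs(x, y, subst):
            return None
        return {**subst, x: y}
    if isinstance(y, str) and y.startswith('?'):
        return unify(y, x, subst)
    if isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y):
        for xi, yi in zip(x, y):
            subst = unify(xi, yi, subst)
            if subst is None:
                return None
        return subst
    return None  # clash between distinct constants/functors

# Example: unify StuffedChair(?x) with StuffedChair(contraption)
print(unify(('StuffedChair', '?x'), ('StuffedChair', 'contraption')))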
2) Consider the following variables and their immediate causes (parents):
   variable    parents
   --------------------
   A           none
   E           F
   B           A
   C           A
   D           none
   F           B, C, D
a) Is this network a poly-tree?
b) Determine the truth of the following independence
   statements, using d-separation (a checker sketch appears
   after part (d) below):
1) I({A}, {D} | {})
2) I({A}, {D} | {F})
3) I({A}, {D} | {F, B, C})
4) I({A}, {E} | {F})
5) I({A}, {C, F} | {B})
6) I({A}, {F} | {B, C})
c) Determine at least 2 different ways to convert the network into
a tree using clustering.
d) Suppose that P(A) = 0.9, P(D) = 0.3, P(B|A) = 0.1, P(B|~A) = 0.8,
   P(C|A) = 1, P(C|~A) = 0.5.
   Find P(A | B, C, E).
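
For part (d), it may help to start from the factorization implied by the
parent table,
   P(A,B,C,D,F,E) = P(A) P(D) P(B|A) P(C|A) P(F|B,C,D) P(E|F),
and then use the d-separation results from part (b) to see which factors
are actually needed for P(A | B, C, E).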
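
The d-separation answers in part (b) can also be cross-checked mechanically.
The Python sketch below (illustrative only) uses the moralized-ancestral-graph
criterion: X and Y are d-separated by Z iff Z separates X from Y in the moral
graph of the subgraph induced by the ancestors of X, Y and Z.

# Cross-check for 2(b): d-separation via the moralized ancestral graph.

from itertools import combinations

parents = {            # the network of question 2
    'A': [], 'D': [],
    'B': ['A'], 'C': ['A'],
    'F': ['B', 'C', 'D'],
    'E': ['F'],
}

def ancestors(nodes):
    """All nodes in `nodes` plus their ancestors."""
    result, stack = set(), list(nodes)
    while stack:
        n = stack.pop()
        if n not in result:
            result.add(n)
            stack.extend(parents[n])
    return result

def d_separated(X, Y, Z):
    keep = ancestors(set(X) | set(Y) | set(Z))
    # Moral graph on the ancestral subgraph: undirected parent-child edges
    # plus edges between co-parents ("marrying" the parents of each node).
    adj = {n: set() for n in keep}
    for child in keep:
        ps = [p for p in parents[child] if p in keep]
        for p in ps:
            adj[child].add(p); adj[p].add(child)
        for p, q in combinations(ps, 2):
            adj[p].add(q); adj[q].add(p)
    # X and Y are d-separated by Z iff no path from X to Y avoids Z.
    frontier, seen = [n for n in X if n not in Z], set(Z)
    while frontier:
        n = frontier.pop()
        if n in Y:
            return False
        if n in seen:
            continue
        seen.add(n)
        frontier.extend(m for m in adj[n] if m not in seen)
    return True

# Example: statement (1), I({A}, {D} | {})
print(d_separated({'A'}, {'D'}, set()))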
3) You need to construct a 4-input neural network that outputs 1
   just when exactly 2 or 3 of its inputs are 1, and 0 otherwise
   (assume inputs are in {0, 1}).
a) Can this be done without hidden units?
   (A brute-force sanity check appears after part (b) below.)
b) Show a network using threshold elements that performs
the required computation (specify the weights, too!)
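
One way to sanity-check part (a) is a brute-force search for a single
threshold unit over small integer weights. The sketch below only tries
weights in [-3, 3] and thresholds in [-6, 6], so an empty result is evidence
rather than a proof; the written answer should argue via linear separability.

# Sanity check for 3(a): search for one threshold unit computing
# "exactly 2 or 3 of the 4 inputs are 1".

from itertools import product

def target(x):
    return 1 if sum(x) in (2, 3) else 0

inputs = list(product((0, 1), repeat=4))   # all 16 input vectors

def unit_matches(weights, threshold):
    """Does a single threshold unit with these parameters reproduce target?"""
    return all(
        (1 if sum(w * xi for w, xi in zip(weights, x)) >= threshold else 0) == target(x)
        for x in inputs
    )

found = [
    (w, t)
    for w in product(range(-3, 4), repeat=4)
    for t in range(-6, 7)
    if unit_matches(w, t)
]
print("single-unit solutions found:", found)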
4) a) Using the perceptron learning rule (the "delta" rule), show the steps
   of learning the 3-input OR function with a single perceptron
   and a step activation function with threshold 1. Initially all
   weights are 1, and the learning rate is 0.4. (A simulation sketch
   for cross-checking appears after part (b) below.)
b) Is it possible to combine back-propagation with genetic algorithms, and
if so, how?
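
For cross-checking the hand trace in part (a), a short simulation is
sketched below. It assumes (this is not stated in the exercise) that the
output is 1 when the weighted sum is at least the threshold, that the
threshold stays fixed at 1 with no bias weight, and that the eight examples
are presented in binary counting order; adjust to the conventions used in
your own trace.

# Cross-check for 4(a): perceptron (delta) rule on 3-input OR.

from itertools import product

threshold = 1.0
eta = 0.4                      # learning rate
w = [1.0, 1.0, 1.0]            # initial weights

examples = [(x, 1 if any(x) else 0) for x in product((0, 1), repeat=3)]  # 3-input OR

for epoch in range(5):
    errors = 0
    for x, t in examples:
        y = 1 if sum(wi * xi for wi, xi in zip(w, x)) >= threshold else 0
        if y != t:
            errors += 1
            # delta rule: w_i <- w_i + eta * (t - y) * x_i
            w = [wi + eta * (t - y) * xi for wi, xi in zip(w, x)]
        print(f"epoch {epoch}, x={x}, target={t}, output={y}, weights={w}")
    if errors == 0:            # stop once an epoch makes no mistakes
        break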
5) a) Construct by hand (using greedy search with the best-information-gain
   heuristic) a decision tree for the following table, a set of
   examples for assessing the desirability of a potential employee in
   a high-tech company. The attributes are degree level (A),
   existence of work experience (B),
   self-motivation (C), and last grade average (D).
   (An information-gain calculation sketch appears after part (c) below.)
      input variables           decision
      A     B     C     D
   ------------------------------------------
      1     T     F     85      good
      1     T     F     80      good
      1     T     F     70      good
      1     T     T     70      good
      1     F     F     90      good
      1     F     T     80      bad
      1     F     T     85      bad
      2     T     F     70      average
      2     T     F     80      average
b) After constructing the tree, one can try to prune nodes from the
bottom up, based on least information gain. Which node would be pruned
first in this case?
c) Is a more compact decision tree possible? Which tree?
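
The root split in part (a) is chosen by comparing information gain over the
four attributes. The sketch below computes the gains directly from the table;
it treats the numeric grade average D as categorical (one branch per distinct
value), which is an assumption - the exercise may intend threshold splits
for D instead.

# Information-gain bookkeeping for 5(a).

from collections import Counter
from math import log2

# (A, B, C, D, decision) rows from the table above
data = [
    (1, 'T', 'F', 85, 'good'),
    (1, 'T', 'F', 80, 'good'),
    (1, 'T', 'F', 70, 'good'),
    (1, 'T', 'T', 70, 'good'),
    (1, 'F', 'F', 90, 'good'),
    (1, 'F', 'T', 80, 'bad'),
    (1, 'F', 'T', 85, 'bad'),
    (2, 'T', 'F', 70, 'average'),
    (2, 'T', 'F', 80, 'average'),
]

def entropy(rows):
    counts = Counter(r[-1] for r in rows)
    total = len(rows)
    return -sum(c / total * log2(c / total) for c in counts.values())

def info_gain(rows, attr):
    """Gain of splitting `rows` on attribute index `attr` (0..3)."""
    total = len(rows)
    remainder = 0.0
    for value in set(r[attr] for r in rows):
        subset = [r for r in rows if r[attr] == value]
        remainder += len(subset) / total * entropy(subset)
    return entropy(rows) - remainder

for name, idx in [('A', 0), ('B', 1), ('C', 2), ('D', 3)]:
    print(f"gain({name}) = {info_gain(data, idx):.3f}")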
Deadline: Sunday, January 12, 2003 at 10 AM.
Note: submission (as well as WORK) on this exercise is SOLO (that is, NOT in pairs or other groups of size > 1)