Introduction to Artificial Intelligence

Assignment 6


Written assignment: inference and learning

Justify all answers!

1) Write down the axioms and facts for the example below in FOPC:
   The taxonomic hierarchy for operating systems contains WINDOWS and
   LINUX type systems, which are disjoint and exhaustive.
   WINDOWS type systems contain Windows XP and Windows NT systems.
   All WINDOWS operating systems are user-friendly and have
   lousy security. The system on my computer is LINUX. I hate all systems
   with lousy security. LINUX has good security. I always either
   hate an operating system or like it, but never both.
   Security of an OS cannot be both good and lousy.

   a) Convert the knowledge base into CNF.
   b) Show how this knowledge base should be indexed, using
      a table.
   c) Prove or disprove by using resolution (or argue that neither
      can be done):
      1) I hate all WINDOWS systems.
      2) If the system on my machine is Windows NT, then I like it.
      3) I do not hate the operating system on my computer.
      4) If I like all systems with good security, then I like the operating
         system on my computer.
      5) If a system is user-friendly then it has lousy security.
      SHOW ALL STEPS, including the relevant unifiers, etc.
   d) Would using INPUT RESOLUTION only change the answers to c?
   e) Convert the KB into implication normal form.
   f) Can you prove all the above using only either forward or
      backward chaining?
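The resolution steps for queries like c-1 can be checked mechanically. Below is a minimal sketch for the ground (propositional) case of c-1 over one arbitrary system; the literal names Win/Lousy/Hate are shorthand I chose, not part of the assignment's KB notation, and a full answer must of course show the first-order clauses and unifiers by hand.

```python
from itertools import combinations

def resolve(c1, c2):
    """Return all resolvents of two clauses (sets of signed literals)."""
    out = []
    for lit in c1:
        comp = lit[1:] if lit.startswith("~") else "~" + lit
        if comp in c2:
            out.append((c1 - {lit}) | (c2 - {comp}))
    return out

def resolution_refutes(clauses):
    """Saturate under resolution; True iff the empty clause is derivable."""
    clauses = set(map(frozenset, clauses))
    while True:
        new = set()
        for c1, c2 in combinations(clauses, 2):
            for r in resolve(c1, c2):
                if not r:          # derived the empty clause
                    return True
                new.add(frozenset(r))
        if new <= clauses:         # saturated without refutation
            return False
        clauses |= new

# Ground KB for one system w: Windows(w) -> Lousy(w), Lousy(w) -> Hate(w).
kb = [{"~Win", "Lousy"}, {"~Lousy", "Hate"}]
# Query c-1 ("I hate all WINDOWS systems"), negated: Windows(w), ~Hate(w).
negated_query = [{"Win"}, {"~Hate"}]
print(resolution_refutes(kb + negated_query))
```

The refutation succeeds ({Win} resolves with {~Win, Lousy} to {Lousy}, then {Hate}, then the empty clause against {~Hate}); the KB alone is consistent, so saturation without the negated query terminates with no empty clause.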

2) Consider the following variables and their immediate causes (parents):

     variable         parents
-------------------------------
        A               none
        E               none
        B               A
        C               E
        D               B C
        F               B C

    a) Is this network a poly-tree?
    b) Is this network directed-path singly connected?
    c) Determine the truth of the following independence
       statements (using d-separation):
       1)  I({A}, {E} | {})
       2)  I({A}, {E} | {D})
       3)  I({A}, {E} | {D, B})
       4)  I({A}, {E} | {D, B, F})
       5)  I({A}, {D, F} | {B})
       6)  I({A}, {F} | {B, C})
    d) Determine at least 2 different ways to convert the network into
       a poly-tree using cutset conditioning.
    e) Suppose that P(A) = 0.8, P(E) = 0.2, P(B|A) = 0.1, P(B|~A) = 0.8.
       Find P(A|E).
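The d-separation statements in part c can be verified mechanically via the moralized-ancestral-graph criterion (restrict to ancestors of the queried sets, marry co-parents, drop directions, delete the evidence nodes, and test connectivity). This is a sketch I wrote for checking hand answers, not a required part of the solution:

```python
from collections import deque

parents = {"A": [], "E": [], "B": ["A"], "C": ["E"],
           "D": ["B", "C"], "F": ["B", "C"]}

def ancestors(nodes):
    seen, stack = set(nodes), list(nodes)
    while stack:
        for p in parents[stack.pop()]:
            if p not in seen:
                seen.add(p); stack.append(p)
    return seen

def d_separated(xs, ys, zs):
    """Moralized-ancestral-graph test for I(xs, ys | zs)."""
    keep = ancestors(set(xs) | set(ys) | set(zs))
    adj = {v: set() for v in keep}          # undirected moral graph
    for v in keep:
        ps = [p for p in parents[v] if p in keep]
        for p in ps:
            adj[v].add(p); adj[p].add(v)
        for i in range(len(ps)):            # marry co-parents
            for j in range(i + 1, len(ps)):
                adj[ps[i]].add(ps[j]); adj[ps[j]].add(ps[i])
    blocked = set(zs)                       # delete evidence, then search
    frontier = deque(set(xs) - blocked)
    seen = set(frontier)
    while frontier:
        v = frontier.popleft()
        if v in ys:
            return False                    # connected: NOT d-separated
        for u in adj[v] - blocked - seen:
            seen.add(u); frontier.append(u)
    return True

print(d_separated({"A"}, {"E"}, set()))     # statement c-1
print(d_separated({"A"}, {"E"}, {"D"}))     # statement c-2
print(d_separated({"A"}, {"F"}, {"B", "C"}))  # statement c-6
```

Note that statement c-1 also bears directly on part e: if {A} and {E} are d-separated given {}, then P(A|E) = P(A).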

3) You need to construct a 4-input neural network that outputs 1 exactly
   when an even number of its inputs are 1, and 0 otherwise
   (assume inputs are in {0, 1}).
   a) Can this be done without hidden units?
   b) Show a network using threshold elements that performs
      the required computation (specify the weights, too!)

4) a) Using the perceptron learning rule (the "delta" rule), show the steps
      of learning 3-input function AND with a single perceptron
      and a step activation function with threshold 1. Initially all
      weights are 0.7, and the learning rate is 0.5.
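The update loop can be sketched as follows (assuming per-example updates and a lexicographic presentation order, which are my choices; the hand trace in your answer should show the same arithmetic):

```python
from itertools import product

def train_and(eta=0.5, theta=1.0, w0=0.7, max_epochs=20):
    """Perceptron rule w_i += eta*(t - y)*x_i with a fixed threshold."""
    w = [w0, w0, w0]
    for epoch in range(max_epochs):
        errors = 0
        for x in product((0, 1), repeat=3):
            t = int(all(x))                 # 3-input AND target
            y = 1 if sum(wi * xi for wi, xi in zip(w, x)) >= theta else 0
            if y != t:
                errors += 1
                w = [wi + eta * (t - y) * xi for wi, xi in zip(w, x)]
        if errors == 0:
            return w, epoch                 # epoch with no mistakes
    return w, max_epochs

w, epochs = train_and()
print(w, epochs)
```

With this presentation order the only misclassified example in the first epoch is (0,1,1), and the corrected weights already compute AND; a different order changes which updates occur, which is the point of part b.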

   b) Show (by example) that the order of presenting the examples can
      change the number of training episodes.

   c) Is it possible to combine back-propagation with genetic algorithms, and
      if so, how?


5) a) Construct by hand (greedy search using the best-information-gain
      heuristic) a decision tree for the following table, a set of
      examples for considering desirability of a potential employee in
      a high-tech company. Data is degree level (A),
      existence of work experience (B),
      self-motivation (C), and  last grade average (D).

     input variables               decision
   A      B      C      D
--------------------------------------------
   1      T      F      95             good
   1      T      F      80             good
   1      T      F      70             good
   1      T      T      70             good
   1      F      F      90             good
   2      F      T      80             bad
   2      F      T      95             bad
   2      T      F      70             average
   2      T      F      80             average

   b) Is a more compact decision tree possible? If yes, which tree?
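The first split can be checked mechanically. This sketch computes the information gain of each attribute over the nine rows (treating the grade average D as categorical, which is one possible modeling choice):

```python
from collections import Counter
from math import log2

rows = [  # (A, B, C, D, decision)
    (1, "T", "F", 95, "good"), (1, "T", "F", 80, "good"),
    (1, "T", "F", 70, "good"), (1, "T", "T", 70, "good"),
    (1, "F", "F", 90, "good"), (2, "F", "T", 80, "bad"),
    (2, "F", "T", 95, "bad"),  (2, "T", "F", 70, "average"),
    (2, "T", "F", 80, "average"),
]

def entropy(labels):
    n = len(labels)
    return -sum(c / n * log2(c / n) for c in Counter(labels).values())

def gain(attr):
    """Information gain of splitting on column index attr (0..3 = A..D)."""
    labels = [r[-1] for r in rows]
    split = Counter(r[attr] for r in rows)
    remainder = sum(cnt / len(rows) *
                    entropy([r[-1] for r in rows if r[attr] == val])
                    for val, cnt in split.items())
    return entropy(labels) - remainder

for name, i in zip("ABCD", range(4)):
    print(name, round(gain(i), 3))
```

A splits the data into a pure "good" branch (A=1) and a bad/average branch (A=2), so it dominates the other attributes at the root.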

6) A) Use situation calculus (i.e., the result function and fluents) to
   write down the axioms for the following simplified domain.
   The robot can grab an item, and as a result it will hold it.
   The robot can put item A inside item B, but only if it does not
   hold an item. In the initial situation, S0, the robot holds only item A.

   Actions are: 
   a) Insert: will cause item A to be inside item B, but only if the robot
      holds no items. 
   b) Grab(item1): if the robot is not holding any other item, the
      result will be that the robot will hold item1, and any other item
      inside item1.
    c) Release: as a result of this action, the robot will hold nothing.

    Fluents (situation-dependent predicates) are: 
        Inside(item1, item2, s)         (Item1 is inside item2)
        Holds(item, s)                  (The robot holds item)
        Success(s)                      (The robot holds both A and B)

    B) Now you need to prove that, starting from the initial situation S0,
       the action sequence:
            Release, Insert, Grab(B)
       results in success, that is, prove that:
         Success(result(Grab(B), result(Insert, result(Release, S0))))

    Can you do it with your original formulation? (Show the proof!)
    If not, add the requisite axioms and complete the proof.
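Before writing the axioms, the intended semantics can be sanity-checked with a small state simulation. This is a sketch only: the `apply` function below mirrors the informal action descriptions directly (my own encoding of state as a pair of frozensets), not the situation-calculus axioms themselves.

```python
def apply_action(action, state):
    """state = (holds, inside): the set of held items and the
    set of (inner, outer) containment pairs."""
    holds, inside = state
    if action == "Release":
        return frozenset(), inside
    if action == "Insert":                  # put A inside B
        if not holds:                       # only if holding nothing
            return holds, inside | {("A", "B")}
        return state
    if action[0] == "Grab":                 # action is ("Grab", item)
        item = action[1]
        if not holds:                       # only if holding nothing
            extra = {inner for inner, outer in inside if outer == item}
            return frozenset({item} | extra), inside
        return state
    raise ValueError(action)

def success(state):
    # Success: the robot holds both A and B
    return {"A", "B"} <= state[0]

s = (frozenset({"A"}), frozenset())         # S0: the robot holds only A
for a in ("Release", "Insert", ("Grab", "B")):
    s = apply_action(a, s)
print(success(s))  # -> True
```

The trace matches the intended proof obligation: Release empties the hand, Insert is then applicable and puts A inside B, and Grab(B) picks up B together with everything inside it, so the robot ends up holding both A and B.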



Deadline: Sunday, January 18, 2004, at 11AM.

Note: submission (as well as WORK) on this exercise is SOLO (that is, NOT in pairs or other groups of size > 1).