Assignment 1

Due: Tuesday 22 March 2011 Midnight

Natural Language Processing - Spring 2011 Michael Elhadad

This assignment covers the topic of statistical models of parts of speech tagging.

Submit your solution in the form of an HTML file, using the same CSS as this page, with code inside <pre> tags. Images should be submitted as PNG or JPG files. All the code should also be submitted as a separate folder containing everything needed to run the questions, organized in clearly documented functions.

Parts of Speech Tagging

  1. Data Exploration
    1. Gathering and cleaning up data
    2. Gathering basic statistics
  2. Fine-Grained Accuracy and Error Analysis
    1. Known vs. Unknown Accuracy
    2. Per Tag Precision and Recall
    3. Confusion Matrix
    4. Sensitivity to the Size and Structure of the Training Set: Cross-Validation
    5. Stratified Samples
  3. From Simplified Tags to Full Tags

Data Exploration

Gathering and Cleaning Up Data

When we discussed the task of POS tagging in class, we assumed the text comes in a "clean" form: segmented into sentences and words. We ran experiments on a clean corpus (correctly segmented) and obtained results of about 90% accuracy. In this question, we want to get a feel for how difficult it is to clean real-world data. Please read the tutorial in Chapter 3 of the NLTK book. This chapter explains how to access "raw text" and clean it up: remove HTML tags, segment it into sentences and words.

Look at the data of the Brown corpus as it is stored in the nltk_data folder (by default, it is in a folder named like C:\nltk_data\corpora\brown under Windows). The format of the corpus is quite simple. We will attempt to add a new "section" to this corpus.

Look at the following Python Library for Google Search. This library allows you to send queries to Google and download the results in a very simple manner from Python. To install this library, just unpack the zip file from the download site into your Python library folder (under /python/lib/xgoogle). Test the library by running a simple test such as:

from xgoogle.search import GoogleSearch, SearchError
try:
  gs = GoogleSearch("quick and dirty")
  gs.results_per_page = 50
  results = gs.get_results()
  for res in results:
    print res.title.encode('utf8')
    print res.desc.encode('utf8')
    print res.url.encode('utf8')
    print
except SearchError, e:
  print "Search failed: %s" % e

Choose a query to execute using the xgoogle package and gather about 10 hits. Download the pages that your query found. Use code similar to this script to clean up the HTML of these pages:

import nltk
from urllib import urlopen

url = "http://news.bbc.co.uk/2/hi/health/2284783.stm"
html = urlopen(url).read()
raw = nltk.clean_html(html)
tokens = nltk.word_tokenize(raw)

Save the resulting hits into clean text files. Then run the best POS tagger you have available from class on the resulting text files, using the simplified POS Brown tagset (19 tags). Save the tagged output into text files in the same format expected by the Brown corpus. You should gather about 50 sentences. Look at the Python code under \Python27\Lib\site-packages\nltk\corpus\reader\tagged.py to see how the nltk Brown corpus reader works, or read Chapter 3 of the book Python Text Processing with NLTK 2.0 Cookbook, Jacob Perkins, 2010.
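
As an illustration, here is a minimal sketch of how the tagging and saving step could look. The tagger variable, file names and output path are assumptions of this sketch; check an existing Brown corpus file (e.g. ca01) for the exact layout expected by the corpus reader (one sentence per line, tokens written as word/TAG):

import nltk

# Assumptions: 'tagger' is the best tagger trained in class,
# 'clean_files' is the list of text files produced by the cleaning step,
# 'out_path' is a hypothetical new file inside the Brown corpus folder.
def tag_and_save(clean_files, tagger, out_path='C:/nltk_data/corpora/brown/cz01'):
    out = open(out_path, 'w')
    for fname in clean_files:
        raw = open(fname).read()
        for sent in nltk.sent_tokenize(raw):
            tokens = nltk.word_tokenize(sent)
            tagged = tagger.tag(tokens)
            # One sentence per line, each token written as word/TAG
            out.write(' '.join('%s/%s' % (word, tag) for (word, tag) in tagged))
            out.write('\n')
    out.close()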

Finally, manually review the tagged text and fix the errors you find. Put the manually tagged file into the nltk_data Brown corpus folder, into one of the existing categories (or, if you are more ambitious, into a new category in addition to 'news', 'editorial'...). Make sure the nltk corpus reader can read the new text you have just added to the Brown corpus.

Review the tagging of the new text separately (two independent analyses) and compare your tagging results. Do you reach agreement on how to tag the text? Show the differences between each of your taggings and the tags produced by the automatic tagger. Report how long it took you to check the tagging of 50 sentences.

Report qualitatively on the errors you observe at each stage of this pipeline:

  1. Errors encountered while dealing with the Google search engine
  2. Errors encountered while downloading the material from the Google hits
  3. Errors encountered while cleaning up the HTML pages
  4. Errors encountered while segmenting the text into sentences and words
  5. Errors made by the automatic tagger

Gathering Basic Statistics

When we use a tagger that relies on lexical information (for each word form, the distribution of tags that can be assigned to that word), one measure of the complexity of the POS tagging task is the level of ambiguity of the word forms. In this question, we want to explore the level of ambiguity present in our dataset. For all of this question, use the full Brown corpus distributed as part of NLTK.

Write a function that plots the number of words having a given number of tags. The X-axis should show the number of tags and the Y-axis the number of words having exactly this number of tags. Use the following example from the NLTK book as an inspiration:

import nltk
from nltk.corpus import brown

def performance(cfd, wordlist):
    lt = dict((word, cfd[word].max()) for word in wordlist)
    baseline_tagger = nltk.UnigramTagger(model=lt, backoff=nltk.DefaultTagger('NN'))
    return baseline_tagger.evaluate(brown.tagged_sents(categories='news'))

def display():
    import pylab
    words_by_freq = list(nltk.FreqDist(brown.words(categories='news')))
    cfd = nltk.ConditionalFreqDist(brown.tagged_words(categories='news'))
    sizes = 2 ** pylab.arange(15)
    perfs = [performance(cfd, words_by_freq[:size]) for size in sizes]
    pylab.plot(sizes, perfs, '-bo')
    pylab.title('Lookup Tagger Performance with Varying Model Size')
    pylab.xlabel('Model Size')
    pylab.ylabel('Performance')
    pylab.show()
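
Building on this example, here is one possible sketch of the plot function (the name PlotNumberOfTags follows the expected usage shown further below; the exact plotting choices are assumptions):

import nltk
import pylab

def PlotNumberOfTags(tagged_words):
    # tagged_words: e.g. brown.tagged_words()
    cfd = nltk.ConditionalFreqDist(tagged_words)       # word -> frequency of each tag
    dist = nltk.FreqDist(len(cfd[word]) for word in cfd.conditions())
    xs = sorted(dist.keys())                           # observed numbers of tags
    pylab.plot(xs, [dist[x] for x in xs], '-bo')
    pylab.title('Number of Words per Degree of Ambiguity')
    pylab.xlabel('Number of tags')
    pylab.ylabel('Number of words')
    pylab.show()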

Write a Python function that finds words with more than N observed tags. The function should return a ConditionalFreqDist object where the conditions are the words and the frequency distribution indicates the tag frequencies for each word.

Write a test function that verifies that the words indeed have more than N distinct tags in the returned value.

Write a function that given a word, finds one example of usage of the word with each of the different tags in which it can occur.

# corpus can be the tagged_sentences or tagged_words according to what is most convenient
>>> PlotNumberOfTags(corpus)

...show a plot with axis: X - number of tags (1, 2...) and Y - number of words having this number of tags...

>>> cfd = MostAmbiguousWords(corpus, 4)
<ConditionalFreqDist with ... conditions>

>>> TestMostAmbiguousWords(cfd, 4)
All words occur with more than 4 tags.

>>> ShowExamples('book', cfd, corpus)
'book' as NN: ....
'book' as VB: ....

We expect this distribution to exhibit a "long tail" form. Do you confirm this hypothesis?
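
One possible sketch of these functions, assuming corpus is passed as tagged words for the first two and as tagged sentences for ShowExamples (all names and details below are assumptions, not the required implementation):

import nltk

def MostAmbiguousWords(tagged_words, n):
    # Return a ConditionalFreqDist restricted to words observed with more than n tags.
    cfd = nltk.ConditionalFreqDist(tagged_words)
    ambiguous = nltk.ConditionalFreqDist()
    for word in cfd.conditions():
        if len(cfd[word]) > n:
            for tag in cfd[word]:
                ambiguous[word].inc(tag, cfd[word][tag])
    return ambiguous

def TestMostAmbiguousWords(cfd, n):
    if all(len(cfd[word]) > n for word in cfd.conditions()):
        print "All words occur with more than %d tags." % n
    else:
        print "Some words occur with %d tags or fewer." % n

def ShowExamples(word, cfd, tagged_sents):
    # Print one example sentence for each tag observed with the word.
    shown = set()
    for sent in tagged_sents:
        for (w, tag) in sent:
            if w == word and tag not in shown:
                shown.add(tag)
                print "'%s' as %s: %s" % (word, tag, ' '.join(w2 for (w2, t2) in sent))
        if len(shown) == len(cfd[word]):
            break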

Fine-Grained Accuracy and Error Analysis

Known vs. Unknown Accuracy

In the review of the taggers done in class, we reported the accuracy of each tagger using the TaggerI.evaluate() method. This method computes the proportion of words correctly tagged in a test dataset.

We will now investigate more fine-grained accuracy metrics and error analysis tools.

One of the most challenging tasks for taggers that learn from a training set is to decide how to tag unknown words. Implement a function evaluate2(training_corpus) in the TaggerI interface that reports the accuracy of a trained tagger separately for known words and for unknown words. (Propose a design to identify known words for a trained tagger. Specify in detail what it means for a chain of backoff taggers to "know" a word from their training data.)

Test the evaluate2() method on each of the taggers discussed in class.
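
One possible design, sketched here as a standalone helper rather than a TaggerI method: treat a word as "known" if it appears in the training data seen by the backoff chain (known_words is assumed to be built accordingly):

def evaluate2(tagger, test_sents, known_words):
    # Returns (accuracy on known words, accuracy on unknown words).
    known_correct = known_total = unknown_correct = unknown_total = 0
    for sent in test_sents:
        words = [w for (w, t) in sent]
        for ((word, gold), (w2, guess)) in zip(sent, tagger.tag(words)):
            if word in known_words:
                known_total += 1
                known_correct += (guess == gold)
            else:
                unknown_total += 1
                unknown_correct += (guess == gold)
    return (known_correct / float(known_total),
            unknown_correct / float(unknown_total))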

Per Tag Precision and Recall

We are interested in checking the behavior of a tagger per tag. This indicates which tags are most difficult to distinguish from other tags. Write a function that reports precision and recall of a tagger per tag. These measures are defined as follows:
  1. Precision for tag T: out of the words the tagger tags as T, how many are indeed tagged as T in the test set.
  2. Recall for tag T: out of the words tagged as T in the test set, how many does the tagger tag as T.
Precision and Recall per tag can be computed as a function of the true positive, true negative, false positive and false negative counts for the tags:
  1. True positive count (TP): number of words tagged as T both in the test set and by the tagger.
  2. True negative count (TN): words tagged as non-T both in the test set and by the tagger.
  3. False positive count (FP): words tagged as non-T in the test set and as T by the tagger.
  4. False negative (FN): words tagged as T in the test set and as non-T by the tagger.
Since there is a natural trade-off between Precision and Recall, we often report a score that combines the two measures, called the F-measure. The formulas are:
Precision(T) = TP / (TP + FP)

Recall(T) = TP / (TP + FN)

F-Measure(T) = 2 x Precision x Recall / (Precision + Recall) = 2TP / (2TP + FP + FN)
All three measures are numbers between 0 and 1.

Add the function MicroEvaluate(corpus_test) to the TaggerI interface that computes for the tagger TP, TN, FP, FN, Precision, Recall and F-measure.
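
As a starting point, here is a minimal sketch of the per-tag counts and measures, written as a standalone function over a tagged test set (integrating it into the TaggerI interface, and accumulating TN, is left to you):

from __future__ import division

def per_tag_measures(tagger, test_sents):
    # Returns a dict: tag -> (precision, recall, f_measure).
    gold_tags, guess_tags = [], []
    for sent in test_sents:
        words = [w for (w, t) in sent]
        gold_tags.extend(t for (w, t) in sent)
        guess_tags.extend(t for (w, t) in tagger.tag(words))
    pairs = zip(gold_tags, guess_tags)
    measures = {}
    for tag in set(gold_tags) | set(guess_tags):
        tp = sum(1 for (g, p) in pairs if g == tag and p == tag)
        fp = sum(1 for (g, p) in pairs if g != tag and p == tag)
        fn = sum(1 for (g, p) in pairs if g == tag and p != tag)
        precision = tp / (tp + fp) if (tp + fp) else 0.0
        recall = tp / (tp + fn) if (tp + fn) else 0.0
        f = 2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 0.0
        measures[tag] = (precision, recall, f)
    return measures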

Propose a method to test these functions (think of extreme cases of taggers that would produce results with expected precisions or recalls).

Which tags are most difficult in the simplified tagset? In the full tagset?

Confusion Matrix

A valuable method for performing error analysis consists of computing the confusion matrix of a tagger. Consider an error committed by a tagger: a word is predicted as tag T1 where it should be tagged as T2. In other words, tag T2 is confused with T1. Note that confusion is not symmetric.

A confusion matrix tabulates all the mistakes committed by a tagger in the form of a matrix C[ti, tj]. C[ti, tj] counts the number of times the tagger predicted ti instead of tj. (Which NLTK data structure is appropriate for such a value?)

Write a method ConfusionMatrix(corpus_test) that returns such a matrix for a tagger.
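
One possible sketch, using a ConditionalFreqDist as the underlying data structure (all names below are assumptions; nltk.ConfusionMatrix is another option worth exploring):

import nltk

def ConfusionMatrix(tagger, test_sents):
    # cm[t1][t2] = number of times the tagger predicted t1 where the gold tag was t2.
    cm = nltk.ConditionalFreqDist()
    for sent in test_sents:
        words = [w for (w, t) in sent]
        for ((word, gold), (w2, predicted)) in zip(sent, tagger.tag(words)):
            cm[predicted].inc(gold)
    return cm

def most_common_confusions(cm, k=10):
    # The k most frequent (count, predicted, gold) confusions, ignoring correct predictions.
    errors = [(cm[t1][t2], t1, t2)
              for t1 in cm.conditions()
              for t2 in cm[t1]
              if t1 != t2]
    return sorted(errors, reverse=True)[:k]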

Validate the ConfusionMatrix() method over the DefaultTagger discussed in class.

Report the confusion matrix for the full tagset and simplified tagset of the Brown corpus for the last tagger discussed in class. Discuss the results: which pairs of tags are the most difficult to distinguish?

Given your observation on the most likely confusions, propose a simple (engineering) method to improve the results of your tagger. Implement this improvement and report on error reduction.

Sensitivity to the Size and Structure of the Training Set: Cross-Validation

The taggers we reviewed in class were trained on a data set then evaluated on a test set. We will now investigate how the results of the evaluation vary when we vary the size of the training set and the way we split our overall dataset between training and test sets.

We saw above a plot that shows how the accuracy of a unigram tagger improves as the size of the training set increases. Assume we are given a manually tagged corpus of N words. We want to train on one part and test on another, so we split the corpus in 2 parts. How should we split the dataset so that the test set is a good predictor of actual performance on unseen data?

The first method we will describe is called cross-validation: assume we decide to split our corpus with relative sizes of 90% training and 10% testing. How can we be sure the split on which we test is representative? The cross-validation process consists of splitting the data into 10 subsets of 10% each. We iterate the training/testing process 10 times, each time withholding one subset of 10% for testing and training on the other 9 subsets. We then report the accuracy results as a table with rows: i (iteration number), accuracy(i) -- and summarize with the combined accuracy averaged over the ten experiments. (The same procedure can be performed for any number of splits N.)

Implement a method crossValidate(corpus, n) for trainable taggers. Report the 10-fold cross-validation results for the last tagger discussed in class. Discuss the results.
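
A possible sketch, assuming the tagger is rebuilt from scratch on each fold by a train_function such as lambda train: nltk.UnigramTagger(train) (both names are assumptions of this sketch):

def crossValidate(train_function, tagged_sents, n=10):
    # n-fold cross-validation: returns the per-fold accuracies and their average.
    sents = list(tagged_sents)              # plain list, so slicing and concatenation work
    fold_size = len(sents) // n
    accuracies = []
    for i in range(n):
        test = sents[i * fold_size:(i + 1) * fold_size]
        train = sents[:i * fold_size] + sents[(i + 1) * fold_size:]
        tagger = train_function(train)
        accuracies.append(tagger.evaluate(test))
    return accuracies, sum(accuracies) / float(n)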

Stratified Samples

When we perform cross-validation, we split the corpus randomly into N parts. An important issue to consider is whether the corpus contains sentences that are uniformly difficult. Assume there are P classes of sentences in the corpus, each class being more or less difficult to tag. If we sample the test corpus out of the "easy" class, we will unfairly claim high accuracy results. One way to avoid such bias is to construct stratified testing datasets.

The procedure for constructing a stratified dataset consists of identifying P classes, then splitting each class separately. In this question, we will perform stratification along two dimensions: sentence length and genre.

The hypothesis we want to test is whether the length of a sentence (number of words) or its genre affects the results of the tagger.

We first define three classes of sentence lengths: short, medium and long. To decide on the exact definition of these classes, plot the distribution of sentences by length in the overall Brown corpus (all categories). The plot should show how many sentences occur in the corpus for each observed sentence length. Observe the plot and decide on cutoff values for the classes "short", "medium" and "long". Discuss how you made your decision.
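
A possible sketch of this plot, reusing the FreqDist/pylab pattern from the ambiguity plot above (the function name is an assumption):

import nltk
import pylab
from nltk.corpus import brown

def plot_sentence_lengths(sents):
    # e.g. plot_sentence_lengths(brown.sents())
    dist = nltk.FreqDist(len(sent) for sent in sents)
    lengths = sorted(dist.keys())
    pylab.plot(lengths, [dist[l] for l in lengths], '-bo')
    pylab.title('Number of Sentences per Sentence Length')
    pylab.xlabel('Sentence length (words)')
    pylab.ylabel('Number of sentences')
    pylab.show()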

Write a method to construct a stratified dataset given the classes: stratifiedSamples(classes, N=10). The method should return 2 values: the training subset and the test subset, each stratified according to the classes. For example, if N=10, the stratified test subset should contain 10% of each of the classes and the stratified training subset should contain 90% of each of the classes. As a consequence, both training and testing sets contain the same relative proportion of each class.
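
A possible sketch, where classes is a list of lists of tagged sentences, one list per class, and the length cutoffs below are placeholders to be replaced by the values you chose from the plot:

import random

def stratifiedSamples(classes, N=10):
    # Returns (train, test): 1/N of each class goes to the test set, the rest to training.
    train, test = [], []
    for cls in classes:
        cls = list(cls)
        random.shuffle(cls)
        cut = len(cls) // N
        test.extend(cls[:cut])
        train.extend(cls[cut:])
    random.shuffle(train)
    random.shuffle(test)
    return train, test

def length_classes(tagged_sents, short_max=8, medium_max=25):
    # Placeholder cutoffs: 'short' <= 8 words, 'medium' <= 25 words, 'long' otherwise.
    short = [s for s in tagged_sents if len(s) <= short_max]
    medium = [s for s in tagged_sents if short_max < len(s) <= medium_max]
    long_ = [s for s in tagged_sents if len(s) > medium_max]
    return [short, medium, long_]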

Perform a cycle of training-testing on the Brown corpus for the last tagger discussed in class for each of the following cases:

  1. Random split 90%-10%
  2. Stratified split 90%-10% according to sentence length (split short/medium/long)
  3. Stratified split 90%-10% according to the sentence genre. The Brown corpus contains sentences in each of the following categories (see brown.categories()): 'adventure', 'belles_lettres', 'editorial', 'fiction', 'government', 'hobbies', 'humor', 'learned', 'lore', 'mystery', 'news', 'religion', 'reviews', 'romance', 'science_fiction'.

Discuss the results you observe.

From Simplified Tags to Full Tags

The NLTK corpus reader allows us to access the Brown corpus tagged with a simplified tagset (of only 19 tags). On the one hand, tagging with fewer tags is easier - the perplexity of the task is reduced. On the other hand, we get fewer clues from the context when tagging a new word - so the task may end up a bit harder.

Compare the accuracy obtained on the full tagset and on the reduced tagset for the last tagger discussed in class. Explain the results through representative observations in the error analysis.

Optional

Assume we are given a sentence tagged according to the simplified tagset. How can we infer from this the full tagging of the sentence in the full tagset?

Define a tagger that maps simplified tags to full tags. Explain the intuition driving your method, the observations on a development set that support this intuition, and the learning method you use. Propose a baseline method, then an improved method. What error reduction do you obtain?
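
One possible baseline (an assumption, not necessarily the intended solution): for each (word, simplified tag) pair, predict the full tag most frequently observed with that pair in the training data, backing off to the most frequent full tag for the simplified tag alone. A sketch, assuming the NLTK 2.x simplify_tags=True option:

import nltk
from nltk.corpus import brown

def build_expander(categories='news'):
    # Baseline mapping from (word, simplified tag) to the most frequent full tag.
    full = brown.tagged_words(categories=categories)
    simple = brown.tagged_words(categories=categories, simplify_tags=True)
    pair_cfd = nltk.ConditionalFreqDist(
        ((word, s_tag), f_tag)
        for ((word, f_tag), (w2, s_tag)) in zip(full, simple))
    tag_cfd = nltk.ConditionalFreqDist(
        (s_tag, f_tag)
        for ((w1, f_tag), (w2, s_tag)) in zip(full, simple))

    def expand(simplified_sent):
        # simplified_sent: list of (word, simplified_tag) pairs
        result = []
        for (word, s_tag) in simplified_sent:
            if (word, s_tag) in pair_cfd:
                result.append((word, pair_cfd[(word, s_tag)].max()))
            elif s_tag in tag_cfd:
                result.append((word, tag_cfd[s_tag].max()))
            else:
                result.append((word, s_tag))
        return result
    return expand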


Last modified Mar 22, 2010