Assignment 2

Due: Mon 28 Dec 2015 Midnight

Natural Language Processing - Fall 2016 - Michael Elhadad

This assignment covers the topics of statistical distributions, regression and classification. The objectives are:

  1. Experiment with and evaluate classifiers for the tasks of named entity recognition and document classification.
  2. Explore the task of Document Classification, comparing email spam detection, SMS spam detection and news categorization.
  3. Explore the problem of domain adaptation by comparing the performance of classifiers trained in one domain when tested in another.
  4. Explore the task of Named Entity Recognition (NER): which features work for this task, and which classifier algorithms help - logistic regression, Naive Bayes and HMM.
  5. Use pre-trained word embeddings and measure whether they help for the task of NER.

Make sure you have installed scikit-learn and pandas to work on this assignment.

Submit your solution by email in the form of an IPython notebook (.ipynb) file. Images should be attached as PNG or JPG files. The code should also be submitted as a separate folder containing everything needed to run the questions, organized in clearly documented functions callable from a standalone Python shell with nltk, scipy and numpy pre-installed.


Q1. Document Classification

Q1.1. Reuters Dataset

Execute the Scikit-Learn tutorial on text classification: out-of-core classification. Your task:
  1. Turn the code of the Sklearn tutorial above into a notebook.
  2. Explore how many documents are in the dataset, how many categories, and how many documents per category; provide the mean, standard deviation, min and max. (Hint: use the pandas library to explore the dataset, in particular the dataframe.describe() method; a sketch follows this list.)
  3. Explore how many characters and words are present in the documents of the dataset.
  4. Explain informally which classifiers support the partial_fit method discussed in the code.
  5. Explain what the hashing vectorizer used in this tutorial is. Why is it important to use this vectorizer to achieve "streaming classification"?
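For items 2 and 3, a minimal sketch of this kind of exploration with pandas is shown below; the in-memory docs list is a toy stand-in for the dictionaries yielded by the tutorial's streaming Reuters parser:

import pandas as pd

# Toy stand-in: in practice, collect these dicts from the tutorial's
# streaming Reuters parser (one dict per document).
docs = [
    {"body": "Oil prices rose sharply today.", "topics": ["crude"]},
    {"body": "Wheat exports fell in March.", "topics": ["wheat", "grain"]},
]

df = pd.DataFrame({
    "n_chars": [len(d["body"]) for d in docs],
    "n_words": [len(d["body"].split()) for d in docs],
    "n_topics": [len(d["topics"]) for d in docs],
})
print(len(docs), "documents")
print(df.describe())  # mean, std, min and max of chars/words/topics per doc

# Documents per category: flatten the topic lists and count occurrences.
topic_counts = pd.Series([t for d in docs for t in d["topics"]]).value_counts()
print(len(topic_counts), "categories")
print(topic_counts.describe())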

Q1.2 Spam Dataset

Execute the notebook on Spam detection prepared in spam-classification (download the notebook). Your task:
  1. The vectorizer used in Zac Stewart's code is a CountVectorizer with unigrams and bigrams. Report the number of unigrams and bigrams used in this model.
  2. What are the 50 most frequent unigrams and bigrams in the dataset?
  3. What are the 50 most frequent unigrams and bigrams per class (ham and spam)?
  4. List the 20 most useful features in the Naive Bayes classifier to distinguish between spam and ham (20 features for each class). (See Document classification for 20 Newsgroup example for a method that identifies the top-10 features in a linear classifier.)
  5. There seems to be an imbalance in the length of spam and ham messages (see the plot in the attached notebook). We want to add a feature based on the number of words in the message to the text representation. Should the length attribute be normalized before fitting the Naive Bayes classifier? (See Sklearn pre-processing for examples.) Do you expect Logistic Regression to perform better with the new feature? Explain.
  6. Add the document length as a feature to the model. Does this new feature help? Use Sklearn FeatureUnion to combine the output of different vectorizers into a single vector; see for example Pipelines of feature unions by Zac Stewart, and Feature Union with Heterogeneous Data Sources from SkLearn's documentation. A sketch of such a combination follows this list.
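For item 6, the sketch below combines unigram/bigram counts with a word-count feature through FeatureUnion; the LengthExtractor transformer and the toy texts are illustrative, not taken from Zac Stewart's notebook:

import numpy as np
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import FeatureUnion, Pipeline

class LengthExtractor(BaseEstimator, TransformerMixin):
    """Emit the word count of each document as a single numeric column."""
    def fit(self, X, y=None):
        return self
    def transform(self, X):
        return np.array([[len(doc.split())] for doc in X], dtype=float)

pipeline = Pipeline([
    ("features", FeatureUnion([
        ("ngrams", CountVectorizer(ngram_range=(1, 2))),
        ("length", LengthExtractor()),
    ])),
    ("clf", LogisticRegression()),
])

texts = ["free prize call now", "see you at lunch"]   # toy stand-ins
labels = ["spam", "ham"]
pipeline.fit(texts, labels)
print(pipeline.predict(["call now for your free prize"]))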

Q1.3 SMS Spam Dataset

Execute the notebook on SMS Spam detection available in Practical Data Science in Python by Radim Rehurek (notebook and data available here). Your task:
  1. Test the classifier trained on email data in 1.2 on the SMS data.
  2. Train a classifier on the SMS data and test it on the email data from 1.2.
  3. Measure the vocabulary and bigram mismatch between the two datasets: that is, find how many features are shared between the two datasets, how many appear only in one, and how many only in the other (see the sketch after this list).
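For item 3, one way to measure the mismatch is to fit one CountVectorizer per corpus and compare the induced feature sets; the toy message lists here stand in for the real corpora:

from sklearn.feature_extraction.text import CountVectorizer

email_texts = ["free prize, call now", "meeting at noon"]  # toy stand-ins
sms_texts = ["free entry in a prize draw", "call me now"]

def feature_set(texts):
    """Fit a unigram+bigram vectorizer and return its features as a set."""
    vec = CountVectorizer(ngram_range=(1, 2))
    vec.fit(texts)
    return set(vec.get_feature_names())

email_feats = feature_set(email_texts)
sms_feats = feature_set(sms_texts)

print("shared:", len(email_feats & sms_feats))
print("email only:", len(email_feats - sms_feats))
print("sms only:", len(sms_feats - email_feats))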

Q2. Named Entity Recognition

Named Entity Recognition

The task of Named Entity Recognition (NER) involves the recognition of names of persons, locations, organizations, dates in free text. For example, the following sentence is tagged with sub-sequences indicating PER (for persons), LOC (for location) and ORG (for organization):
Wolff, currently a journalist in Argentina, played with Del Bosque in the final years of the seventies in Real Madrid.

[PER Wolff ] , currently a journalist in [LOC Argentina ] , played with [PER Del Bosque ] in the final years of the seventies in [ORG Real Madrid ] .
NER involves 2 sub-tasks: identifying the boundaries of such expressions (the open and close brackets) and labelling the expressions (with tags such as PER, LOC or ORG). This sequence labelling task is mapped to a per-token classification task using the BIO encoding of the data:
        Wolff B-PER
            , O
    currently O
            a O
   journalist O
           in O
    Argentina B-LOC
            , O
       played O
         with O
          Del B-PER
       Bosque I-PER
           in O
          the O
        final O
        years O
           of O
          the O
    seventies O
           in O
         Real B-ORG
       Madrid I-ORG
            . O

Dataset

The dataset we will use for this question is derived from the CoNLL 2002 shared task - which is about NER in Spanish and Dutch. The dataset is included in the NLTK distribution. Explanations on the dataset are provided in the CoNLL 2002 page.

To access the data in Python, do:

from nltk.corpus import conll2002

etr = conll2002.chunked_sents('esp.train') # In Spanish
eta = conll2002.chunked_sents('esp.testa') # In Spanish
etb = conll2002.chunked_sents('esp.testb') # In Spanish

dtr = conll2002.chunked_sents('ned.train') # In Dutch
dta = conll2002.chunked_sents('ned.testa') # In Dutch
dtb = conll2002.chunked_sents('ned.testb') # In Dutch

The data consists of three files per language (Spanish and Dutch): one training file and two test files, testa and testb. The first test file is to be used in the development phase for finding good parameters for the learning system. The second test file will be used for the final evaluation.
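For the classification questions below, the plain IOB view of the corpus is more convenient than the chunked trees; the same corpus reader exposes it through iob_sents, which yields each sentence as a list of (word, POS, IOB-tag) triples:

from nltk.corpus import conll2002

# Each sentence is a list of (word, POS, IOB-tag) triples.
for word, pos, tag in conll2002.iob_sents('esp.train')[0]:
    print(word, pos, tag)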

Q2.1 Features

Your task consists of:
  1. Choose good features for encoding the problem.
  2. Encode your training dataset.
  3. Run a classifier over the training dataset.
  4. Train and test the model.
  5. Perform error analysis and fine-tune model parameters on the testa part of the dataset.
  6. Perform evaluation over the testb part of the dataset, reporting on accuracy, per-label precision, per-label recall, per-label F-measure, and the confusion matrix.
Here is a list of features that have been found appropriate for NER in previous work:
  1. The word form (the string as it appears in the sentence)
  2. The POS of the word (which is provided in the dataset)
  3. ORT - a feature that captures the orthographic (letter) structure of the word. It can have any of the following values: number, contains-digit, contains-hyphen, capitalized, all-capitals, URL, punctuation, regular. (A sketch of one possible extractor follows this list.)
  4. prefix1: first letter of the word
  5. prefix2: first two letters of the word
  6. prefix3: first three letters of the word
  7. suffix1: last letter of the word
  8. suffix2: last two letters of the word
  9. suffix3: last three letters of the word
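
A sketch of one possible ORT extractor is given below; the precedence of the tests (for example, whether a lone hyphen counts as punctuation or as contains-hyphen) is a design choice you may revisit:

import re

def ort(word):
    """Map a token to one of the orthographic classes listed above."""
    if re.fullmatch(r"\d+([.,]\d+)*", word):
        return "number"
    if word.lower().startswith(("http://", "https://", "www.")):
        return "URL"
    if re.fullmatch(r"[^\w\s]+", word):
        return "punctuation"
    if "-" in word:
        return "contains-hyphen"
    if any(c.isdigit() for c in word):
        return "contains-digit"
    if word.isupper():
        return "all-capitals"
    if word[0].isupper():
        return "capitalized"
    return "regular"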

For example, given the following toy training data, the encoding of the features would be:

        Wolff NP  B-PER
            , ,   O
    currently RB  O
            a AT  O
   journalist NN  O
           in IN  O
    Argentina NP  B-LOC
            , ,   O
       played VBD O
         with IN  O
          Del NP  B-PER
       Bosque NP  I-PER
           in IN  O
          the AT  O
        final JJ  O
        years NNS O
           of IN  O
          the AT  O
    seventies NNS O
           in IN  O
         Real NP  B-ORG
       Madrid NP  I-ORG
            . .   O

Classes
1 B-PER
2 I-PER
3 B-LOC
4 I-LOC
5 B-ORG
6 I-ORG
7 O

Feature WORD-FORM:
1 Wolff
2 ,
3 currently
4 a
5 journalist
6 in
7 Argentina
8 played
9 with
10 Del
11 Bosque
12 the
13 final
14 years
15 of
16 seventies
17 Real
18 Madrid
19 .

Feature POS
20 NP
21 ,
22 RB
23 AT
24 NN
25 VBD
26 JJ
27 NNS
28 .

Feature ORT
29 number
30 contains-digit
31 contains-hyphen
32 capitalized
33 all-capitals
34 URL
35 punctuation
36 regular

Feature Prefix1
37 W
38 ,
39 c
40 a
41 j
42 i
43 A
44 p
45 w
46 D
47 B
48 t
49 f
50 y
51 o
52 s
53 .
Given this encoding, we can compute the vector representing the first word "Wolff NP B-PER" as:
# Class: B-PER=1
# Word-form: Wolff=1
# POS: NP=20
# ORT: Capitalized=32
# prefix1: W=37
1 1:1 20:1 32:1 37:1
When you encode the test dataset, some of the word-forms will be unknown (not seen in the training dataset). You should therefore plan for a special "unknown" value for each feature where this is expected.

Instead of writing the code as explained above, use the Scikit-learn vectorizer and pipeline library. Learn how to use the DictVectorizer for this specific task. Hashing vs. DictVectorizer also provides useful background.
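As a toy illustration of what DictVectorizer does with the encoding above (the feature names are arbitrary):

from sklearn.feature_extraction import DictVectorizer

train_feats = [
    {"word": "Wolff", "pos": "NP", "ort": "capitalized", "prefix1": "W"},
    {"word": ",", "pos": ",", "ort": "punctuation", "prefix1": ","},
]
vec = DictVectorizer()
X_train = vec.fit_transform(train_feats)   # sparse one-hot matrix

# At test time, transform() silently drops feature values never seen during
# training, which plays the role of the "unknown" value discussed above.
X_test = vec.transform([{"word": "Quilmes", "pos": "NP", "ort": "capitalized"}])
print(X_test.toarray())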

You can start from the following example notebook CoNLL 2002 Classification with CRF. You do not need to install Python-CRFSuite - just take this notebook as a starting point to explore the dataset and ways to encode features.

We implement here a version of NER based on "greedy tagging" (that is, without optimizing over the sequence of tags as we would by training an HMM or CRF model). Train the model using a logistic regression classifier and experiment with better features - looking at the tag of the previous word, the previous word itself, and the following word (add padding words in the vectorizer).
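
Below is a minimal end-to-end sketch of such a greedy tagger; the helper names and the feature set are illustrative starting points, not a reference solution:

from nltk.corpus import conll2002
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

def features(sent, i, prev_tag):
    """Features for token i of a (word, POS, tag) sentence, with padding."""
    word, pos = sent[i][0], sent[i][1]
    prev_word = sent[i - 1][0] if i > 0 else "<S>"
    next_word = sent[i + 1][0] if i < len(sent) - 1 else "</S>"
    return {
        "word": word, "pos": pos,
        "prefix3": word[:3], "suffix3": word[-3:],
        "prev_word": prev_word, "next_word": next_word,
        "prev_tag": prev_tag,
        "capitalized": word[0].isupper(),
    }

X_dicts, y = [], []
for sent in conll2002.iob_sents("esp.train"):
    prev_tag = "<S>"
    for i, (_, _, tag) in enumerate(sent):
        X_dicts.append(features(sent, i, prev_tag))
        y.append(tag)
        prev_tag = tag                 # gold previous tag at training time

vec = DictVectorizer()
clf = LogisticRegression()
clf.fit(vec.fit_transform(X_dicts), y)

def tag_sentence(sent):
    """Greedy left-to-right decoding: feed each predicted tag forward."""
    tags, prev_tag = [], "<S>"
    for i in range(len(sent)):
        pred = clf.predict(vec.transform([features(sent, i, prev_tag)]))[0]
        tags.append(pred)
        prev_tag = pred
    return tags

print(tag_sentence(conll2002.iob_sents("esp.testa")[0]))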

Q2.2 Using Word Embeddings (Optional)

One way to improve a greedy tagger for NER is to use Word Embeddings as features. A convenient package to manipulate Word2Vec word embeddings is provided in the gensim package by Radim Rehurek. To install it, use:
# conda install gensim
You must also download a pre-trained word2vec word embedding model from the Word2Vec site. The largest model is the GoogleNews-vectors-negative300.bin (1.5GB compressed file). To load it in Python use the following code (this requires about 8GB of RAM on your machine to work properly):
from gensim.models import Word2Vec
model_path = "GoogleNews-vectors-negative300.bin"
model = Word2Vec.load_word2vec_format(model_path, binary=True)

# A dense vector of 300 dimensions representing the word 'queen'
print(model["queen"])

stringA = 'woman'
stringB = 'king'
stringC = 'man'
print(model.most_similar(positive=[stringA, stringB], negative=[stringC], topn=10))
Your task:
  1. Add the word2vec embeddings as dense vectors to the features of your NER classifier for each word feature (current word, previous word, next word); see the sketch after this list.
  2. Retrain the model and report on performance.
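
A sketch of item 1 follows; words, prev_words, next_words and X_dicts are assumed to come from your feature-extraction step (toy stand-ins are shown), and model is the word2vec model loaded above:

import numpy as np
from scipy.sparse import csr_matrix, hstack
from sklearn.feature_extraction import DictVectorizer

DIM = 300  # dimensionality of the GoogleNews vectors

def embed(word):
    """Embedding lookup with a zero-vector fallback for unknown words."""
    try:
        return model[word]
    except KeyError:
        return np.zeros(DIM, dtype="float32")

# Toy stand-ins for the per-token streams built during feature extraction.
words = ["Wolff", ",", "currently"]
prev_words = ["<S>", "Wolff", ","]
next_words = [",", "currently", "a"]
X_dicts = [{"word": w} for w in words]

# One dense row per token: current, previous and next word embeddings.
dense = np.vstack([np.concatenate([embed(w), embed(pw), embed(nw)])
                   for w, pw, nw in zip(words, prev_words, next_words)])

vec = DictVectorizer()
X_sparse = vec.fit_transform(X_dicts)               # one-hot features as before
X_combined = hstack([X_sparse, csr_matrix(dense)])  # feed this to the classifier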



Last modified 13 Dec, 2015