Assignment 2

Due: Mon 04 Jan 2021 Midnight

Natural Language Processing - Fall 2021 Michael Elhadad

This assignment covers the topics of document classification, sequence classification, named entity recognition, and word embeddings.

Submit your solution by email in the form of an iPython ipynb file.

Do not attach the data in your submission. Your notebook should refer to the data folder as "../data".


Content

  Q1. Question Classification
  Q2. Document Classification
  Q3. Named Entity Recognition

Q1. Question Classification

Consider the dataset on Question Classification available here.

Q1.1. Describe the dataset qualitatively

Read the article introducing this dataset: Xin Li and Dan Roth, Learning Question Classifiers. COLING'02.

Write a half to one-page summary of the paper, focusing on the dataset description (more than on the description of the classifier introduced in the paper). Describe the exact task, the labels used, and provide the motivation for this task. Provide examples for the 6 main categories.

Q1.2. Dataset Reader

Implement a reader to parse the dataset into a data structure that will be easily used for scikit-learn processing. Adapt the code we used in HW1:
import codecs
import math
import random
import string
import time
import numpy as np
from sklearn.metrics import accuracy_score

'''
Define different constants for the task of question classification 
based on the definition of the task.
In the question classification case, there are 2 labels per question: coarse and fine.
'''
coarse_categories = ["ABBREVIATION", "ENTITY", "DESCRIPTION", "HUMAN", "LOCATION", "NUMERIC VALUE"]
fine_categories = {}
fine_categories["ABBREVIATION"] = ["abb", "exp"]
# more here...
  
# Build the coarse_category_lines dictionary: a list of questions per coarse category
coarse_category_lines = {}
all_categories = []

# @Todo: Define the way the lines should be parsed
def parseLine(line):
    return line

# @Todo: Read a file and split into lines - create the appropriate data structure
def readLines(filename):
    with codecs.open(filename, "r", encoding='utf-8', errors='ignore') as f:
        lines = f.read().strip().split('\n')
    return [parseLine(line) for line in lines]
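
As a concrete starting point, here is a minimal sketch of the parseLine stub above, assuming each line of the dataset has the form "COARSE:fine question text" (e.g. "HUM:ind Who wrote the Iliad ?"); verify the exact format against the downloaded files before relying on it:

# A possible implementation of parseLine, assuming the "COARSE:fine question" line format.
def parseLine(line):
    label, question = line.split(' ', 1)
    coarse, fine = label.split(':', 1)
    return {'coarse': coarse, 'fine': fine, 'question': question}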

Q1.3. Dataset Exploration

The labels used to classify the questions are organized in two levels, coarse and fine. The definition of the question labels is provided here.

Provide a quantitative description of the dataset:

  1. Distribution of the question labels (number / percentage) - separately for coarse and fine labels.
  2. Distribution of the number of tokens per question - overall and per label.
  3. Vocabulary size and number of tokens, overall and per label.
  4. Top 20 most frequent words, overall and per label.
  5. Number of words occurring 1, 2, 3, 4, and 5 times.
For this type of exploration, the pandas library is extremely convenient. In particular, explore the function dataframe.describe(). You can use other code if you prefer.
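
As an illustration, here is a minimal sketch of such an exploration with pandas, assuming readLines returns dictionaries with 'coarse', 'fine' and 'question' keys as in the parseLine sketch above (the file name is a placeholder):

import pandas as pd

df = pd.DataFrame(readLines('../data/train_questions.txt'))  # placeholder file name
df['n_tokens'] = df['question'].str.split().str.len()

print(df['coarse'].value_counts())                  # coarse label distribution (counts)
print(df['coarse'].value_counts(normalize=True))    # coarse label distribution (percentages)
print(df.groupby('coarse')['n_tokens'].describe())  # token counts per coarse label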

Q1.4. Classifier Interface, Evaluation Metrics, Confusion Matrix

Define the Python interface (functions or class according to your preference) of a question classifier so that the function accuracy_score and classification_report from the sklearn.metrics module can be used. Define a function evaluate_classifier that takes a trained classifier and reports classification results for coarse and fine categories. Define a function confusion_matrix(model) which prints a confusion matrix for the coarse level categories in the same way as in HW1 Question 3.
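
A minimal sketch of these helpers, assuming a hypothetical classifier interface with predict_coarse and predict_fine methods and gold coarse labels matching the coarse_categories constant defined above; adapt the names to your own design:

from sklearn.metrics import accuracy_score, classification_report
from sklearn.metrics import confusion_matrix as sk_confusion_matrix

def evaluate_classifier(model, questions, coarse_gold, fine_gold):
    coarse_pred = model.predict_coarse(questions)   # hypothetical interface
    fine_pred = model.predict_fine(questions)
    print("Coarse accuracy:", accuracy_score(coarse_gold, coarse_pred))
    print(classification_report(coarse_gold, coarse_pred))
    print("Fine accuracy:", accuracy_score(fine_gold, fine_pred))
    print(classification_report(fine_gold, fine_pred))

def confusion_matrix(model, questions, coarse_gold):
    coarse_pred = model.predict_coarse(questions)
    print(sk_confusion_matrix(coarse_gold, coarse_pred, labels=coarse_categories))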

Q1.5 Baseline Classifier

Implement a baseline classifier for the 6 coarse labels using the heuristics described in Section 2.1 of the paper (of the form: if a query starts with Who or Whom, the type is HUMAN).
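
For example, a rule-based baseline could look like the following sketch (only a few rules are shown, and the exact rule set and default label are choices you should make yourself from the paper):

# A minimal rule-based baseline; extend the rules following Section 2.1 of the paper.
def baseline_classify(question):
    first = question.split()[0].lower()
    if first in ('who', 'whom'):
        return 'HUMAN'
    if first == 'where':
        return 'LOCATION'
    if first == 'when':
        return 'NUMERIC VALUE'       # dates fall under the numeric coarse category
    return 'DESCRIPTION'             # arbitrary default for unmatched questions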

Report on the accuracy, precision, recall, and F1 measure for all the coarse labels, and provide the confusion matrix for the 6 coarse labels.

Analyze the errors by listing types of errors (false positives and false negatives for each of the 6 labels).

Q1.6 Features-based Classifier

Implement a feature-based classifier for the 6 coarse labels using the types of features described in the paper Section 3.2: words, POS tags, NER tags.

Use the spacy library to perform pre-processing of the questions, including POS tagging, named entity recognition and noun chunk detection. Spacy comes with excellent pre-trained models for English and other languages. Installing Spacy requires the following steps (see the spacy documentation):

# This installs the Spacy library (13MB)
% pip install spacy
# This downloads pre-trained models for POS tagging / NER / noun chunks in English (34MB)
% python -m spacy download en_core_web_sm
% python
> import spacy
> nlp = spacy.load('en_core_web_sm')
> doc = nlp('Apple is looking at buying U.K. startup for $1 billion')
> doc.ents
(Apple, U.K., $1 billion)
> doc.ents[0].label_
'ORG'
Invoking the nlp() function of spacy performs a set of analyses on the text, including: sentence separation, tokenization, lemmatization, part-of-speech tagging, noun-phrase chunking, named entity recognition and syntactic parsing. Information about these analyses is retrieved through the spacy document properties. As indicated in the paper, we want to extract words/lemmas, POS tags, named entities and noun chunks as features for the task of question classification. Here are starting points to learn how to extract this information from the nlp analysis:
  % python
  > import spacy
  > nlp = spacy.load('en_core_web_sm')
  > doc = nlp('Apple is looking at buying U.K. startup for $1 billion')
  
  # Token level features retrieved by Spacy: token, lemma, POS
  > for x in doc:   # Each x is a Token
          print(f"Token: {x} - Lemma: {x.lemma_} - POS: {x.pos_}")
  Token: Apple - Lemma: Apple - POS: PROPN
  Token: is - Lemma: be - POS: AUX
  Token: looking - Lemma: look - POS: VERB
  Token: at - Lemma: at - POS: ADP
  Token: buying - Lemma: buy - POS: VERB
  Token: U.K. - Lemma: U.K. - POS: PROPN
  Token: startup - Lemma: startup - POS: NOUN
  Token: for - Lemma: for - POS: ADP
  Token: $ - Lemma: $ - POS: SYM
  Token: 1 - Lemma: 1 - POS: NUM
  Token: billion - Lemma: billion - POS: NUM

  # Span level features retrieved by Spacy: named entities, start (0-based index), end (index just after the span), category
  > doc.ents
    (Apple, U.K., $1 billion)
  > for e in doc.ents: print(f"{e} - {e.start} - {e.end} - {e.label_}")
  Apple - 0 - 1 - ORG
  U.K. - 5 - 6 - GPE
  $1 billion - 8 - 11 - MONEY

  # Span level features retrieved by Spacy: noun chunks
  > list(doc.noun_chunks)
  [Apple, U.K. startup]
  > for c in doc.noun_chunks: print(f"{c.start} - {c.end} - {c.root}")
  0 - 1 - Apple
  5 - 7 - startup
The paper does not explicitly indicate how to encode the features it lists and is not precise about the features named "related words" (words which are usually associated with a specific type of question). For example:
  1. Word features can be encoded in different ways: noise words filtered or not, with or without lemmatization, with or without case normalization (all lower-case).
  2. POS features can be encoded in different ways: as a bag of POS tags, or associated with the word in a bag of tagged words such as 'Apple/PROPN'.
  3. Chunks can be encoded as a bag of chunk roots.
  4. Examples of "related words" per category are provided for a few categories: profession, mountains and food. You should learn the related-words list from the training dataset by detecting words which have a high chi-square value with each category. Read the documentation of sklearn.feature_selection.chi2 for a discussion of how such words can be efficiently computed using scikit-learn.
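
A minimal sketch of this chi-square selection, assuming the questions and coarse labels are stored in the pandas DataFrame built in Q1.3 (the column names are placeholders):

import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import chi2

vec = CountVectorizer(lowercase=True)
X = vec.fit_transform(df['question'])
words = np.array(vec.get_feature_names_out())   # get_feature_names() on older scikit-learn

for category in sorted(df['coarse'].unique()):
    y = (df['coarse'] == category)              # one-vs-rest indicator for this category
    scores, _ = chi2(X, y)
    top_words = words[np.argsort(scores)[::-1][:20]]
    print(category, list(top_words))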

Q1.6.1 Feature Extraction

Discuss a priori what are good ways to encode these features (lemma, POS, NER, chunk, related words) - provide examples that explain your intuition.

Implement a feature extraction function that turns a question into a feature vector appropriate for the scikit-learn classifiers. Adopt the example shown in the scikit-learn documentation: loading features from dicts.
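
A minimal sketch of such a feature extractor, combining lemma, POS, NER and chunk features into one dictionary per question (the specific feature templates are illustrative choices, not the paper's exact encoding, and train_questions is a placeholder for the list of question strings from your reader):

from sklearn.feature_extraction import DictVectorizer

def question_features(doc):
    # doc is a spaCy Doc produced by nlp(question)
    feats = {'first_word': doc[0].lower_}
    for tok in doc:
        feats['lemma=' + tok.lemma_.lower()] = 1
        feats['pos=' + tok.pos_] = 1
    for ent in doc.ents:
        feats['ner=' + ent.label_] = 1
    for chunk in doc.noun_chunks:
        feats['chunk_root=' + chunk.root.lemma_.lower()] = 1
    return feats

vectorizer = DictVectorizer()
X_train = vectorizer.fit_transform([question_features(nlp(q)) for q in train_questions])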

Q1.6.2 Train Models

Train scikit-learn based classifiers for:
  1. Coarse labels
  2. All labels as a flat classifier
  3. A hierarchical classifier which predicts the fine-grained labels given the coarse label, as proposed in the paper. Implement this as a two-step procedure: run the coarse-label classifier, then a second-level classifier which takes the prediction of the first classifier as input (one finer classifier per coarse category); a sketch is given after this list.
For each of the three classifiers, report:
  1. Accuracy, Precision, Recall, F-measure per label and confusion matrix.
  2. Provide examples of prediction errors (positive and negative).
  3. Discuss the most ambiguous label pairs (identified in the confusion matrix) and discuss whether the features you have used provide sufficient information to disambiguate the cases.
You should experiment with different classifiers from those illustrated in the Classification of text documents using sparse features example.
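
One possible structure for the hierarchical classifier (item 3 above), assuming X_train is the feature matrix from Q1.6.1 and y_coarse_train / y_fine_train are arrays of gold labels; the choice of LogisticRegression is an assumption and any scikit-learn classifier can be substituted:

import numpy as np
from sklearn.linear_model import LogisticRegression

y_coarse_train = np.asarray(y_coarse_train)
y_fine_train = np.asarray(y_fine_train)

# Step 1: coarse classifier trained on all questions.
coarse_clf = LogisticRegression(max_iter=1000).fit(X_train, y_coarse_train)

# Step 2: one fine-grained classifier per coarse category,
# trained only on the questions of that category.
fine_clfs = {}
for cat in np.unique(y_coarse_train):
    mask = (y_coarse_train == cat)
    fine_clfs[cat] = LogisticRegression(max_iter=1000).fit(X_train[mask], y_fine_train[mask])

def predict_hierarchical(X):
    coarse_pred = coarse_clf.predict(X)
    return [fine_clfs[c].predict(X[i])[0] for i, c in enumerate(coarse_pred)]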

Q1.7 Optional

1.7.1 Analyze which of the features are most helpful for this task among lemma, POS, NER, Chunks and Related Words. (This analysis is called ablation analysis).

1.7.2 The dataset is quite small (5,500 questions in the training dataset for 50 labels). How would you determine whether your model overfits on this data?


Q2. Document Classification

Q2.1. Reuters Dataset

Execute the notebook tutorial of Scikit-Learn on text classification: out of core classification.

Q2.1.1 Descriptive Statistics

Explore how many documents are in the dataset, how many categories there are, and how many documents there are per category; provide the mean, standard deviation, min and max. (Use the pandas library to explore the dataset, in particular the dataframe.describe() method.) Also explore how many characters and words are present in the documents of the dataset.

Q2.1.2 Partial-fit classifiers

Explain informally which classifiers support the "partial_fit" method discussed in the code.

Q2.1.3 Hashing Vectorizer

Explain what the hashing vectorizer used in this tutorial is. Why is it important to use this vectorizer to achieve "streaming classification"?

Q2.2. BBC News Dataset

The Kaggle BBC News dataset is a document dataset used to test document classification. It contains 1,500 training documents (news stories from BBC News) and 700 test documents. Documents are classified into 5 categories: sports, tech, business, entertainment, politics. The text has been normalized: it is all lower case, quotation marks and apostrophes are removed (leaving fragments such as "ericsson s"), and punctuation other than periods is removed. For example:
lifestyle  governs mobile choice  faster  better or funkier hardware alone is not going to help phone firms sell more handsets
research suggests.  instead  phone firms keen to get more out of their customers should not just be pushing the technology 
for its own sake. consumers are far more interested in how handsets fit in with their lifestyle than they are in screen size  
onboard memory or the chip inside  shows an in-depth study by handset maker ericsson.  
historically in the industry there has been too much focus on using technology   
said dr michael bjorn  senior advisor on mobile media at ericsson s consumer and enterprise lab.
Download the data bbcnews.zip.

Q2.2.1 Dataset Exploration

Explore how many documents are in the dataset, how many categories there are, and how many documents there are per category; provide the mean, standard deviation, min and max.

Q2.2.2 Features Extraction

Select appropriate features for document classification and implement a scikit-learn vectorizer for this dataset.
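
For instance, a simple tf-idf vectorizer along the following lines could be a starting point (all parameter values are illustrative and should be tuned; train_texts and test_texts are placeholders for lists of document strings):

from sklearn.feature_extraction.text import TfidfVectorizer

vectorizer = TfidfVectorizer(stop_words='english', ngram_range=(1, 2), min_df=2)
X_train = vectorizer.fit_transform(train_texts)
X_test = vectorizer.transform(test_texts)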

Q2.2.3 Model Training and Evaluation

Implement a classifier for this dataset.

Report performance, confusion matrix and analyze errors.

In order to evaluate on the official test data, you would need to register on Kaggle and use their submission system. To avoid the complexity of using the Kaggle submission system, split the train data into 80% training / 20% test.

You can see examples solving this task with good usage of the scikit-learn APIs in the Kaggle leaderboard. In particular, aryan-bbc-news-classification demonstrates data exploration for classification using pandas, tf-idf features, t-SNE visualization of feature vectors, and chi-square correlation between features and labels.


Q3. Named Entity Recognition


The task of Named Entity Recognition (NER) involves the recognition of names of persons, locations, organizations, and dates in free text. As we have seen above, Spacy includes a very good NER model as part of its library. In this question, we will study how to implement such a model. The following sentence is tagged with sub-sequences indicating PER (for persons), LOC (for locations) and ORG (for organizations):
Wolff, currently a journalist in Argentina, played with Del Bosque in the final years of the seventies in Real Madrid.

[PER Wolff ] , currently a journalist in [LOC Argentina ] , played with [PER Del Bosque ] in the final years of the seventies in 
[ORG Real Madrid ] .
NER involves 2 sub-tasks: identifying the boundaries of such expressions (the open and close brackets) and labelling the expressions (with tags such as PER, LOC or ORG). This sequence labelling task is reduced to a classification task using the BIO encoding of the data:
        Wolff B-PER
            , O
    currently O
            a O
   journalist O
           in O
    Argentina B-LOC
            , O
       played O
         with O
          Del B-PER
       Bosque I-PER
           in O
          the O
        final O
        years O
           of O
          the O
    seventies O
           in O
         Real B-ORG
       Madrid I-ORG
            . O

Dataset

The dataset we will use for this question is derived from the CoNLL 2002 shared task, which addresses NER in Spanish and Dutch. The dataset is included in the NLTK distribution. Explanations on the dataset are provided on the CoNLL 2002 page.

To access the data in Python, do:

import nltk
nltk.download('conll2002')   # fetch the corpus if it is not already installed locally
from nltk.corpus import conll2002

etr = conll2002.chunked_sents('esp.train') # In Spanish
eta = conll2002.chunked_sents('esp.testa') # In Spanish
etb = conll2002.chunked_sents('esp.testb') # In Spanish

dtr = conll2002.chunked_sents('ned.train') # In Dutch
dta = conll2002.chunked_sents('ned.testa') # In Dutch
dtb = conll2002.chunked_sents('ned.testb') # In Dutch
The data consists of three files per language (Spanish and Dutch): one training file and two test files testa and testb. The first test file is to be used in the development phase for finding good parameters for the learning system. The second test file will be used for the final evaluation.

Q3.1 Features

Your task consists of:
  1. Choosing good features for encoding the problem.
  2. Encode your training dataset.
  3. Run a classifier over the training dataset.
  4. Train and test the model.
  5. Perform error analysis and fine tune model parameters on the testa part of the datasets.
  6. Perform evaluation over the testb part of the dataset, reporting on accuracy, per label precision, per label recall and per label F-measure, and confusion matrix.

Here is a list of features that have been found appropriate for NER in previous work:

  1. The word form (the string as it appears in the sentence)
  2. The POS of the word (which is provided in the dataset)
  3. ORT - a feature that captures the orthographic (letter) structure of the word. It can have any of the following values: number, contains-digit, contains-hyphen, capitalized, all-capitals, URL, punctuation, regular.
  4. prefix1: first letter of the word
  5. prefix2: first two letters of the word
  6. prefix3: first three letters of the word
  7. suffix1: last letter of the word
  8. suffix2: last two letters of the word
  9. suffix3: last three letters of the word

For example, given the following toy training data, the encoding of the features would be:

        Wolff NP  B-PER
            , ,   O
    currently RB  O
            a AT  O
   journalist NN  O
           in IN  O
    Argentina NP  B-LOC
            , ,   O
       played VBD O
         with IN  O
          Del NP  B-PER
       Bosque NP  I-PER
           in IN  O
          the AT  O
        final JJ  O
        years NNS O
           of IN  O
          the AT  O
    seventies NNS O
           in IN  O
         Real NP  B-ORG
       Madrid NP  I-ORG
            . .   O

Classes
1 B-PER
2 I-PER
3 B-LOC
4 I-LOC
5 B-ORG
6 I-ORG
7 O

Feature WORD-FORM:
1 Wolff
2 ,
3 currently
4 a
5 journalist
6 in
7 Argentina
8 played
9 with
10 Del
11 Bosque
12 the
13 final
14 years
15 of
16 seventies
17 Real
18 Madrid
19 .

Feature POS
20 NP
21 ,
22 RB
23 AT
24 NN
25 VBD
26 JJ
27 NNS
28 .

Feature ORT
29 number
30 contains-digit
31 contains-hyphen
32 capitalized
33 all-capitals
34 URL
35 punctuation
36 regular

Feature Prefix1
37 W
38 ,
39 c
40 a
41 j
42 i
43 A
44 p
45 w
46 D
47 B
48 t
49 f
50 y
51 o
52 s
53 .
Given this encoding, we can compute the vector representing the first word "Wolff NP B-PER" as:
# Class: B-PER=1
# Word-form: Wolff=1
# POS: NP=20
# ORT: Capitalized=32
# prefix1: W=37
1 1:1 20:1 32:1 37:1
When you encode the test dataset, some of the word-forms will be unknown (not seen in the training dataset). You should, therefore, plan for a special value for each feature of type "unknown" when this is expected.

Instead of writing the code as explained above, use the Scikit-learn vectorizer and pipeline library. General information on feature extraction for text data in Scikit-Learn is in the Scikit-Learn documentation. Refer to the DictVectorizer for this specific task. Hashing vs. DictVectorizer also provides useful background.
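
A minimal sketch of this approach, building one feature dictionary per token (including the previous/next-word context used in Q3.1.2) and feeding the dictionaries to DictVectorizer; the ORT buckets below are simplified, and train_sents is a placeholder for the list of (word, POS) sentences extracted from the corpus:

from sklearn.feature_extraction import DictVectorizer

def ort(word):
    # Simplified orthographic feature; add URL detection etc. as needed.
    if word.isdigit(): return 'number'
    if any(c.isdigit() for c in word): return 'contains-digit'
    if '-' in word: return 'contains-hyphen'
    if word.isupper(): return 'all-capitals'
    if word[0].isupper(): return 'capitalized'
    if all(not c.isalnum() for c in word): return 'punctuation'
    return 'regular'

def token_features(sent, i):
    # sent is a list of (word, pos) pairs; padding tokens cover the sentence boundaries.
    word, pos = sent[i]
    prev_word = sent[i - 1][0] if i > 0 else '<s>'
    next_word = sent[i + 1][0] if i < len(sent) - 1 else '</s>'
    return {'word': word.lower(), 'pos': pos, 'ort': ort(word),
            'prefix1': word[:1], 'prefix2': word[:2], 'prefix3': word[:3],
            'suffix1': word[-1:], 'suffix2': word[-2:], 'suffix3': word[-3:],
            'prev_word': prev_word.lower(), 'next_word': next_word.lower()}

vectorizer = DictVectorizer()
X_train = vectorizer.fit_transform(
    [token_features(sent, i) for sent in train_sents for i in range(len(sent))])

Note that at transform time DictVectorizer simply ignores feature values it did not see during fit, which is one simple way to handle the "unknown" case discussed above.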

Q3.1.1 Feature Extraction

Start from the following example notebook CoNLL 2002 Classification with CRF. You do not need to install Python-CRFSuite - just take this notebook as a starting point to explore the dataset and ways to encode features. (This notebook also gives you an indication of the level of result you can expect to obtain.)

Q3.1.2 Model Training

Train the model using a logistic regression classifier and experiment with better features: looking at the tag of the previous word, the previous word and the following word (add padding words in the vectorizer).

Q3.1.3 Greedy Tagging vs. Sequence Tagging

We implemented above a version of NER which is based on greedy tagging: that is, without optimizing the sequence of tags as we would obtain by training an HMM or CRF model. In particular, we did not check that the BIO tags produced by the tagger form a legal sequence. Write code to identify illegal sequences of BIO tags and report on the frequency of this problem for each type of illegal tag transition (O followed by I-X, I-X followed by I-Y, B-X followed by I-Y). Comment on your observations.
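
A minimal sketch of such a check, assuming the predictions are given as a list of tag sequences (one list of 'O' / 'B-XXX' / 'I-XXX' strings per sentence):

from collections import Counter

def illegal_transitions(tag_sequences):
    # Count transitions in which an I-X tag does not continue a B-X or I-X of the same type.
    counts = Counter()
    for tags in tag_sequences:
        prev = 'O'                       # a sentence cannot legally start with an I-X tag
        for tag in tags:
            if tag.startswith('I-'):
                if prev == 'O':
                    counts['O -> ' + tag] += 1
                elif prev[2:] != tag[2:]:
                    counts[prev + ' -> ' + tag] += 1
            prev = tag
    return counts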

Q3.2 Using Word Embeddings

One way to improve a greedy tagger for NER is to use word embeddings as features. A convenient library for manipulating Word2Vec word embeddings is gensim, by Radim Rehurek. To install it, use:
% conda install gensim
You must also download a pre-trained Word2Vec or fastText word embedding model. The models must naturally be in Spanish or Dutch. (Only test word embeddings for one language.) You can find pre-trained word embedding models in different formats:
  1. fastText pretrained models (includes models for 294 languages)
  2. Spanish Word2vec models
Specific information on manipulating word vectors with Gensim is provided in the Gensim KeyedVectors documentation. Practical examples are available for Spanish in this notebook. (Note that word embedding files are pretty big - about 3GB when uncompressed.) Your task:
  1. Add word embeddings as dense vectors to the features of your NER classifier for each word feature (current word, previous word, next word), either in Spanish or in Dutch; a sketch is given after this list.
  2. Retrain the model and report on performance. Comment.
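
A minimal sketch of item 1, loading a pretrained model with gensim's KeyedVectors and adding the embedding dimensions to a token's feature dictionary; the file path is a placeholder, binary models need binary=True, and native gensim files are loaded with KeyedVectors.load instead:

import numpy as np
from gensim.models import KeyedVectors

wv = KeyedVectors.load_word2vec_format('../data/spanish_embeddings.vec')  # placeholder path

def embedding(word):
    # Out-of-vocabulary words are mapped to a zero vector.
    return wv[word] if word in wv else np.zeros(wv.vector_size)

def add_embedding_features(feats, word, prefix='emb'):
    # Adds one numeric feature per embedding dimension to an existing feature dict.
    for j, value in enumerate(embedding(word.lower())):
        feats[prefix + '_' + str(j)] = float(value)
    return feats

These dense features can then be merged into the token feature dictionaries of Q3.1.1 for the current, previous and next words.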



Last modified 15 Dec, 2020