Consider the dataset on Question Classification available here.
Read the article introducing this dataset: Xin Li and Dan Roth, Learning Question Classifiers. COLING'02.
Write a half to one-page summary of the paper, focusing on the dataset description (more than on the description of the classifier introduced in the paper). Describe the exact task, the labels used, and provide the motivation for this task. Provide examples for the 6 main categories.
import codecs
import math
import random
import string
import time
import numpy as np
from sklearn.metrics import accuracy_score
'''
Define different constants for the task of question classification
based on the definition of the task.
In the question classification case, there are 2 labels per question: coarse and fine.
'''
coarse_categories = ["ABBREVIATION", "ENTITY", "DESCRIPTION", "HUMAN", "LOCATION", "NUMERIC VALUE"]
fine_categories = {}
fine_categories["ABBREVIATION"] = ["abb", "exp"]
# @Todo more here...
# Build the coarse_category_lines dictionary, a list of questions per coarse category
coarse_category_lines = {}
all_categories = []
# @Todo: Define the way the lines should be parsed
def parseLine(line):
    return line
# @Todo: Read a file and split into lines - create the appropriate data structure
def readLines(filename):
    with codecs.open(filename, "r", encoding="utf-8", errors="ignore") as f:
        return [parseLine(line) for line in f.read().strip().split("\n")]
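A possible implementation of parseLine is sketched below, assuming the TREC file format in which each line starts with a "COARSE:fine" label followed by the question text (e.g. "DESC:manner How did serfdom develop in and then leave Russia ?"):

```python
# Sketch of parseLine for the TREC question format: split off the label,
# then split the label into its coarse and fine parts.
def parseLine(line):
    label, _, question = line.partition(" ")
    coarse, _, fine = label.partition(":")
    return coarse, fine, question

print(parseLine("DESC:manner How did serfdom develop in and then leave Russia ?"))
# ('DESC', 'manner', 'How did serfdom develop in and then leave Russia ?')
```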
The labels used to classify the questions are organized in two levels: coarse and fine.
The definition of the question labels is provided here.
Provide a quantitative description of the dataset:
For this type of exploration, the pandas library is extremely convenient. In particular, explore the function dataframe.describe(). You can use other code if you prefer.
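One way to set up this exploration, assuming the questions have been parsed into (coarse, fine, text) tuples as above (the toy rows below are placeholders for the real parsed data):

```python
import pandas as pd

# Load the parsed questions into a DataFrame and explore label distribution
# and question length; rows here are illustrative stand-ins for the dataset.
rows = [
    ("DESC", "manner", "How did serfdom develop in and then leave Russia ?"),
    ("HUM", "ind", "Who wrote Hamlet ?"),
    ("HUM", "ind", "Who was the first man on the Moon ?"),
]
df = pd.DataFrame(rows, columns=["coarse", "fine", "question"])

# Distribution of coarse labels
print(df["coarse"].value_counts())

# Basic statistics on question length in words
df["n_words"] = df["question"].str.split().str.len()
print(df["n_words"].describe())
```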
Define the Python interface (functions or class according to your preference) of a question classifier so that the function accuracy_score and classification_report from the sklearn.metrics module can be used.
Define a function evaluate_classifier that takes a trained classifier and reports classification results for coarse and fine categories.
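One possible shape for this interface is sketched below; the MajorityClassifier is a hypothetical placeholder showing that any object exposing fit() and predict() works directly with the sklearn.metrics functions:

```python
from sklearn.metrics import accuracy_score, classification_report

class MajorityClassifier:
    """Trivial baseline: always predict the most frequent training label."""
    def fit(self, questions, labels):
        self.majority_ = max(set(labels), key=labels.count)
        return self

    def predict(self, questions):
        return [self.majority_ for _ in questions]

def evaluate_classifier(clf, questions, gold):
    """Report classification results for a trained classifier."""
    pred = clf.predict(questions)
    print("Accuracy:", accuracy_score(gold, pred))
    print(classification_report(gold, pred, zero_division=0))
    return pred

clf = MajorityClassifier().fit(["q1", "q2", "q3"], ["HUM", "HUM", "LOC"])
evaluate_classifier(clf, ["q4"], ["HUM"])
```

The same evaluate_classifier can then be called twice per model, once with coarse labels and once with fine labels.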
Define a function confusion_matrix(model) which prints a confusion matrix for the coarse level categories in the same way as in HW1 Question 3.
Implement a baseline classifier for the 6 coarse labels using the heuristics described in the paper in Section 2.1 (of the form – If a query starts with Who or Whom: type Human).
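A minimal sketch of such wh-word heuristics is shown below; the exact rule set is an assumption to be extended from the rules listed in the paper:

```python
# Ordered prefix rules: more specific prefixes ("how many") must be checked
# before more general ones ("how"). This rule set is illustrative only.
RULES = [
    (("who", "whom"), "HUMAN"),
    (("where",), "LOCATION"),
    (("when", "how many", "how much"), "NUMERIC VALUE"),
    (("why", "how", "what is", "define"), "DESCRIPTION"),
]

def baseline_predict(question, default="ENTITY"):
    q = question.lower()
    for prefixes, label in RULES:
        if q.startswith(prefixes):
            return label
    return default

print(baseline_predict("Who wrote Hamlet ?"))            # HUMAN
print(baseline_predict("How many people live in Chile ?"))  # NUMERIC VALUE
```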
Report on the accuracy, precision, recall, and F1 measure for all the coarse labels, and provide the confusion matrix for the 6 coarse labels.
Analyze the errors by listing types of errors (false positives and false negatives for each of the 6 labels).
Implement a feature-based classifier for the 6 coarse labels using the types of features described in the paper Section 3.2: words, POS tags, NER tags.
Use the spacy library to perform pre-processing of the questions - including POS tagging and Named Entity Recognition and Noun Chunks detection. Spacy comes with excellent pre-trained models for English and other languages. Installing Spacy requires the following steps (see spacy documentation):
# This installs the Spacy library (13MB)
!pip install spacy
# This downloads pre-trained models for POS tagging / NER / Noun chunks in English (34MB)
!python -m spacy download en_core_web_sm
import spacy
nlp = spacy.load('en_core_web_sm')
doc = nlp('Apple is looking at buying U.K. startup for $1 billion')
print(doc.ents)
print(doc.ents[0].label_)
(Apple, U.K., $1 billion)
ORG
Invoking spacy's nlp() function performs a set of analyses on the text, including: sentence segmentation, tokenization, lemmatization, part-of-speech tagging, noun-phrase chunking, named entity recognition and syntactic parsing. Information about these analyses is retrieved using the spacy document properties.
As indicated in the paper, we want to extract the following information as features for the task of question classification:
Here are starting points to learn how to extract this information from the nlp analysis:
import spacy
nlp = spacy.load('en_core_web_sm')
doc = nlp('Apple is looking at buying U.K. startup for $1 billion')
# Token level features retrieved by Spacy: token, lemma, POS
for x in doc: # Each x is a Token
    print(f"Token: {x} - Lemma: {x.lemma_} - POS: {x.pos_}")
Token: Apple - Lemma: Apple - POS: PROPN
Token: is - Lemma: be - POS: AUX
Token: looking - Lemma: look - POS: VERB
Token: at - Lemma: at - POS: ADP
Token: buying - Lemma: buy - POS: VERB
Token: U.K. - Lemma: U.K. - POS: PROPN
Token: startup - Lemma: startup - POS: NOUN
Token: for - Lemma: for - POS: ADP
Token: $ - Lemma: $ - POS: SYM
Token: 1 - Lemma: 1 - POS: NUM
Token: billion - Lemma: billion - POS: NUM
# Span level features retrieved by Spacy: named entities, start (0-based index), end (index just after the span), category
print(doc.ents)
for e in doc.ents:
    print(f"{e} - {e.start} - {e.end} - {e.label_}")
(Apple, U.K., $1 billion)
Apple - 0 - 1 - ORG
U.K. - 5 - 6 - GPE
$1 billion - 8 - 11 - MONEY
# Span level features retrieved by Spacy: noun chunks
print(list(doc.noun_chunks))
for c in doc.noun_chunks:
    print(f"{c.start} - {c.end} - {c.root}")
[Apple, U.K. startup]
0 - 1 - Apple
5 - 7 - startup
The paper does not explicitly indicate how to encode the features it lists, and it is not precise about the feature named related words (words which are usually associated with a specific type of question). For example:
Discuss a priori what good ways to encode these features (lemma, POS, NER, chunk, related words) might be - provide examples that explain your intuition.
Implement a feature extraction function that turns a question into a feature vector appropriate for the scikit-learn classifiers. Adapt the example shown in the scikit-learn documentation: loading features from dicts.
Train scikit-learn based classifiers for:
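A sketch of such a feature extraction function is shown below. It takes per-token analyses as (text, lemma, POS) triples - in practice these would come from spacy - and the particular feature names (wh_word, lemma=..., pos=...) are illustrative choices, not the paper's:

```python
from sklearn.feature_extraction import DictVectorizer

def question_to_features(tokens):
    """tokens: list of (text, lemma, pos) triples, e.g. from a spacy Doc."""
    feats = {"wh_word": tokens[0][0].lower()}  # first word is often the wh-word
    for text, lemma, pos in tokens:
        feats[f"lemma={lemma.lower()}"] = 1
        feats[f"pos={pos}"] = 1
    return feats

questions = [
    [("Who", "who", "PRON"), ("wrote", "write", "VERB"), ("Hamlet", "Hamlet", "PROPN")],
    [("Where", "where", "ADV"), ("is", "be", "AUX"), ("Rome", "Rome", "PROPN")],
]
vec = DictVectorizer()
X = vec.fit_transform([question_to_features(q) for q in questions])
print(X.shape)
```

DictVectorizer one-hot encodes string-valued features (like wh_word) automatically, so the result is a sparse matrix ready for any scikit-learn classifier.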
For each of the three classifiers, report:
You should experiment with different classifiers from those illustrated in the Classification of text documents using sparse features example.
1.7.1 Analyze which of the features are most helpful for this task among lemma, POS, NER, Chunks and Related Words. (This type of analysis is called an ablation analysis.)
1.7.2 The dataset is quite small (5,500 questions in the training dataset for 50 labels). How would you determine whether your model overfits on this data?
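One simple diagnostic is to compare training accuracy against held-out accuracy; a large gap signals overfitting. The sketch below illustrates this on synthetic random data (where there is nothing real to learn, so a flexible model must overfit):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Random features and random labels: any apparent fit is memorization.
rng = np.random.RandomState(0)
X = rng.rand(200, 20)
y = rng.randint(0, 2, size=200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
train_acc = clf.score(X_tr, y_tr)
test_acc = clf.score(X_te, y_te)
print(f"train={train_acc:.2f} test={test_acc:.2f}")  # large gap -> overfitting
```

Cross-validation (e.g. sklearn.model_selection.cross_val_score) gives a more robust estimate of the held-out performance on a dataset this small.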
Execute the notebook tutorial of Scikit-Learn on text classification: out of core classification.
Explore how many documents are in the dataset, how many categories, how many documents per categories, provide mean and standard deviation, min and max. (use the pandas library to explore the dataset, use the dataframe.describe() method.)
Explore how many characters and words are present in the documents of the dataset.
The Kaggle BBC News dataset is a document dataset to test document classification. It contains 1,500 training documents (news stories from the BBC News) and 700 test documents. Documents are classified into 5 categories: sports, tech, business, entertainment, politics. Text is encoded in the following format: all text is lower-cased, quotes are removed, and punctuation other than periods is removed. For example:
lifestyle governs mobile choice faster better or funkier hardware alone is not going to help phone firms sell more handsets research suggests. instead phone firms keen to get more out of their customers should not just be pushing the technology for its own sake. consumers are far more interested in how handsets fit in with their lifestyle than they are in screen size onboard memory or the chip inside shows an in-depth study by handset maker ericsson. historically in the industry there has been too much focus on using technology said dr michael bjorn senior advisor on mobile media at ericsson s consumer and enterprise lab.
Download the data bbcnews.zip and place it in ../data.
Implement a classifier for this dataset.
Report performance, confusion matrix and analyze errors.
In order to run the test data, you will need to register to Kaggle and use their submission system. To avoid the complexity of using the Kaggle submission system, split the train data into 80% training / 20% test.
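A minimal sketch of such a classifier is a TF-IDF + linear model pipeline; the three toy documents below stand in for the real training files:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Illustrative stand-ins for the BBC training documents.
train_docs = [
    "the striker scored twice in the second half of the match",
    "the chancellor announced a new budget in parliament today",
    "the band released a new album and announced a world tour",
]
train_labels = ["sports", "politics", "entertainment"]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(train_docs, train_labels)
print(clf.predict(["the midfielder missed the match with an injury"]))
```

On the real data, replace the toy lists with the documents read from ../data and hold out 20% of the training set for evaluation.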
You can see examples solving this task with good usage of scikit-learn APIs in the Kaggle leaderboard. In particular, aryan-bbc-news-classification demonstrates data exploration for classification using pandas, tf-idf features, TSNE visualization for feature vectors, and chi-square correlation between features and labels.
The task of Named Entity Recognition (NER) involves the recognition of names of persons, locations, organizations, dates in free text. As we have seen above, Spacy includes a very good NER model as part of its library. In this question, we will study how to implement such a model.
The following sentence is tagged with sub-sequences indicating PER (for persons), LOC (for location) and ORG (for organization):
Wolff, currently a journalist in Argentina, played with Del Bosque in the final years of the seventies in Real Madrid.

[PER Wolff ] , currently a journalist in [LOC Argentina ] , played with [PER Del Bosque ] in the final years of the seventies in [ORG Real Madrid ] .
NER involves 2 sub-tasks: identifying the boundaries of such expressions (the open and close brackets) and labelling the expressions (with tags such as PER, LOC or ORG). This sequence labelling task is reduced into a classification task, using the BIO encoding of the data:
Wolff B-PER
, O
currently O
a O
journalist O
in O
Argentina B-LOC
, O
played O
with O
Del B-PER
Bosque I-PER
in O
the O
final O
years O
of O
the O
seventies O
in O
Real B-ORG
Madrid I-ORG
. O
The dataset we will use for this question is derived from the CoNLL 2002 shared task - which is about NER in Spanish and Dutch. The dataset is included in the NLTK distribution. Explanations on the dataset are provided in the CoNLL 2002 page.
To access the data in Python, do:
from nltk.corpus import conll2002
etr = conll2002.chunked_sents('esp.train') # In Spanish
eta = conll2002.chunked_sents('esp.testa') # In Spanish
etb = conll2002.chunked_sents('esp.testb') # In Spanish
dtr = conll2002.chunked_sents('ned.train') # In Dutch
dta = conll2002.chunked_sents('ned.testa') # In Dutch
dtb = conll2002.chunked_sents('ned.testb') # In Dutch
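The chunked sentences returned by the corpus reader are nltk Tree objects; nltk's tree2conlltags flattens such a tree into (word, POS, BIO-tag) triples. A toy tree illustrates the conversion without requiring the corpus download:

```python
from nltk import Tree
from nltk.chunk import tree2conlltags

# A hand-built toy tree with the same shape as a conll2002 chunked sentence.
sent = Tree("S", [
    Tree("PER", [("Wolff", "NP")]),
    (",", ","),
    ("currently", "RB"),
    Tree("LOC", [("Argentina", "NP")]),
])
print(tree2conlltags(sent))
# [('Wolff', 'NP', 'B-PER'), (',', ',', 'O'), ('currently', 'RB', 'O'), ('Argentina', 'NP', 'B-LOC')]
```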
The data consists of three files per language (Spanish and Dutch): one training file and two test files testa and testb. The first test file is to be used in the development phase for finding good parameters for the learning system. The second test file will be used for the final evaluation.
Your task consists of:
For example, given the following toy training data, the encoding of the features would be:
Wolff NP B-PER
, , O
currently RB O
a AT O
journalist NN O
in IN O
Argentina NP B-LOC
, , O
played VBD O
with IN O
Del NP B-PER
Bosque NP I-PER
in IN O
the AT O
final JJ O
years NNS O
of IN O
the AT O
seventies NNS O
in IN O
Real NP B-ORG
Madrid NP I-ORG
. . O

Classes:
1 B-PER
2 I-PER
3 B-LOC
4 I-LOC
5 B-ORG
6 I-ORG
7 O

Feature WORD-FORM:
1 Wolff
2 ,
3 currently
4 a
5 journalist
6 in
7 Argentina
8 played
9 with
10 Del
11 Bosque
12 the
13 final
14 years
15 of
16 seventies
17 Real
18 Madrid
19 .

Feature POS:
20 NP
21 ,
22 RB
23 AT
24 NN
25 VBD
26 JJ
27 NNS
28 .

Feature ORT:
29 number
30 contains-digit
31 contains-hyphen
32 capitalized
33 all-capitals
34 URL
35 punctuation
36 regular

Feature Prefix1:
37 W
38 ,
39 c
40 a
41 j
42 i
43 A
44 p
45 w
46 D
47 B
48 t
49 f
50 y
51 o
52 s
53 .
Given this encoding, we can compute the vector representing the first word "Wolff NP B-PER" as:
# Class: B-PER=1
# Word-form: Wolff=1
# POS: NP=20
# ORT: Capitalized=32
# Prefix1: W=37
1 1:1 20:1 32:1 37:1
When you encode the test dataset, some of the word-forms will be unknown (not seen in the training dataset). You should, therefore, plan for a special value for each feature of type "unknown" when this is expected.
Instead of writing the code as explained above, use the Scikit-learn vectorizer and pipeline library. General information on feature extraction for text data in Scikit-Learn is in the Scikit-Learn documentation. Refer to the DictVectorizer for this specific task. Hashing vs. DictVectorizer also provides useful background.
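A sketch of this approach: extract a feature dict per token and let DictVectorizer handle the encoding, including unseen values at test time. The exact feature set (window size, orthographic features) is an illustrative assumption:

```python
from sklearn.feature_extraction import DictVectorizer

def token_to_features(sent, i):
    """sent: list of (word, POS) pairs; i: index of the token to encode."""
    word, pos = sent[i]
    return {
        "word": word.lower(),
        "pos": pos,
        "prefix1": word[:1],
        "capitalized": word[:1].isupper(),
        "prev_word": sent[i - 1][0].lower() if i > 0 else "<BOS>",
        "next_word": sent[i + 1][0].lower() if i < len(sent) - 1 else "<EOS>",
    }

sent = [("Wolff", "NP"), (",", ","), ("currently", "RB")]
vec = DictVectorizer()
X = vec.fit_transform([token_to_features(sent, i) for i in range(len(sent))])
print(X.shape)
```

At test time, vec.transform() simply drops feature values never seen during fitting, which implements the "unknown value" behavior described above.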
Start from the following example notebook CoNLL 2002 Classification with CRF. You do not need to install Python-CRFSuite - just take this notebook as a starting point to explore the dataset and ways to encode features. (This notebook also gives you an indication of the level of result you can expect to obtain.)
We implemented above a version of NER which is based on greedy tagging: that is, without optimizing the sequence of tags
as we would obtain by training an HMM or CRF model.
In particular, we did not check that the BIO tags produced by the tagger form a legal sequence.
Write code to identify sequences of BIO tags which are illegal and report on the frequency of this problem for each type
of illegal tag transition (O followed by I-X, I-X followed by I-Y, B-X followed by I-Y). Comment on your observations.
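A sketch of the legality check: an I- tag is only legal when it continues a B- or I- tag of the same type, so counting the violations by transition type is a single pass over the tag sequence:

```python
from collections import Counter

def illegal_transitions(tags):
    """Count illegal BIO transitions in a sequence of tags."""
    counts = Counter()
    prev = "O"
    for tag in tags:
        if tag.startswith("I-"):
            if prev == "O":
                counts["O -> " + tag] += 1
            elif prev[2:] != tag[2:]:  # entity type mismatch (B-X/I-X -> I-Y)
                counts[prev + " -> " + tag] += 1
        prev = tag
    return counts

print(illegal_transitions(["O", "I-PER", "B-LOC", "I-ORG", "B-ORG", "I-ORG"]))
# Counter({'O -> I-PER': 1, 'B-LOC -> I-ORG': 1})
```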
One way to improve a greedy tagger for NER is to use Word Embeddings as features. A convenient package to manipulate Word2Vec word embeddings is provided in the gensim package by Radim Rehurek. To install it, use:
!pip install gensim
You must also download a pre-trained Word2Vec or fastText word embedding model. The models must naturally be in Spanish or Dutch. (Only test word embeddings for one language.) You can find pre-trained word embedding models in different formats:
Specific information on manipulating word vectors with Gensim is provided in Gensim with the KeyedVector. Practical examples are available for Spanish in this notebook. (Pay attention that pre-trained word embedding models are pretty big files - about 3GB when uncompressed.)
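One way to plug embeddings into the greedy tagger is to add each vector dimension as a numeric feature. In the sketch below a plain dict stands in for a gensim KeyedVectors model (KeyedVectors supports the same `word in model` / `model[word]` access); unknown words fall back to a zero vector:

```python
import numpy as np

# Toy stand-in for a pre-trained Spanish embedding model.
toy_model = {
    "casa": np.array([0.1, 0.2, 0.3]),
    "perro": np.array([0.4, 0.5, 0.6]),
}

def embedding_features(word, model, dim=3):
    """Turn a word's embedding into numeric features for DictVectorizer."""
    vec = model[word] if word in model else np.zeros(dim)  # zero vector for OOV
    return {f"emb_{j}": float(v) for j, v in enumerate(vec)}

print(embedding_features("casa", toy_model))
print(embedding_features("desconocida", toy_model))  # out-of-vocabulary word
```

With a real model, `gensim.models.KeyedVectors.load_word2vec_format(path)` replaces the toy dict, and dim becomes the model's vector_size.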
Your task: