Michael Elhadad

- 30 Oct 16: Welcome to NLP 17!
- 16 Nov 16: Added reading links in Deep Learning Intro
- 20 Nov 16: Extension hour today 12-14 as usual in room 34:116, 14-15 in room 34:205
- 21 Nov 16: Updated POS Tagging notes with Python 3 / Universal Tagset / Available as a Jupyter Notebook
- 03 Dec 16: HW1 is published.
- 14 Dec 16: Quick and dirty way to add a method to a class without editing the source code (class patching):
Assume you have a library which defines a class:

```python
class A(object):
    def __init__(self, x):
        self.x = x

    def m(self):
        print(self.x)
```

You are told to add a method to this interface (but you cannot edit the library source code). At runtime, do this:

```python
def n(self):
    print(self.x)

A.n = n
a = A(2)
a.n()
```

- 01 Jan 17: New notebooks on text classification using SKlearn.
- 08 Jan 17: Notebook on language models trained using n-grams and RNNs - Heavy Metal Lyrics generator.
- 16 Jan 17: HW2 is published. It covers the announced "project" - that is, HW2 is a merge of HW2 and the Project.
- 22 Jan 17: Registration for HW1 Grading is open. Choose a slot and send me email. First-come first-serve.
- 22 Jan 17: A sample notebook to illustrate Pandas usage on spam email classification (ipynb code).
- 05 Mar 17: Registration for HW2 Grading is open. Choose a slot and send me email. First-come first-serve.

- General Intro to NLP - Linguistic Concepts
- How to Write a Spelling Corrector by Peter Norvig - learning from data, noisy channel model
- Deep Learning Intro
- Parts of Speech Tagging
- Basic Statistical Concepts
- Classification
- Sequence Classification
- Deep Learning for NLP
- Word Embeddings
- Syntax and Parsing
- Summarization
- Topic Modeling

- Acquire basic understanding of linguistic concepts and natural language complexity: variability (the possibility to express the same meaning in many different ways) and ambiguity (the fact that a single expression can refer to many different meanings in different contexts); levels of linguistic description (word, sentence, text; morphology, syntax, semantics, pragmatics). Schools of linguistic analysis (functional, distributional, Chomskyan); Empirical methods in Linguistics; Lexical semantics; Syntactic description; Natural language semantics issues.
- Acquire basic understanding of machine learning techniques as applied to text: supervised vs. unsupervised methods; training vs. testing; classification; regression; distributions, KL-divergence; Bayesian methods; Support Vector Machines; Perceptron; Deep Learning methods in NLP; RNNs and LSTMs.
- Natural language processing techniques: word and sentence tokenization; parts of speech tagging; lemmatization and morphological analysis; chunking; named entity recognition; language models; probabilistic context free grammars; probabilistic dependency grammars; parsing accuracy metrics; treebank analysis; text simplification; paraphrase detection; summarization; text generation.

- Descriptive linguistic models
- Language Models -- Statistical Models of Unseen Data (n-gram, smoothing, recurrent neural networks language models)
- Language Models and deep learning -- word embeddings, continuous representations, neural networks
- Parts of speech tagging, morphology, non categorical phenomena in tagging
- Information Extraction / Named Entity Recognition
- Using Machine Learning Tools: Classification, Sequence Labeling / Supervised Methods / SVM. CRF, Perceptron
- Bayesian Statistics, generative models, topic models, LDA
- Syntactic descriptions: Parsing sentence, why, how, PCFGs, Dependency Parsing
- Text Summarization

- 30 Oct 16:
**General Intro to NLP - Linguistic Concepts**

**Things to do:**
- Find a way to estimate how many words exist in English. In Hebrew. What method did you use? What definition of word did you use? (Think derivation vs. inflection)
- Experiment with Google Translate: find ways to make Google Translate "fail dramatically" (generate very wrong translations). Explain your method and collect your observations. Document attempts you made that did NOT make Google Translate fail. (Think variability and ambiguity; Think syntactic complexity; think lexical sparsity, unknown words).
- Think of reasons why natural languages have evolved to become ambiguous (Think: what is the communicative function of language; who pays the cost for linguistic complexity and who benefits from it; is ambiguity created willingly or unconsciously?)

- 06 Nov 16:
**Peter Norvig: How to Write a Spelling Corrector (2007)**. This is a toy spelling corrector illustrating the statistical NLP method (probability theory, dealing with large collections of text, learning language models, evaluation methods). Read an extended version of the material with more applications (word segmentation, n-grams, smoothing, more on bag of words, secret code deciphering): How to Do Things with Words.

**Things to do:**
- Read about Probability axioms.
- Read about Edit Distance and, in more detail, a review of minimum edit distance algorithms using dynamic programming from Dan Jurafsky.
- Install Python: I recommend installing the Anaconda distribution (choose the Python 3.5 version).
(Note: Many of the code samples you will see are written in Python 2 - which is not exactly compatible with Python 3 - the main annoying difference is that in Python 2
you can write: print x -- in Python 3 it must be print(x).)
The Anaconda distribution includes a large set of Python packages ready to use that we will find useful. (391MB download, 2GB disk space needed.) In particular, Anaconda includes the nltk, pandas, numpy, scipy and scikit-learn packages.

- Execute Norvig's spell checker for English (you will need spell.py and the large file of text used for training, big.txt).
- How many tokens are there in big.txt? How many distinct tokens? What are the 10 most frequent words in big.txt?
- In a very large corpus (discussed in the ngram piece quoted below), the following data is reported:
The 10 most common types cover almost 1/3 of the tokens, the top 1,000 cover just over 2/3.
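Token and type counts like these are one-liners with collections.Counter and Norvig-style tokenization; a sketch (demonstrated on an inline sample, with the big.txt variant shown in a comment):

```python
from collections import Counter
import re

def corpus_stats(text):
    """Tokenize to lowercase alphabetic tokens (Norvig-style) and
    return (token count, type count, 10 most frequent types)."""
    tokens = re.findall(r"[a-z]+", text.lower())
    types = Counter(tokens)
    return len(tokens), len(types), types.most_common(10)

# For big.txt you would use: text = open("big.txt").read()
sample = "the cat sat on the mat and the dog sat on the log"
n_tokens, n_types, top = corpus_stats(sample)
print(n_tokens, n_types)  # 13 tokens, 8 distinct types
print(top[:3])            # [('the', 4), ('sat', 2), ('on', 2)]
```

Summing the counts of the top types and dividing by `n_tokens` gives the coverage figures to compare against the numbers quoted above.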

What do you observe on the much smaller big.txt corpus?
- You can read more from Norvig's piece on ngrams.
- Execute the word segmentation example from Norvig's ngram chapter (code in ngrams.py).
Note the very useful definition of the @memo decorator in this example, which is an excellent method to implement dynamic programming algorithms in Python. From Python Syntax and Semantics:

A Python decorator is any callable Python object that is used to modify a function, method or class definition. A decorator is passed the original object being defined and returns a modified object, which is then bound to the name in the definition. Python decorators were inspired in part by Java annotations, and have a similar syntax; the decorator syntax is pure syntactic sugar, using @ as the keyword:

```python
@viking_chorus
def menu_item():
    print("spam")
```

is equivalent to:

```python
def menu_item():
    print("spam")

menu_item = viking_chorus(menu_item)
```
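For reference, a minimal @memo can be written in a few lines (a sketch equivalent in spirit to Norvig's; functools.lru_cache in the standard library provides the same service):

```python
import functools

def memo(f):
    """Cache f's results keyed by its argument tuple - the trick that
    turns a naive recursion into top-down dynamic programming."""
    cache = {}
    @functools.wraps(f)
    def wrapper(*args):
        if args not in cache:
            cache[args] = f(*args)
        return cache[args]
    return wrapper

@memo
def fib(n):
    # Exponential without memoization, linear with it.
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(100))  # 354224848179261915075, computed instantly
```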

- This corpus includes a list of about 40,000 pairs of words (error, correction). It is too small to train a direct spell checker that would map word to word. Propose a way to learn a useful error model (better than the one used in Norvig's code) using this corpus. Hint: look at the model of weighted edit distance presented in Jurafsky's lecture cited above.
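One possible building block (a sketch, not the intended solution): count character-level substitutions observed in the (error, correction) pairs and use the counts as weights in the edit distance, as in the weighted edit distance model from Jurafsky's lecture. A toy version restricted to equal-length pairs:

```python
from collections import Counter

def substitution_counts(pairs):
    """Count single-character substitutions in (error, correction) pairs
    of equal length - a toy ingredient for a weighted edit distance."""
    subs = Counter()
    for error, correct in pairs:
        if len(error) == len(correct):
            for e, c in zip(error, correct):
                if e != c:
                    subs[(e, c)] += 1
    return subs

# Hypothetical sample pairs standing in for the real corpus:
pairs = [("teh", "the"), ("recieve", "receive")]
subs = substitution_counts(pairs)
print(subs)  # transposition shows up as two substitutions here
```

A real model would also align insertions and deletions and normalize counts into probabilities per source character.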

- 13 Nov 2016:
**Deep Learning Intro**

**Things to do:**
- Read the 5 parts of the series "Machine Learning is Fun" from Adam Geitgey (about 15 min each part):
- Part 1: The world's easiest introduction to Machine Learning
- Part 2: Using Machine Learning to generate Super Mario Maker levels
- Part 3: Deep Learning and Convolutional Networks
- Part 4: Modern Face Recognition with Deep Learning
- Part 5: Language Translation with Deep Learning and the Magic of Sequences

- Learn Python (About 4 hours)
- Python Tutorial (Use Python 3.5)
- Google intro to Python (this uses Python 2)

- Install a good Python environment (About 3 hours)
The default environment is PyCharm (the free Community Edition is good for our needs; 127MB download).
Hackers may want to go deeper in mastering Python's environment: Python's ecosystem.

- Learn Tensorflow (First pass about 4 hours)

- 20 Nov 2016:
**Parts of Speech Tagging**

**Things to do:**
- Read about the Universal Parts of Speech Tagset (About 2 hours)
- Install NLTK: if you have installed Anaconda, it is already installed.
Make sure to download the corpora included with nltk.
Q: How do you find out where your package is installed after you use easy_install?

A: in the Python shell, type: import nltk; then type: nltk. You will get an answer like:

```
>>> import nltk
>>> nltk
<module 'nltk' from 'C:\Anaconda\lib\site-packages\nltk\__init__.py'>
>>>
```

- Explore the Brown corpus of parts-of-speech tagged English text using NLTK's corpus reader and FreqDist object: Use the Universal tagset for all work (About 1 hour)
- What are the 10 most common words in the corpus?
- What are the 5 most common tags in the corpus?
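Both questions reduce to frequency counts over (word, tag) pairs; a sketch using collections.Counter on a toy tagged list (swap in nltk.corpus.brown.tagged_words(tagset='universal') once the corpus is downloaded):

```python
from collections import Counter

# With the Brown corpus available you would use:
#   from nltk.corpus import brown
#   tagged = brown.tagged_words(tagset='universal')
# Toy stand-in so the sketch runs anywhere:
tagged = [("the", "DET"), ("dog", "NOUN"), ("barks", "VERB"),
          ("the", "DET"), ("cat", "NOUN"), ("sleeps", "VERB"),
          ("a", "DET"), ("dog", "NOUN")]

word_freq = Counter(w.lower() for w, t in tagged)
tag_freq = Counter(t for w, t in tagged)
print(word_freq.most_common(10))  # most common words
print(tag_freq.most_common(5))    # most common tags
```

nltk.FreqDist offers the same interface (it subclasses Counter) plus plotting helpers.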

- Read Chapter 5 of the NLTK book (About 3 hours)
- Advanced topics in POS tagging: we will come back to the task of POS tagging with different methods in the following chapters: more advanced sequence labeling methods (HMM), Deep Learning methods using Recurrent Neural Networks, feature-based classifier methods for tagging (CRF), and as a test case for unsupervised EM and Bayesian techniques. You can look at the source code of the nltk.tag module for a feeling of how the tag.hmm, tag.crf and tag.tnt methods are implemented.
The following papers give a good feeling of the current state of the art in POS tagging:

- Learning Character-level Representations for Part-of-Speech Tagging, by Dos Santos and Zadrozny, ICML 2014: uses a character-level Convolution Network to perform POS tagging; reaches accuracy of 97.32% and remarkably about 90% on unknown words (words never seen during training).
- Understanding Convolutional Neural Networks for NLP, this is a blog article with high quality Python code and notebooks explaining and implementing Dos Santos and Zadrozny's model of POS tagging using character-level CNN.
- A Universal Part-of-Speech Tagset by Slav Petrov, Dipanjan Das and Ryan McDonald, LREC, 2012.

- Read A good POS tagger in 200 lines of Python, an Averaged Perceptron implementation with good features, fast, reaches 97% accuracy (by Matthew Honnibal).

**20 Nov 2016 - 11 Dec 2016: Basic Statistical Concepts / Supervised Machine Learning**

**Things to do:**
- Bayesian concept learning from Tenenbaum 1999 - reported in Murphy 2012 Chapter 3.
- Read Deep Learning by Goodfellow, Bengio and Courville, 2016 Chapters 3 and 5. (About 5 hours)
- Watch the 15-minute video (ML 7.1) Bayesian inference - A simple example by Mathematical Monk.
- Make sure you have installed numpy and scipy in your Python environment. Easiest way is to use the Anaconda distribution.
- Read Introduction to statistical data analysis in Python – frequentist and Bayesian methods from Cyril Rossant, and execute the associated Jupyter notebooks. (About 4 hours)
- Learn how to use Scipy and Numpy - Chapter 1 in Scipy Lectures (About 5 hours)
- Write Python code using numpy, scipy and matplotlib.pyplot to draw the graphs of the Beta distribution that appear in the lecture notes (About 1 hour)
- Given a dataset for a Bernoulli distribution (that is, a list of N bits), generate a sequence of N graphs illustrating the sequential update process, starting from a uniform prior until the Nth posterior distribution. Each graph indicates the distribution over μ, the parameter of the Bernoulli distribution (which takes values in the [0..1] range). (About 2 hours)
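This sequential update has a closed form by conjugacy: observing bit x turns Beta(a, b) into Beta(a + x, b + 1 - x). A sketch that tracks only the posterior parameters (plotting each Beta density, e.g. with scipy.stats.beta(a, b).pdf on a grid over [0, 1], is then direct):

```python
def sequential_beta_updates(bits, a=1.0, b=1.0):
    """Return the (a, b) Beta posterior parameters after each observed
    bit, starting from the uniform Beta(1, 1) prior."""
    params = [(a, b)]
    for x in bits:
        a, b = a + x, b + (1 - x)
        params.append((a, b))
    return params

posteriors = sequential_beta_updates([1, 1, 0, 1])
for a, b in posteriors:
    # The posterior mean of mu is a / (a + b).
    print(a, b, a / (a + b))
```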
- Learn how to draw Dirichlet samples using numpy.random.mtrand.dirichlet. A sample from a Dirichlet distribution is a multinomial distribution. Understand the example from the Wikipedia article on Dirichlet distributions about string cutting:

```python
import numpy as np
import matplotlib.pyplot as plt

s = np.random.dirichlet((10, 5, 3), 20).transpose()
plt.barh(range(20), s[0])
plt.barh(range(20), s[1], left=s[0], color='g')
plt.barh(range(20), s[2], left=s[0]+s[1], color='r')
plt.title("Lengths of Strings")
plt.show()
```

(About 2 hours)
- Compute the MLE estimator μ_MLE of a binomial distribution Bin(m|N, μ).
- **Mixture Priors**: assume we contemplate two possible modes for the value of our Beta-Binomial model parameter μ. A flexible method to encode this belief is to consider that our prior over the value of μ has the form:

  μ ~ k_1 Beta(a, b) + k_2 Beta(c, d), where k_1 + k_2 = 1
  m ~ Bin(μ, N)

  A prior over μ of this form is called a *mixture prior*, as it is a linear combination of simple priors.
  - Prove that the mixture prior is a proper probability distribution.
  - Compute the posterior density over μ for a dataset where (N=10, m=8, N-m=2), with k_1=0.8, k_2=0.2 and prior distributions Beta(1,10) and Beta(10,1). Write Python code to draw the prior density of μ and its posterior density. (About 2 hours)
- Experiment with a very simple form of Stochastic Gradient Descent (SGD) with a custom loss function by running this notebook (notebook source here). More examples are available on the Autograd project homepage.
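The mixture-prior posterior stays in closed form: each Beta component updates conjugately, and the mixture weights are rescaled by each component's marginal likelihood of the data. A sketch using math.lgamma for the log-Beta function:

```python
from math import lgamma, exp

def log_beta(a, b):
    """log B(a, b) computed via the log-Gamma function."""
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def mixture_posterior(components, N, m):
    """components: list of (k, a, b) - mixture weight and Beta params.
    After observing m successes in N Bernoulli trials, each component
    becomes Beta(a+m, b+N-m) and its weight is rescaled by the
    component's marginal likelihood B(a+m, b+N-m) / B(a, b)."""
    raw = [k * exp(log_beta(a + m, b + N - m) - log_beta(a, b))
           for (k, a, b) in components]
    Z = sum(raw)
    return [(w / Z, a + m, b + N - m)
            for w, (k, a, b) in zip(raw, components)]

# The exercise's data: N=10, m=8, prior 0.8*Beta(1,10) + 0.2*Beta(10,1)
post = mixture_posterior([(0.8, 1, 10), (0.2, 10, 1)], N=10, m=8)
for k, a, b in post:
    print(round(k, 4), a, b)  # almost all mass shifts to the Beta(10,1) mode
```

Drawing the prior and posterior densities then amounts to plotting the correspondingly weighted sums of Beta pdfs.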

**18 Dec 16 Classification**
- Read Chapter 6: Learning to Classify Text of the NLTK Book (About 3 hours).
- Read Generative and Discriminative Classifiers: Naive Bayes and Logistic Regression, Tom Mitchell, 2015. (About 3 hours)
- Watch (ML 8.1) Naive Bayes Classification, a 15-minute video on Naive Bayes Classification by Mathematical Monk, and the following chapter (ML 8.3) about Bayesian Naive Bayes (20 minutes).
- Read and execute the tutorial on Using Theano for Logistic Regression

- Explore the documentation of the nltk.classify module.
- Read the code of the NLTK Naive Bayes classifier and run nltk.classify.naivebayes.demo()
- Read the code of the NLTK classifier demos: names_demo and wsd_demo.
- Read the documentation on feature extraction in Scikit-learn.
- Run the example on document classification in Scikit-learn: Notebook (ipynb source).
- Experiment with the example of classifications in this iPython notebook (code) which shows how to run NLTK classifiers in a variety of ways.
- Experiment with the Reuters Dataset notebook (code) illustrating document classification with bag of words features and TF-IDF transformation.
- The Theano tutorial on Logistic Regression is applied to a vision task (MNIST hand-written digit recognition). Apply the Theano classes to the task of Text Classification using the same dataset as Scikit-learn tutorial on text classification on the 20 newsgroup dataset.
- (Advanced) From Logistic regression to deep nets is a step by step notebook illustrating how to modify a Logistic Regression classifier into a deep net with good explanation of regularization, SGD, and back-propagation. It uses a simplified dataset of the MNIST digits dataset called the "small digits dataset". Apply the same code to the task of text classification on the Reuters or 20Newsgroup datasets.

**25 Dec 16 - 08 Jan 17 Sequence Classification**
- Read Michael Collins's notes on Language Modeling: Markov models for fixed length sequences, for variable length sequences, trigram language models, MLE estimates, perplexity over n-gram models, smoothing of n-gram estimates with linear interpolation.
- Read Michael Collins's notes on Tagging Problems, and Hidden Markov Models: POS tagging and Named Entity Recognition as tagging problems (with BIO tag encoding), generative and noisy channel models, generative tagging models, trigram HMM, conditional independence assumptions in HMMs, estimating the parameters of an HMM, decoding HMMs with the Viterbi algorithm.

**Things to do:**
- Implement the bigram and trigram language model described in Language Modeling with the *discounting method* described in section 1.4.2, in Python. Test it on the nltk.corpus.gutenberg dataset, split into training, development and test sets of 80%, 10% and 10%. Optimize the value of the β parameter of the method on the development set. Compare the perplexity of the bigram and trigram models on the test dataset (as defined in section 1.3.3).
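A minimal sketch of the bigram case (hedged: the exact discounting scheme is the one in Collins's section 1.4.2; here the β mass freed from seen bigrams is redistributed over unseen continuations in proportion to their unigram counts):

```python
from collections import Counter

class DiscountedBigramLM:
    """Sketch of a discounted bigram model: subtract beta from every
    seen bigram count and redistribute the freed mass over unseen
    continuations in proportion to their unigram counts."""
    def __init__(self, tokens, beta=0.5):
        self.beta = beta
        self.unigrams = Counter(tokens)
        self.bigrams = Counter(zip(tokens, tokens[1:]))

    def prob(self, v, w):
        seen = {b for (a, b) in self.bigrams if a == v}
        if (v, w) in self.bigrams:
            return (self.bigrams[(v, w)] - self.beta) / self.unigrams[v]
        # Missing probability mass alpha(v), shared among unseen words.
        alpha = self.beta * len(seen) / self.unigrams[v]
        unseen_mass = sum(c for u, c in self.unigrams.items() if u not in seen)
        return alpha * self.unigrams[w] / unseen_mass

tokens = "the cat sat on the mat".split()
lm = DiscountedBigramLM(tokens, beta=0.5)
print(lm.prob("the", "cat"))  # (1 - 0.5) / 2 = 0.25
print(sum(lm.prob("the", w) for w in lm.unigrams))  # sums to 1.0
```

Perplexity is then 2 ** (-mean log2-probability) over the test tokens; the trigram case replaces the context v by a pair of words.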

- Explain why the problem of decoding (see section 2.5.4 in Tagging Problems, and Hidden Markov Models) requires a dynamic programming algorithm (Viterbi), while we did not need such a decoding step in the previous chapter when we discussed Logistic Regression and Naive Bayes.
- Implement Algorithm 2.5 (Viterbi with backpointers) from Tagging Problems, and Hidden Markov Models in Python. Test it on the Brown POS tagging dataset, using MLE for tag transition estimation (parameters q) and a discounting language model per tag in the Universal tagset for the emission parameters e(x|tag).
- Do Assignment 3 from Richard Johansson's course on Machine Learning for NLP, 2014. Read the assignment 3 material and Lecture 6: predicting structured objects. Start from the excellent Python implementation of the structured perceptron algorithm.
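Algorithm 2.5 is a max-product dynamic program with backpointers; a self-contained sketch on a toy bigram HMM (the q and e values below are hand-set for illustration, not estimates from Brown):

```python
def viterbi(obs, states, q, e, start="*"):
    """q[(prev, cur)]: transition prob, e[(state, word)]: emission prob.
    Returns the most likely state sequence for obs (bigram transitions
    for brevity; Algorithm 2.5 uses trigrams)."""
    V = [{s: q.get((start, s), 0) * e.get((s, obs[0]), 0) for s in states}]
    bp = []
    for word in obs[1:]:
        scores, back = {}, {}
        for s in states:
            best_prev = max(states, key=lambda p: V[-1][p] * q.get((p, s), 0))
            scores[s] = V[-1][best_prev] * q.get((best_prev, s), 0) * e.get((s, word), 0)
            back[s] = best_prev
        V.append(scores)
        bp.append(back)
    # Follow the backpointers from the best final state.
    path = [max(states, key=lambda s: V[-1][s])]
    for back in reversed(bp):
        path.append(back[path[-1]])
    return list(reversed(path))

states = ["DET", "NOUN"]
q = {("*", "DET"): 0.9, ("*", "NOUN"): 0.1,
     ("DET", "NOUN"): 0.9, ("DET", "DET"): 0.1,
     ("NOUN", "DET"): 0.4, ("NOUN", "NOUN"): 0.6}
e = {("DET", "the"): 0.9, ("NOUN", "dog"): 0.8, ("NOUN", "the"): 0.1}
print(viterbi(["the", "dog"], states, q, e))  # ['DET', 'NOUN']
```

For real sentences, work in log space to avoid underflow.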

**08 Jan 17 Deep Learning for NLP**
- Read A Primer on Neural Network Models for Natural Language by Yoav Goldberg, Oct 2015 up to Page 35.
- Natural Language Understanding with Distributed Representation by Kyunghyun Cho, Nov 2015 - Chapters 1 to 3.
- Deep Learning by Goodfellow, Bengio and Courville, 2016 Chapters 6, 7, 8 and 10.
- The Unreasonable Effectiveness of Recurrent Neural Networks by Andrej Karpathy, May 2015 and the analysis of the same data The unreasonable effectiveness of Character-level Language Models (and why RNNs are still cool) by Yoav Goldberg, June 2015
- Calculus on Computational Graphs: Backpropagation, by Chris Olah, Aug 2015.
- Understanding LSTM Networks, by Chris Olah, Aug 2015.
- WildML articles by Denny Britz - Sep 2015 - Jan 2016
These include tutorials and Python notebooks of incremental complexity covering topics in Deep learning for NLP.

- Implementing a Neural Network from Scratch in Python - an Introduction
- Speeding up your Neural Network with Theano and the GPU
- Recurrent Neural Networks Tutorial, Part 1 - Introduction to RNNs
- Recurrent Neural Networks Tutorial, Part 2 - Implementing a RNN with Python, Numpy and Theano
- Recurrent Neural Networks Tutorial, Part 3 - Backpropagation Through Time and Vanishing Gradients
- Recurrent Neural Network Tutorial, Part 4 - Implementing a GRU/LSTM RNN with Python and Theano
- Understanding Convolutional Neural Networks for NLP
- Implementing a CNN for Text Classification in TensorFlow
- Attention and Memory in Deep Learning and NLP

- Heavy Metal and Natural Language Processing - Part 2, Iain Barr, Sept 2016: experiments with Language Models - ngrams and RNNs - to generate Deep Metal lyrics. Demo on deepmetal.io. Good intro material on language models, examples with char-models and word-models - starts with n-grams and smoothing, then RNN using Keras. Implementation - including notebook and pre-trained models.

**15 Jan 17 Word Embeddings**
- Read CS 224D: Deep Learning for NLP - Lecture Notes: Part I by Richard Socher, 2015 and the links of the first chapter in the Deep Learning for NLP course.
- Read Efficient Estimation of Word Representations in Vector Space from Mikolov et al (2013) and the Word2vec site.
- Read TensorFlow's tutorial on Word Embeddings.
- Read Word2Vec Explained by Yoav Goldberg and Omer Levy, 2014.
- Read Neural Word Embedding as Implicit Matrix Factorization by Levy and Goldberg, 2014.
- Read Improving Distributional Similarity with Lessons Learned from Word Embeddings by Levy, Goldberg and Dagan, 2015.
- Read Linguistic Regularities in Continuous Space Word Representations by Mikolov et al, 2013.
- Play with Wevi - a word embedding visual inspector in Javascript. Illustrates how words are mapped to vectors visually.
- Read king - man + woman is queen; but why? by Piotr Migdal, Jan 2017. Good summary of word embeddings with interactive visualization tools, including word2viz word analogies explorer.
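The king - man + woman arithmetic is vector addition followed by a cosine nearest-neighbor search; a sketch on hand-made toy vectors (an assumption for illustration - real vectors would come from a trained word2vec/Gensim model):

```python
import math

# Toy vectors: dim 0 ~ "royalty", dim 1 ~ "femaleness", dim 2 ~ noise.
vectors = {
    "king":  [0.9, 0.1, 0.3],
    "queen": [0.9, 0.9, 0.3],
    "man":   [0.1, 0.1, 0.2],
    "woman": [0.1, 0.9, 0.2],
    "apple": [0.0, 0.0, 0.9],
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def analogy(a, b, c):
    """Word closest (by cosine) to vec(a) - vec(b) + vec(c), inputs excluded."""
    target = [x - y + z for x, y, z in zip(vectors[a], vectors[b], vectors[c])]
    candidates = [w for w in vectors if w not in (a, b, c)]
    return max(candidates, key=lambda w: cosine(target, vectors[w]))

print(analogy("king", "man", "woman"))  # -> queen
```

Gensim's `most_similar(positive=["king", "woman"], negative=["man"])` performs the same search over trained embeddings.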

**Things to do:**
- Install Gensim in your environment (run "conda install gensim") and run the Gensim Word2vec tutorial.
- Experiment with Sense2vec with spaCy and Gensim (source code), a tool to compute word embeddings taking into account multi-word expressions and POS tags.
- Register for the Udacity Deep Learning course (free) by Vincent Vanhoucke, and study the chapter "Deep Models for Text and Sequences", then do Assignment 5 (ipynb notebook) "Train a Word2Vec skip-gram model over Text8 data".
- Continue with Assignment 6 (an ipynb notebook) "Train a LSTM character model over Text8 data".
- Experiment with the Kaggle competition for using Google's word2vec package for sentiment analysis.

**22 Jan 17 Syntax and Parsing**
- Context Free Grammars Parsing
- Probabilistic Context Free Grammars Parsing
- Michael Collins's lecture on CFGs and CKY
- Michael Collins's lecture on Lexicalized PCFGs:
- Why CFGs are not adequate for describing treebanks: lack of sensitivity to lexical items + lack of sensitivity to structural preferences.
- How to lexicalize CFGs with Head propagation.
- How to parse a lexicalized PCFG.
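The CKY algorithm from the lectures above is a chart-filling dynamic program; a sketch on a tiny hand-written PCFG in Chomsky Normal Form, keeping the best probability per span and nonterminal:

```python
from collections import defaultdict

def cky(words, binary, lexical):
    """binary: {(A, B, C): prob} for rules A -> B C,
    lexical: {(A, w): prob} for rules A -> w.
    Returns chart[(i, j)][A] = best probability that A spans words[i:j]."""
    n = len(words)
    chart = defaultdict(dict)
    for i, w in enumerate(words):
        for (A, word), p in lexical.items():
            if word == w:
                chart[(i, i + 1)][A] = p
    for span in range(2, n + 1):
        for i in range(0, n - span + 1):
            j = i + span
            for k in range(i + 1, j):      # split point
                for (A, B, C), p in binary.items():
                    if B in chart[(i, k)] and C in chart[(k, j)]:
                        score = p * chart[(i, k)][B] * chart[(k, j)][C]
                        if score > chart[(i, j)].get(A, 0):
                            chart[(i, j)][A] = score
    return chart

# Hypothetical toy grammar and probabilities:
binary = {("S", "NP", "VP"): 1.0, ("VP", "V", "NP"): 1.0}
lexical = {("NP", "she"): 0.5, ("NP", "fish"): 0.5, ("V", "eats"): 1.0}
chart = cky("she eats fish".split(), binary, lexical)
print(chart[(0, 3)]["S"])  # 1.0 * 0.5 * (1.0 * 1.0 * 0.5) = 0.25
```

Storing a backpointer (k, B, C) alongside each best score recovers the tree itself.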

- NLTK tools for PCFG parsing
- Notes on computing KL-divergence
- Dependency Parsing:
- Dependency Parsing by Graham Neubig. Graham's teaching page with github page for exercises.
- Dependency Parsing: Past, Present, and Future, Chen, Li and Zhang, 2014 (Coling 2014 tutorial)
- NLTK Dependency Parsing Howto
- Parsing English with 500 lines of Python, an implementation by Matthew Honnibal of Training Deterministic Parsers with Non-Deterministic Oracles, Yoav Goldberg and Joakim Nivre, TACL 2013. (Complete Python code)
- Neural Network Dependency Parser, Chen and Manning 2014. A Java implementation of a Neural Network Dependency Parser with Unlabelled accuracy of 92% and Labelled accuracy of 90%.
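The KL-divergence computation from the notes above is short once distributions are dicts (a sketch; by convention 0·log 0 = 0, and the divergence is infinite where q lacks support that p has):

```python
from math import log

def kl_divergence(p, q):
    """D_KL(p || q) = sum_x p(x) * log(p(x) / q(x)), in nats.
    p and q are dicts mapping outcomes to probabilities."""
    total = 0.0
    for x, px in p.items():
        if px == 0:
            continue             # 0 * log 0 is taken to be 0
        if q.get(x, 0) == 0:
            return float("inf")  # q has no support where p does
        total += px * log(px / q[x])
    return total

p = {"a": 0.5, "b": 0.5}
q = {"a": 0.9, "b": 0.1}
print(kl_divergence(p, p))  # 0.0
print(kl_divergence(p, q))  # 0.5*log(0.5/0.9) + 0.5*log(0.5/0.1) ≈ 0.511
```

Note the asymmetry: kl_divergence(q, p) gives a different value, which is why KL is not a distance metric.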

**Later - Summarization**
- Automatic Text Summarization
- A Survey of Text Summarization Techniques, Ani Nenkova and Kathleen McKeown, Mining Text Data, 2012 - Springer

**Later - Topic Modeling and Latent Dirichlet Allocation**
- David Blei's Lecture on LDA Sept 2009, Part 1 (1h30) and Part 2 (1h30)
- Slides of Blei's lecture

- NLTK:
NLTK is a Python-based toolkit with wide coverage of NLP techniques - both statistical and knowledge-based.
- Dynet - a Python / C++ library for Deep Learning.
- Theano - a Python library for Deep Learning.
- Torch - a Lua library for Deep Learning.
- TensorFlow - a Python library for Deep Learning.
- Keras - a high-level Python library on top of Tensorflow or Theano for Deep Learning.
- Scikit-learn - a Python library for Machine Learning. Presents a uniform interface for many ML tasks (fit, transform). Good text processing example ( Working with text documents).

- Notebooks
- Practical Data Science in Python by Radim Rehurek. This is an iPython notebook demonstrating how to write classifiers in Python (using Scikit-Learn). The concrete example is a spam detector on SMS messages.
- Out-of-core classification of text documents, scikit-learn example showing how to perform document classification on the Reuters-21578 database.
- Sample pipeline for text feature extraction and evaluation, performs feature selection and compares performance on document classification on the 20 newsgroup dataset.
- The Travelling Salesperson Problem, this notebook demonstrates a sequence of exact and approximate algorithms to solve the Travelling Salesperson Problem. It is a great introduction to Python programming.
- Cooking with Pandas by Julia Evans (2013), introduction to the Pandas Python library to manipulate data with aggregations and queries. The updated Git repository is Panda Cookbook.
- Analyzing a Twitter Dataset with Pandas, by Gregory Saxton (2015).
- SciPy 2015 SkLearn Tutorial by Andreas Muller. Comprehensive tutorial to Machine Learning using ScikitLearn with relevant examples on Text analysis.
- Language Model GRU with Python and Theano, Part 4 of the WildML RNN Tutorial

- Online Courses
- Natural Language Processing, by Jason Eisner, Johns Hopkins University, 2014
- Statistical Methods for NLP, by Joakim Nivre, Uppsala, 2012
- Chris Manning course (Stanford): NLP (2011)
- Julia Hockenmaier course: Advanced NLP: Theory and applications of Bayesian models
- Marti Hearst course: Applied NLP (Berkeley, 2006)

- Tutorials
- A Statistical MT Tutorial Workbook, Kevin Knight, 1999
- Bayesian Inference with Tears, Kevin Knight, 2009
- Structured Prediction for Natural Language Processing, Noah Smith, ICML 2009
- Classification for NLP, Dan Klein, ACL 2007
- Structured Bayesian Nonparametric Models with Variational Inference, Dan Klein and Percy Liang, ACL 2007
- Gibbs Sampling for the Uninitiated, Philip Resnik and Eric Hardisty, June 2010

- Books:
- Introduction to Natural Language Processing, by Steven Bird, Ewan Klein and Edward Loper, 2009, distributed on the NLTK site.
- Pattern Recognition and Machine Learning, Chris Bishop, 2007.
- Information Theory, Inference, and Learning Algorithms, David J.C. MacKay, 2003.
- The LingPipe Java library suite - by Bob Carpenter. Information extraction and data mining tools.
- Text Analysis with LingPipe, Bob Carpenter, 2011. A practical book on using Lingpipe. Covers strings, streams, regular expressions, corpora readers, tokenization, language models, classifiers and Latent Dirichlet Allocation.

- Python
- Google's intro to Python
- Python Ecosystem: notes on installing Python in your environment
- Introduction to Python Generators
- Scikit Learn: Machine Learning in Python

- Extracting text from HTML:
- The JusText Python package does a very good job of cleaning up HTML by removing boilerplate HTML code around "interesting text".
- Decruft is a Python implementation of the Readability algorithm - easy to use, works well.
- Readability browser bookmarklet: Readability is a browser bookmarklet that wipes out all that junk so you can have a more enjoyable reading experience. It works with all the latest browsers and its success rate is pretty respectable (we'd guess over 90% of web sites are handled properly). It is implemented in JavaScript and relies on comparing the HTML link density of HTML elements: clean text has lower density than junk.
- CleanEval homepage: CLEANEVAL is a shared task and competitive evaluation on the topic of cleaning arbitrary web pages, with the goal of preparing web data for use as a corpus, for linguistic and language technology research and development. (2007)
- Python code to clean HTML pages -- based on the idea that elements in HTML with "clean" text have less HTML link density than "useless" elements. (2008)

- Bob Carpenter's Computational Linguistics Syllabus - Excellent collection of references in the field.
- The NLTK Toolkit - Python toolkit with thorough tutorial on all Natural Language topics and access to online datasets.
- Corpus Linguistics, Resources and Normalisation, Sylvain Pogodalla 2008