Quiz 04: Word Embeddings
This quiz covers material from the fourth lecture, on Word Embeddings.
-
Consider the task of predicting the POS tag of a word, using a model similar to the one we discussed in class for
predicting the language of documents. Given a tagset of size T, what task-specific word embeddings would the model
learn? What would be the dimension of those embeddings?
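(For concreteness, here is a minimal sketch, assuming a PyTorch setup and hypothetical vocabulary and tagset sizes, of the kind of model the question refers to: each word is mapped to a vector of scores over the tagset, which is fed directly into a softmax.)

```python
import torch
import torch.nn as nn

VOCAB_SIZE = 10_000  # hypothetical vocabulary size
T = 17               # hypothetical tagset size

class WordTagger(nn.Module):
    def __init__(self, vocab_size: int, num_tags: int):
        super().__init__()
        # one vector of tag scores per word; these vectors are the
        # task-specific representations the model learns
        self.scores = nn.Embedding(vocab_size, num_tags)

    def forward(self, word_ids: torch.Tensor) -> torch.Tensor:
        # log-probabilities over the tagset for each input word
        return torch.log_softmax(self.scores(word_ids), dim=-1)

model = WordTagger(VOCAB_SIZE, T)
print(model(torch.tensor([42])).shape)  # torch.Size([1, 17])
```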
-
List three key properties of word embedding representations which distinguish them from one-hot encodings.
-
Give three examples of systematic lexical semantic relations (that is, semantic relations which may hold between any pair of words).
-
Explain the intuition behind distributional methods: why do we believe that solving the task of predicting a word
given its context yields embeddings which capture lexical semantics?
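(As a reference point, a toy illustration of the distributional idea, using an invented corpus: words that appear in similar contexts accumulate similar context-count vectors.)

```python
from collections import Counter

corpus = "the cat sat on the mat the dog sat on the rug".split()
WINDOW = 2  # context words taken from each side

def context_counts(target: str) -> Counter:
    # count the words occurring within WINDOW positions of the target
    counts = Counter()
    for i, word in enumerate(corpus):
        if word == target:
            lo, hi = max(0, i - WINDOW), min(len(corpus), i + WINDOW + 1)
            counts.update(corpus[lo:i] + corpus[i + 1:hi])
    return counts

print(context_counts("cat"))  # Counter({'the': 1, 'sat': 1, 'on': 1})
print(context_counts("dog"))  # nearly the same contexts as "cat"
```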
-
Describe the Continuous Bag of Words (CBoW) model for learning word embeddings.
List two ways in which the task modeled by CBoW differs from learning an n-gram language model.
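(For reference, a minimal CBoW sketch, assuming PyTorch and hypothetical dimensions: the embeddings of the context words are averaged, ignoring their order, and used to score every word in the vocabulary as the center word.)

```python
import torch
import torch.nn as nn

VOCAB_SIZE = 10_000  # hypothetical vocabulary size
EMB_DIM = 100        # hypothetical embedding dimension

class CBoW(nn.Module):
    def __init__(self, vocab_size: int, emb_dim: int):
        super().__init__()
        self.embeddings = nn.Embedding(vocab_size, emb_dim)
        self.out = nn.Linear(emb_dim, vocab_size)

    def forward(self, context_ids: torch.Tensor) -> torch.Tensor:
        # average the context embeddings (word order is discarded),
        # then score every vocabulary word as the center word
        avg = self.embeddings(context_ids).mean(dim=1)
        return torch.log_softmax(self.out(avg), dim=-1)

model = CBoW(VOCAB_SIZE, EMB_DIM)
context = torch.tensor([[12, 7, 55, 3]])  # ids of 4 context words
print(model(context).shape)  # torch.Size([1, 10000])
```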
Last modified 09 Nov 2018