Quiz 04: Word Embeddings

This quiz covers material from the fourth lecture, on Word Embeddings.
  1. Consider the task of predicting the POS tag of a word, using a model similar to the one we discussed in class for predicting the language of a document. Given a tagset of size T, what task-specific word embeddings would the model learn? What would be the dimension of those embeddings? (A reference sketch of such a single-word classifier follows this question.)





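For reference, here is a minimal sketch, assuming a PyTorch-style implementation, of a single-word classifier in the spirit of the document-language model; the class name, vocabulary size, embedding dimension, and number of classes are illustrative placeholders, not values from the lecture.

    # Hypothetical sketch: a single-word classifier whose learned lookup table
    # can be read as task-specific word embeddings.
    import torch
    import torch.nn as nn

    class WordClassifier(nn.Module):
        def __init__(self, vocab_size, num_classes, embedding_dim):
            super().__init__()
            # One learned vector per word in the vocabulary.
            self.embed = nn.Embedding(vocab_size, embedding_dim)
            # Maps a word's vector to a score for each output class.
            self.out = nn.Linear(embedding_dim, num_classes)

        def forward(self, word_ids):
            return self.out(self.embed(word_ids))

    # Placeholder numbers: 10,000 word types, 5 output classes, 50-dimensional vectors.
    model = WordClassifier(vocab_size=10_000, num_classes=5, embedding_dim=50)
    logits = model(torch.tensor([42]))  # class scores for word id 42
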
  2. List three key properties of word embedding representations that distinguish them from one-hot encodings.





  3. Give three examples of systematic lexical semantic relations (that is, semantic relations which may hold between any pair of words):








  4. Explain the intuition behind distributional methods: why do we believe that solving the task of predicting a word given its context yields embeddings that capture lexical semantics? (A toy count-based sketch follows this question, for intuition.)








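For intuition only, here is a tiny count-based sketch (a simpler cousin of the prediction task in the question): words that occur in similar contexts end up with similar context-count vectors. The toy corpus, window size, and similarity measure are all made up for illustration.

    # Count how often each word co-occurs with nearby words, then compare words
    # by the similarity of their context-count vectors.
    from collections import Counter, defaultdict
    import math

    corpus = [
        "the cat sat on the mat",
        "the dog sat on the rug",
        "the cat chased the dog",
    ]

    window = 2
    cooc = defaultdict(Counter)
    for sentence in corpus:
        tokens = sentence.split()
        for i, word in enumerate(tokens):
            for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
                if j != i:
                    cooc[word][tokens[j]] += 1

    def cosine(a, b):
        keys = set(a) | set(b)
        dot = sum(a[k] * b[k] for k in keys)
        na = math.sqrt(sum(x * x for x in a.values()))
        nb = math.sqrt(sum(x * x for x in b.values()))
        return dot / (na * nb)

    # "cat" and "dog" share contexts (the, sat, on, ...), so they come out more
    # similar to each other than "cat" is to "mat".
    print(cosine(cooc["cat"], cooc["dog"]), cosine(cooc["cat"], cooc["mat"]))
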
  5. Describe the Continuous Bag of Words (CBoW) model for learning word embeddings. List two ways in which the task modeled by CBoW differs from learning an n-gram language model. (An illustrative CBoW sketch follows this question.)










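For reference, a minimal CBoW-style sketch, again assuming PyTorch; the hyperparameters, the example word ids, and the use of a full softmax output layer are illustrative assumptions rather than the exact formulation from the lecture.

    # CBoW sketch: average the embeddings of the context words (a "bag", so word
    # order is discarded) and predict the center word from that average.
    import torch
    import torch.nn as nn

    class CBOW(nn.Module):
        def __init__(self, vocab_size, embedding_dim):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, embedding_dim)  # input word vectors
            self.out = nn.Linear(embedding_dim, vocab_size)       # scores over the vocabulary

        def forward(self, context_ids):
            # context_ids: (batch, context_size) indices of the surrounding words
            bag = self.embed(context_ids).mean(dim=1)
            return self.out(bag)

    # Placeholder numbers and word ids: predict the center word from 4 context words.
    model = CBOW(vocab_size=10_000, embedding_dim=100)
    logits = model(torch.tensor([[12, 7, 431, 9]]))  # (1, 10000) center-word scores
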
Last modified 09 Nov 2018