{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Assignment 1.2: Word2vec preprocessing (20 points)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Preprocessing is not the most exciting part of NLP, but it is still one of the most important ones. Your task is to preprocess raw text (you can use your own, or [this one](http://mattmahoney.net/dc/text8.zip). For this task text preprocessing mostly consists of:\n", "\n", "1. cleaning (mostly, if your dataset is from social media or parsed from the internet)\n", "1. tokenization\n", "1. building the vocabulary and choosing its size. Use only high-frequency words, change all other words to UNK or handle it in your own manner. You can use `collections.Counter` for that.\n", "1. assigning each token a number (numericalization). In other words, make word2index и index2word objects.\n", "1. data structuring and batching - make X and y matrices generator for word2vec (explained in more details below)\n", "\n", "**ATTN!:** If you use your own data, please, attach a download link. \n", "\n", "Your goal is to make **Batcher** class which returns two numpy tensors with word indices. It should be possible to use one for word2vec training. You can implement batcher for Skip-Gram or CBOW architecture, the picture below can be helpful to remember the difference.\n", "\n", "![text](https://raw.githubusercontent.com/deepmipt/deep-nlp-seminars/651804899d05b96fc72b9474404fab330365ca09/seminar_02/pics/architecture.png)\n", "\n", "There are several ways to do it right. Shapes could be `x_batch.shape = (batch_size, 2*window_size)`, `y_batch.shape = (batch_size,)` for CBOW or `(batch_size,)`, `(batch_size, 2*window_size)` for Skip-Gram. You should **not** do negative sampling here.\n", "\n", "They should be adequately parametrized: CBOW(window_size, ...), SkipGram(window_size, ...). You should implement only one batcher in this task; and it's up to you which one to chose.\n", "\n", "Useful links:\n", "1. [Word2Vec Tutorial - The Skip-Gram Model](http://mccormickml.com/2016/04/19/word2vec-tutorial-the-skip-gram-model/)\n", "1. [Efficient Estimation of Word Representations in Vector Space](https://arxiv.org/pdf/1301.3781.pdf)\n", "1. [Distributed Representations of Words and Phrases and their Compositionality](http://papers.nips.cc/paper/5021-distributed-representations-of-words-and-phrases-and-their-compositionality.pdf)\n", "\n", "You can write the code in this notebook, or in a separate file. It can be reused for the next task. The result of your work should represent that your batch has a proper structure (right shapes) and content (words should be from one context, not some random indices). 
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.3"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}