{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Introduction\n", "Welcome to this interactive book on Statistical Natural Language Processing (NLP). NLP is a field that lies in the intersection of Computer Science, Artificial Intelligence (AI) and Linguistics with the goal to enable computers to solve tasks that require natural language _understanding_ and/or _generation_. Such tasks are omnipresent in most of our day-to-day life: think of [Machine Translation](https://www.bing.com/translator/), Automatic [Question Answering](https://www.youtube.com/watch?v=WFR3lOm_xhE) or even basic [Search](https://www.google.co.uk). All these tasks require the computer to process language in one way or another. But even if you ignore these practical applications, many people consider language to be at the heart of human intelligence, and this makes NLP (and it's more linguistically motivated cousin, [Computational Linguistics](http://en.wikipedia.org/wiki/Computational_linguistics)), important for its role in AI alone. \n", "\n", "### Statistical NLP\n", "NLP is a vast field with beginnings dating back to at least the 1960s, and it is difficult to give a full account of every aspect of NLP. Hence, this book focuses on a sub-field of NLP termed Statistical NLP (SNLP). In SNLP computers aren't directly programmed to process language; instead, they _learn_ how language should be processed based on the _statistics_ of a corpus of natural language. For example, a statistical machine translation system's behaviour is affected by the statistics of a _parallel_ corpus where each document in one language is paired with its translation in another. This approach has been dominating NLP research for almost two decades now, and has seen widespread in industry too. Notice that while Statistics and Machine Learning are, in general, quite different fields, for the purposes of this book we will mostly identify Statistical NLP with Machine Learning-based NLP. \n", "\n", "### Structure of this Book\n", "We think that to understand and apply SNLP in practice one needs knowledge of the following:\n", "\n", " * Tasks (e.g. Machine Translation, Syntactic Parsing)\n", " * Methods & Frameworks (e.g. Discriminative Training, Linear Chain models, Representation Learning)\n", " * Implementations (e.g. NLP data structures, efficient dynamic programming)\n", " \n", "The book is somewhat structured around the task dimension. That is, we will explore different methods, frameworks and their implementations, usually in the context of specific NLP applications. \n", "\n", "On a higher level the book is divided into *themes* that roughly correspond to learning paradigms within SNLP, and which follow a somewhat chronological order: we will start with generative learning, then discuss discriminative learning, then cover forms of weaker supervision to conclude with representation and deep learning. As an overarching theme we will use *structured prediction*, a formulation of machine learning that accounts for the fact that machine learning outputs are often not just classes, but structured objects such as sequences, trees or general graphs. 
"\n", "### Table of Contents\n", "* COMPGI19 Course Logistics: [slides](chapters/compgi19.ipynb)\n", " * Assignment Info: [slides](chapters/assignments.ipynb)\n", "* Introduction: [slides](chapters/introduction.ipynb)\n", "* Structured Prediction: [notes](chapters/structured_prediction.ipynb), [slides](chapters/structured_prediction_slides.ipynb), [exercises](exercises/structured_prediction.ipynb)\n", "* Tokenisation and Sentence Splitting: [notes](chapters/tokenization.ipynb), [slides](chapters/tokenization_slides.ipynb), [exercises](exercises/tokenization.ipynb)\n", "* Generative Learning:\n", " * Language Models (MLE, smoothing): [notes](chapters/language_models.ipynb), [slides](chapters/language_models_slides.ipynb), [exercises](exercises/language_models.ipynb)\n", " * Maximum Likelihood Estimation: [notes](chapters/mle.ipynb), [slides](chapters/mle_slides.ipynb)\n", " * Machine Translation (EM algorithm, beam-search): [notes](chapters/word_mt.ipynb), [slides](chapters/word_mt_slides.ipynb), [exercises](exercises/mt.ipynb)\n", " * Constituent Parsing (PCFG, dynamic programming): [notes](chapters/parsing.ipynb), [slides](chapters/parsing_slides.ipynb), [exercises](exercises/parsing.ipynb)\n", " * Dependency Parsing (transition-based parsing): [notes](chapters/Transition-based dependency parsing.ipynb), [slides](chapters/transition_slides.ipynb)\n", "* Discriminative Learning:\n", " * Text Classification (logistic regression): [notes](chapters/doc_classify.ipynb), [slides](chapters/doc_classify_slides.ipynb)\n", " * Sequence Labelling (linear chain models): [notes](chapters/sequence_labeling.ipynb), [slides](chapters/sequence_labeling_slides.ipynb)\n", "* Weak Supervision:\n", " * Relation Extraction (distant supervision, semi-supervised learning): [notes](chapters/relation_extraction.ipynb), [slides](https://www.dropbox.com/s/xqq1nwgw1i0gowr/relation-extraction.pdf?dl=0), [interactive-slides](chapters/relation_extraction_slides.ipynb)\n", "* Representation and Deep Learning:\n", " * Overview and Multi-layer Perceptrons: [slides](chapters/dl.ipynb)\n", " * Word Representations: [slides](chapters/dl-representations.ipynb)\n", " * Textual Entailment (RNNs): [slides](chapters/dl_applications.ipynb)\n", " * Recurrent Neural Networks: [slides](chapters/rnn_slides.ipynb)\n", "\n", "#### Methods\n", "We have a few dedicated method chapters:\n", "\n", "* Structured Prediction: [notes](chapters/structured_prediction.ipynb)\n", "* Maximum Likelihood Estimation: [notes](chapters/mle.ipynb)\n", "* EM-Algorithm: [notes](chapters/em.ipynb)\n", "\n", "### Interaction\n", "\n", "The best way to learn language processing with computers is to process language with computers. For this reason, this book features interactive code blocks that we use to show NLP in practice, and that you can use to test and investigate methods and language. We use the [Python language](https://www.python.org/) throughout this book because it offers a large number of relevant libraries and is easy to learn.\n",
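"\n", "As a first taste of what these blocks look like, here is a self-contained sketch (the toy corpus is invented for illustration) that computes the kind of corpus statistics on which statistical NLP methods are built:\n", "\n", "```python\n", "# Estimate unigram probabilities from the statistics of a tiny toy corpus\n", "# via maximum likelihood: count words, then normalise by the total count.\n", "from collections import Counter\n", "\n", "corpus = \"the cat sat on the mat . the dog sat too .\".split()\n", "counts = Counter(corpus)\n", "total = sum(counts.values())\n", "\n", "for word, count in counts.most_common(3):\n", "    print(word, count / total)\n", "```\n",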
"\n", "### Installation\n", "To install the book locally and use it interactively, follow the installation instructions on [GitHub](https://github.com/uclmr/stat-nlp-book).\n", "\n", "\n", "### COMPGI19 Tutorials\n", "\n", "Labs:\n", "* [Lab 1](labs/lab_1.ipynb)\n", "* [Lab 2](labs/lab_2.ipynb)\n", "* [Lab 3](labs/lab_3.ipynb)\n", "* [Lab 4](labs/lab_4.ipynb)\n", "* [Lab 5](labs/lab_5.ipynb)\n", "\n", "Setup tutorials:\n", "* [Azure tutorial](tutorials/azure_tutorial.ipynb)\n" ] } ], "metadata": { "hide_input": false, "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.1" } }, "nbformat": 4, "nbformat_minor": 1 }