{ "cells": [ { "cell_type": "markdown", "metadata": { "pycharm": { "name": "#%% md\n" } }, "source": [ "# Natural Language Processing (NDAK18000U)\n", "Materials from this interactive book are used throughout the Natural Language Processing course at the Department of Computer Science, University of Copenhagen.\n", "The official course description can be found [here](https://kurser.ku.dk/course/ndak18000u/2022-2023).\n", "Materials covered each week are listed below. The course schedule and materials are tentative and subject to minor changes. Most reading material is from [*Speech and Language Processing* by Jurafsky & Martin](https://web.stanford.edu/~jurafsky/slp3).\n", "\n", "| Week | Reading (before lecture) | Lecture (Tuesday) | Lab (Friday & Monday) | Lab notebook |\n", "| --- | :--- | :--- | :--- | :--- |\n", "| 36 | | 6. Sep. 2022: | 9. & 12. Sep. 2022: | lab 1 |\n", "| 37 | | 13. Sep. 2022: | 16. & 19. Sep. 2022: | lab 2 |\n", "| 38 | Chapter 9 up to end of 9.2 | 20. Sep. 2022: | 23. & 26. Sep. 2022: | lab 3 |\n", "| 39 | | 27. Sep. 2022: | 30. Sep. & 3. Oct. 2022: | lab 4 |\n", "| 40 | | 4. Oct. 2022: | 7. & 10. Oct. 2022: | lab 5 |\n", "| 41 | | 11. Oct. 2022: | 14. & 24. Oct. 2022: | lab 6 |\n", "| 43 | | 25. Oct. 2022: | 28. & 31. Oct. 2022: Project help. | |\n", "| 44 | Chapter 14, *except* 14.5 | 1. Nov. 2022: | 4. Nov. 2022: Project help. | |\n" ] }, { "cell_type": "markdown", "metadata": { "pycharm": { "name": "#%% md\n" } }, "source": [ "### Introduction\n", "NLP is a field that lies at the intersection of Computer Science, Artificial Intelligence (AI) and Linguistics, with the goal of enabling computers to solve tasks that require natural language _understanding_ and/or _generation_. Such tasks are omnipresent in our day-to-day lives: think of [Machine Translation](https://www.bing.com/translator/), Automatic [Question Answering](https://www.youtube.com/watch?v=WFR3lOm_xhE) or even basic [Search](https://www.google.co.uk). All these tasks require the computer to process language in one way or another. But even if you ignore these practical applications, many people consider language to be at the heart of human intelligence, and this makes NLP (and its more linguistically motivated cousin, [Computational Linguistics](http://en.wikipedia.org/wiki/Computational_linguistics)) important for its role in AI alone.\n", "\n", "NLP is a vast field with beginnings dating back to at least the 1960s, and it is difficult to give a full account of every aspect of the field. Hence, this book focuses on a sub-field of NLP termed Statistical NLP (SNLP). In SNLP computers aren't directly programmed to process language; instead, they _learn_ how language should be processed based on the _statistics_ of a corpus of natural language. For example, a statistical machine translation system's behaviour is affected by the statistics of a _parallel_ corpus where each document in one language is paired with its translation in another. This approach has dominated NLP research for over two decades, and has seen widespread use in industry too. Notice that while Statistics and Machine Learning are, in general, quite different fields, for the purposes of this book we will mostly identify Statistical NLP with Machine Learning-based NLP.
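\n", "\n", "To make \"learning from corpus statistics\" concrete, here is a minimal sketch (a toy example written for this introduction, not code from the book's chapters) that estimates bigram probabilities by simple counting; the language models chapter develops this idea properly:\n", "\n", "```python\n", "from collections import Counter\n", "\n", "# Toy corpus; a real system would learn from millions of sentences.\n", "corpus = [\"the cat sat on the mat\", \"the dog sat on the rug\"]\n", "\n", "unigrams, bigrams = Counter(), Counter()\n", "for sentence in corpus:\n", "    words = sentence.split()\n", "    unigrams.update(words)\n", "    bigrams.update(zip(words, words[1:]))\n", "\n", "# Maximum likelihood estimate: P(cat | the) = count(the, cat) / count(the)\n", "print(bigrams[(\"the\", \"cat\")] / unigrams[\"the\"])  # 0.25\n", "```\n", "\n", "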
More specifically, within this Machine Learning-based view we will focus mostly on Deep Learning (i.e., Neural Network) methods, as they generally represent the state of the art in NLP today.\n", "\n", "### Structure of this Book\n", "Note that this book was originally developed for a [15 ECTS course at UCL](https://github.com/uclmr/stat-nlp-book), so we will not cover all of its topics, and will cover some in less depth. For completeness and context, you can still access all book materials below.\n", "\n", "We think that to understand and apply SNLP in practice, one needs knowledge of the following:\n", "\n", " * Tasks (e.g. Machine Translation, Syntactic Parsing)\n", " * Methods & Frameworks (e.g. Discriminative Training, Linear Chain models, Representation Learning)\n", " * Implementations (e.g. NLP data structures, efficient dynamic programming)\n", " \n", "The book is somewhat structured around the task dimension. That is, we will explore different methods, frameworks and their implementations, usually in the context of specific NLP applications.\n", "\n", "On a higher level, the book is divided into *themes* that roughly correspond to learning paradigms within SNLP, and which follow a somewhat chronological order: we will start with generative learning, then discuss discriminative learning, then cover forms of weaker supervision, and conclude with representation and deep learning. As an overarching theme we will use *structured prediction*, a formulation of machine learning that accounts for the fact that machine learning outputs are often not just classes, but structured objects such as sequences, trees or general graphs. This is a fitting approach, since NLP tasks often require predicting such structures.\n", "\n", "The best way to learn language processing with computers is\n", "to process language with computers. For this reason this book features interactive\n", "code blocks that we use to show NLP in practice, and that you can use\n", "to test and investigate methods and language.
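\n", "For example, consider a small snippet like the one below; it is an illustrative sketch only (a deliberately naive whitespace tokeniser, invented for this page), and the tokenisation chapter shows how to do this properly:\n", "\n", "```python\n", "# Naive whitespace tokenisation; the period stays glued to the last word,\n", "# which is exactly the kind of problem real tokenisers must solve.\n", "sentence = \"Computers can learn to process language.\"\n", "tokens = sentence.lower().split()\n", "print(tokens)  # ['computers', 'can', 'learn', 'to', 'process', 'language.']\n", "```\n", "\n", "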
We use [Jupyter](https://jupyter.org/) notebooks written in [Python](https://www.python.org/) throughout the book.\n", "\n", "\n", "### Table of Contents\n", "* Introduction to NLP: [slides1](chapters/introduction.ipynb), [slides2](chapters/intro_short.ipynb)\n", "* Methods:\n", " * Structured Prediction: [notes](chapters/structured_prediction.ipynb), [slides](chapters/structured_prediction_slides.ipynb), [exercises](exercises/structured_prediction.ipynb)\n", " * Maximum Likelihood Estimation: [notes](chapters/mle.ipynb), [slides](chapters/mle_slides.ipynb)\n", " * Expectation–maximization (EM) Algorithm: [notes](chapters/em.ipynb)\n", "* Tokenisation and Sentence Splitting: [notes](chapters/tokenization.ipynb), [slides](chapters/tokenization_slides.ipynb), [exercises](exercises/tokenization.ipynb)\n", "* Generative Learning:\n", " * Language Models (MLE, smoothing): [notes](chapters/language_models.ipynb), [slides](chapters/language_models_slides.ipynb), [exercises](exercises/language_models.ipynb)\n", " * Machine Translation (beam-search, encoder-decoder models): [notes](chapters/word_mt.ipynb), [slides1](chapters/word_mt_slides.ipynb), [slides2](chapters/neural_mt_slides.ipynb), [exercises](exercises/mt.ipynb), [slides3](chapters/nmt_slides_active.ipynb)\n", " * Constituent Parsing (PCFG, dynamic programming): [notes](chapters/parsing.ipynb), [slides](chapters/parsing_slides.ipynb), [exercises](exercises/parsing.ipynb)\n", " * Dependency Parsing (transition-based parsing): [notes](chapters/transition-based_dependency_parsing.ipynb), [slides1](chapters/transition_slides.ipynb), [slides2](chapters/dependency_parsing_slides.ipynb), [slides3](chapters/dependency_parsing_slides_active.ipynb)\n", "* Discriminative Learning:\n", " * Text Classification (logistic regression): [notes](chapters/doc_classify.ipynb), [slides1](chapters/doc_classify_slides.ipynb), [slides2](chapters/doc_classify_slides_short.ipynb)\n", " * Sequence Labelling (linear chain models): [notes](chapters/sequence_labeling.ipynb), [slides](chapters/sequence_labeling_slides.ipynb)\n", " * Sequence Labelling (CRF): [slides](chapters/sequence_labeling_crf_slides.ipynb)\n", "* Weak Supervision:\n", " * Relation Extraction (distant supervision, semi-supervised learning): [notes](chapters/relation_extraction.ipynb), [slides1](https://www.dropbox.com/s/xqq1nwgw1i0gowr/relation-extraction.pdf?dl=0), [interactive-slides](chapters/relation_extraction_slides.ipynb), [slides2](chapters/information_extraction_slides.ipynb)\n", "* Representation and Deep Learning:\n", " * Overview and Multi-layer Perceptrons: [slides](chapters/dl.ipynb)\n", " * Word Representations: [slides](chapters/dl-representations_simple.ipynb)\n", " * Contextualised Word Representations: [slides](chapters/dl-representations_contextual.ipynb)\n", " * Recurrent Neural Networks: [slides1](chapters/rnn_slides.ipynb), [slides2](chapters/rnn_slides_ucph.ipynb)\n", " * Attention: [slides1](chapters/attention_slides.ipynb), [slides2](chapters/attention_slides2.ipynb)\n", " * Transfer Learning (multitask, cross-lingual): [slides1](chapters/transfer_learning_slides.ipynb), [slides2](chapters/xling_transfer_learning_slides.ipynb)\n", " * Textual Entailment: [slides](chapters/dl_applications.ipynb)\n", " * Question Answering: [slides](chapters/question_answering_slides.ipynb)\n", " * Interpretability: [slides](chapters/interpretability_slides.ipynb)\n", "\n", "### Installation\n", "To install the book locally and use it interactively, follow the installation instructions on 
[GitHub](https://github.com/copenlu/stat-nlp-book).\n", "\n", "Setup tutorials:\n", "* [README](README.md)\n", "* [Azure tutorial](tutorials/azure_tutorial.ipynb)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "pycharm": { "name": "#%%\n" } }, "outputs": [], "source": [] } ], "metadata": { "hide_input": false, "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.2" } }, "nbformat": 4, "nbformat_minor": 4 }