{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# 2 Background" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Contents\n", "* 2.1 Human-ComputerInteraction\n", "* 2.2 Dialogue Strategy Development\n", " - 2.2.1 Conventional Development Life cycle\n", " - 2.2.2 Evaluation and Strategy Quality Control\n", " - 2.2.3 Strategy Implementation \n", " - 2.2.4 Challenges for Strategy Development\n", "* 2.3 Literature review : Learning Dialogue Strategies \n", " - 2.3.1 Machine Learning Paradigms \n", " - 2.3.2 Supervised Learning for Dialogue Strategies \n", " - 2.3.3 Dialogue as Decision Making under Uncertainty\n", " - 2.3.4 Reinforcement Learning for Dialogue Strategies \n", "* 2.4 Summary" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# 2.1 Human-Computer Interaction" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### dialogue strategy & dialogue designer" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "* For computers, holding a conversation is difficult. Engaging in a conversation requires more than just technical language proficiency.\n", "* Humans acquire these communicative skills over time, but for a dialogue system, they need to be developed by a dialogue designer.\n", "* This usually is an expert who defines a dialogue strategy , which “tells” the system what to do in specific situations." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### HCI" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "* Human-Computer Interaction (HCI) is the study of interaction between people (users) and computers (such as dialogue systems). \n", "* Human-machine dialogue dif- fers from human-human dialogue in various ways. 
\n", "* The most prominent features are the lack of deep language understanding and the lack of pragmatic competence (communicative skills) of the system." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### error handling, uncertainty handling\n", "\n", "* A substantial amount of recent work targets the problem of limited language understanding capabilities with so-called \n", " - “error handling”, e.g. (Bohus, 2007; Frampton, 2008; Skantze, 2007a), or \n", " - “uncertainty handling” mechanisms, e.g. (Thomson and Young, 2010; Williams, 2006; Williams and Young, 2007a)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This book addresses the problem of pragmatic competence: how to improve the communicative skills of a system by providing effective mechanisms to develop better dialogue strategies." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# 2.2 Dialogue Strategy Development\n", "* 2.2.1 Conventional Development Life cycle\n", "* 2.2.2 Evaluation and Strategy Quality Control\n", "* 2.2.3 Strategy Implementation " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Academic systems often aim to emulate human behaviour in order to generate ‘natural’ behaviour, whereas commercial systems are required to be robust interfaces in order to solve a specific task." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "* In the following we first describe \n", " - the general development cycle \n", " - for dialogue strategies \n", " - (which is commonly used in industry as well as in research). \n", "* We then focus on two central aspects of this cycle, \n", " - where techniques in research and industry differ widely: \n", " - strategy evaluation/quality control and \n", " - strategy implementation/formalisation. \n", "* We later argue for a computational learning-based approach, \n", " - where the standard development cycle is \n", " - replaced by data-driven techniques." 
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 2.2.1 Conventional Development Life Cycle" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 2.2.2 Evaluation and Strategy Quality Control\n", "* 2.2.2.1 Quality Control in Industry\n", "* 2.2.2.2 Evaluation Practices in Academia\n", "* 2.2.2.3 The PARADISE Evaluation Framework" ] }, { "cell_type": "markdown", "metadata": { "collapsed": true }, "source": [ "### 2.2.2.1 Quality Control in Industry" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In industry the initial design is commonly motivated by guidelines and ‘best practices’ which should help to ensure the system’s usability." ] }, { "cell_type": "markdown", "metadata": { "collapsed": true }, "source": [ "### 2.2.2.2 Evaluation Practices in Academia" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Dialogue strategies developed in academia are usually extensively tested against some baseline in order to make scientific claims, e.g. by showing significant differences in system behaviour." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 2.2.2.3 The PARADISE Evaluation Framework" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "* PARADISE is a widely used framework for automatic dialogue evaluation introduced by (Walker et al, 1997, 1998b, 2000).\n", "* The main idea behind PARADISE is to estimate subjective user ratings (obtained from questionnaires) from objective dialogue performance measures (such as dialogue length) which are available at system run-time.
\n", "* (Walker et al, 1997) propose to model “User Satisfaction” (US) using multiple linear regression (see Equation 2.1).\n", "* User Satisfaction is calculated as the arithmetic mean of nine user judgements related to different quality aspects (see Table 2.1), which are rated on Likert scales . \n", "* A likert scale is a discrete rating scale where the subject indicates his/her level of agreement with a statement (e.g. from “strongly agree” to “strongly disagree”)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "* κ : A param- eter related to task success (either the coefficient κ calculated from an external an- notation of correctly identified concepts, or a direct user judgment on perceived task success)\n", "* $C_i$ : additional interaction parameters measuring dialogue efficiency and quality " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 2.2.2.4 Strategy Re-Implementation" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "* After testing and evaluation, an error analysis is performed and the results are then used to re-design the strategy. \n", "* However, there is no framework which describes how evaluation results are best transferred into code." 
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 2.2.3 Strategy Implementation\n", "* 2.2.3.1 Implementation Practices in Industry\n", "* 2.2.3.2 Implementation Practices in Academia" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 2.2.3.1 Implementation Practices in Industry" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "* Most commercial systems rely on Finite State Automata (FSA) controlled by menus, forms, or frames.\n", "* However, this development methodology is limited by the fact that every change in the conversation must be explicitly represented by a transition between two nodes in the network.\n", "* Dialogue strategies designed as FSA are based on hand-crafted rules which usually lack context-sensitive behaviour, are not very flexible, cannot handle unseen situations, and are not reusable from task to task.\n", "* Furthermore, FSA easily become intractable for more complex tasks and cannot model complex reasoning." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 2.2.3.2 Implementation Practices in Academia" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "* Most research systems to date have been based either on planning with logical inference, e.g. (Blaylock and Allen, 2005; Steedman and Petrick, 2007), or they are implemented in the “Information State Update” (ISU) approach using frames or tree sub-structures as control mechanisms, e.g. (Larsson and Traum, 2000; Lemon et al, 2001).\n", "* More recently, statistical systems using machine learning approaches have become more prevalent, for example (Griol et al, 2008; Henderson et al, 2008; Thomson and Young, 2010; Young et al, 2007, 2009); see (Frampton and Lemon, 2009) for a survey."
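] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The FSA-style strategies described above can be made concrete with a minimal sketch (states, prompts, and the linear flow are hypothetical examples, not from the book). Note that every possible dialogue move must appear as an explicit transition:\n", "\n", "```python\n", "# A hand-crafted finite-state dialogue strategy as a transition network.\n", "FSA = {\n", "    'ask_city': {'prompt': 'Where do you want to travel to?', 'next': 'ask_date'},\n", "    'ask_date': {'prompt': 'On which date?', 'next': 'confirm'},\n", "    'confirm':  {'prompt': 'Shall I book the ticket?', 'next': 'done'},\n", "}\n", "\n", "def run(state='ask_city'):\n", "    prompts = []\n", "    while state != 'done':  # anything outside the network cannot be handled\n", "        prompts.append(FSA[state]['prompt'])\n", "        state = FSA[state]['next']\n", "    return prompts\n", "\n", "prompts = run()  # three fixed prompts, always in the same order\n", "```\n", "\n", "Adding, say, a clarification sub-dialogue would require new nodes and transitions throughout the network, which is exactly the scalability problem noted above."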
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "* Planning approaches are mostly used for complex tasks, like collaborative problem solving, intelligent assistants, and tutorial dialogues.\n", "* ISU-based systems are used for a variety of applications with different complexity (see Table 2.2 for references).\n", "* Both approaches have a higher expressive power than simple FSA, and can lead to more sophisticated (e.g. context-dependent) strategies.\n", "* On the other hand, these systems are harder to maintain and debug." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 2.2.4 Challenges for Strategy Development" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "* How can this chasm between industrial and academic approaches be bridged?\n", "* Is there a third option which can meet the challenges for both\n", " - cost-effective industrial speech interfaces and\n", " - the advanced dialogue agents of academic research?\n", "* What requirements does it have to meet?" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### organic interface\n", "Zue calls the dialogue system of the future an “organic interface” that can learn, grow, re-configure, and repair itself.\n", "* robust towards unseen events\n", " - generalise to unseen events\n", "* context sensitive\n", " - dynamically adapt to every possible system context\n", "* adaptive to the application environment\n", " - automatically adapt to different situations" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# 2.3 Literature Review: Learning Dialogue Strategies\n", "* 2.3.1 Machine Learning Paradigms\n", "* 2.3.2 Supervised Learning for Dialogue Strategies\n", "* 2.3.3 Dialogue as Decision Making under Uncertainty\n", "* 2.3.4 Reinforcement Learning for Dialogue Strategies" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 2.3.1 Machine Learning Paradigms" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In general, there are three major learning 
paradigms, each corresponding to a particular abstract learning task:\n", "\n", "* Supervised Learning (SL)\n", "* Unsupervised Learning (UL)\n", "* Reinforcement Learning (RL)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To date, different Machine Learning approaches have been applied to automatic dialogue management:\n", "\n", "* Supervised approaches, which learn a strategy which mimics a given data set;\n", "* Approaches based on decision theory, which are supervised approaches in the sense that they optimise action choice with respect to some local costs as observed in the data. In contrast to SL they explicitly model uncertainty in the observation;\n", "* Reinforcement Learning-based approaches, which are related to decision-theoretic approaches, but optimise action choice globally as a sequence of decisions." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 2.3.2 Supervised Learning for Dialogue Strategies" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "* example-based learning\n", "* human-assisted design" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 2.3.3 Dialogue as Decision Making under Uncertainty" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Action selection is guided by the following optimisation:" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "$$a^* = \\arg\\max_{a} EU(a|o) = \\arg\\max_{a} \\sum_{s} P(s|o) \\cdot utility(a,s)$$" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "* In this framework the agent selects the action A = a that maximises expected utility, EU(a|o), where o are observed events.\n", "* utility(a,s) expresses the utility of taking action a when the state of the world is s.\n", "* P(s|o) is the probability that the world is in state s given the observed events o; this is where uncertainty in the observation is modelled explicitly.\n", "* The utility function is trained via “local” user ratings.
\n", "* Users rate the appropriateness of an action in a certain state via a GUI while they are interacting with the system" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 2.3.4 Reinforcement Learning for Dialogue Strategies" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In contrast to the above approaches, Reinforcement Learning treats dialogue strategy learning as a sequential optimisation problem, leading to strategies which are globally optimal \n", "* Markov Decision Processes (MDPs)\n", "* Partially Observable Markov Decision Pro- cess (POMDP)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# 2.4 Summary" ] }, { "cell_type": "markdown", "metadata": { "collapsed": true }, "source": [ "# 참고자료\n", "* [1] Reinforcement Learning for Adaptive Dialogue Systems: A Data-driven Methodology for Dialogue Management and Natural Language Generation - https://www.amazon.com/Reinforcement-Learning-Adaptive-Dialogue-Systems/dp/3642439845" ] } ], "metadata": { "kernelspec": { "display_name": "Python 2", "language": "python", "name": "python2" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 2 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython2", "version": "2.7.6" } }, "nbformat": 4, "nbformat_minor": 0 }