Coverage for nltk.parse.pchart : 65%
# Natural Language Toolkit: Probabilistic Chart Parsers
#
# Copyright (C) 2001-2012 NLTK Project
# Author: Edward Loper <edloper@gradient.cis.upenn.edu>
#         Steven Bird <sb@csse.unimelb.edu.au>
# URL: <http://www.nltk.org/>
# For license information, see LICENSE.TXT
"""
Classes and interfaces for associating probabilities with tree
structures that represent the internal organization of a text.  The
probabilistic parser module defines ``BottomUpProbabilisticChartParser``.

``BottomUpProbabilisticChartParser`` is an abstract class that implements
a bottom-up chart parser for ``PCFG`` grammars.  It maintains a queue of
edges, and adds them to the chart one at a time.  The ordering of this
queue is based on the probabilities associated with the edges, allowing
the parser to expand more likely edges before less likely ones.  Each
subclass implements a different queue ordering, producing different
search strategies.  Currently the following subclasses are defined:

  - ``InsideChartParser`` searches edges in decreasing order of
    their trees' inside probabilities.
  - ``RandomChartParser`` searches edges in random order.
  - ``LongestChartParser`` searches edges in decreasing order of their
    location's length.

The ``BottomUpProbabilisticChartParser`` constructor has an optional
argument ``beam_size``.  If non-zero, this controls the size of the beam
(aka the edge queue).  This option is most useful with ``InsideChartParser``.
"""
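The queue-plus-beam behaviour described above can be sketched independently of NLTK. The names `Edge` and `best_first_pop` below are illustrative stand-ins, not the library's API:

```python
# A toy stand-in for a chart edge: just an inside probability.
class Edge:
    def __init__(self, prob):
        self.prob = prob

def best_first_pop(queue, beam_size=0):
    """Sort so the most probable edge is last, optionally prune to
    beam_size, and pop the best edge -- one step of the parser's loop."""
    queue.sort(key=lambda e: e.prob)          # least likely first
    if beam_size and len(queue) > beam_size:
        del queue[:len(queue) - beam_size]    # discard the least likely
    return queue.pop()                        # most likely edge

queue = [Edge(0.1), Edge(0.5), Edge(0.25)]
best = best_first_pop(queue, beam_size=2)
# best.prob == 0.5; the 0.1 edge was pruned by the beam
```

With `beam_size=0` (the default) nothing is discarded, so the search is exhaustive; a non-zero beam trades completeness for speed, which is why it pairs naturally with the inside-probability ordering.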
import random
from functools import reduce

from nltk.tree import Tree, ProbabilisticTree
from nltk.grammar import Nonterminal, WeightedGrammar
from nltk.parse.api import ParserI
from nltk.parse.chart import Chart, LeafEdge, TreeEdge, AbstractChartRule

##//////////////////////////////////////////////////////
##  Bottom-Up PCFG Chart Parser
##//////////////////////////////////////////////////////
# [XX] This might not be implemented quite right -- it would be better
# to associate probabilities with child pointer lists.
# Probabilistic edges
class ProbabilisticLeafEdge(LeafEdge):
    def prob(self): return 1.0

class ProbabilisticTreeEdge(TreeEdge):
    def __init__(self, prob, *args, **kwargs):
        TreeEdge.__init__(self, *args, **kwargs)
        self._prob = prob

    def prob(self): return self._prob

    @staticmethod
    def from_production(production, index, p):
        return ProbabilisticTreeEdge(p, (index, index), production.lhs(),
                                     production.rhs(), 0)
# Rules using probabilistic edges
class ProbabilisticBottomUpInitRule(AbstractChartRule):
    NUM_EDGES = 0
    def apply_iter(self, chart, grammar):
        for index in range(chart.num_leaves()):
            new_edge = ProbabilisticLeafEdge(chart.leaf(index), index)
            if chart.insert(new_edge, ()):
                yield new_edge

class ProbabilisticBottomUpPredictRule(AbstractChartRule):
    NUM_EDGES = 1
    def apply_iter(self, chart, grammar, edge):
        if edge.is_incomplete(): return
        for prod in grammar.productions():
            if edge.lhs() == prod.rhs()[0]:
                new_edge = ProbabilisticTreeEdge.from_production(
                    prod, edge.start(), prod.prob())
                if chart.insert(new_edge, ()):
                    yield new_edge

class ProbabilisticFundamentalRule(AbstractChartRule):
    NUM_EDGES = 2
    def apply_iter(self, chart, grammar, left_edge, right_edge):
        # Make sure the rule is applicable.
        if not (left_edge.end() == right_edge.start() and
                next(left_edge) == right_edge.lhs() and
                left_edge.is_incomplete() and right_edge.is_complete()):
            return

        # Construct the new edge.
        p = left_edge.prob() * right_edge.prob()
        new_edge = ProbabilisticTreeEdge(
            p, span=(left_edge.start(), right_edge.end()),
            lhs=left_edge.lhs(), rhs=left_edge.rhs(),
            dot=left_edge.dot()+1)

        # Add it to the chart, with appropriate child pointers.
        changed_chart = False
        for cpl1 in chart.child_pointer_lists(left_edge):
            if chart.insert(new_edge, cpl1 + (right_edge,)):
                changed_chart = True

        # If we changed the chart, then generate the edge.
        if changed_chart: yield new_edge

class SingleEdgeProbabilisticFundamentalRule(AbstractChartRule):
    NUM_EDGES = 1
    _fundamental_rule = ProbabilisticFundamentalRule()

    def apply_iter(self, chart, grammar, edge1):
        fr = self._fundamental_rule
        if edge1.is_incomplete():
            # edge1 = left_edge; edge2 = right_edge
            for edge2 in chart.select(start=edge1.end(), is_complete=True,
                                      lhs=next(edge1)):
                for new_edge in fr.apply_iter(chart, grammar, edge1, edge2):
                    yield new_edge
        else:
            # edge2 = left_edge; edge1 = right_edge
            for edge2 in chart.select(end=edge1.start(), is_incomplete=True,
                                      next=edge1.lhs()):
                for new_edge in fr.apply_iter(chart, grammar, edge2, edge1):
                    yield new_edge

    def __str__(self): return 'Fundamental Rule'
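The probability bookkeeping of the fundamental rule can be illustrated in miniature. This sketch uses plain named tuples rather than NLTK's edge classes; the `PEdge` type and its field names are assumptions for illustration only:

```python
from collections import namedtuple

# (start, end) span, lhs symbol, remaining rhs symbols, inside probability
PEdge = namedtuple('PEdge', 'start end lhs needed prob')

def fundamental_rule(left, right):
    """Combine an incomplete left edge with a complete right edge.
    Returns the new edge, or None if the rule does not apply."""
    applicable = (left.end == right.start and
                  left.needed and left.needed[0] == right.lhs and
                  not right.needed)
    if not applicable:
        return None
    # The new edge's probability is the product of its parts' probabilities.
    return PEdge(left.start, right.end, left.lhs,
                 left.needed[1:], left.prob * right.prob)

np_edge = PEdge(0, 1, 'NP', (), 0.4)            # complete NP over token 0
s_edge  = PEdge(0, 0, 'S', ('NP', 'VP'), 0.9)   # S -> . NP VP
combined = fundamental_rule(s_edge, np_edge)
# combined spans (0, 1), still needs ('VP',), with prob 0.9 * 0.4
```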
""" An abstract bottom-up parser for ``PCFG`` grammars that uses a ``Chart`` to record partial results. ``BottomUpProbabilisticChartParser`` maintains a queue of edges that can be added to the chart. This queue is initialized with edges for each token in the text that is being parsed. ``BottomUpProbabilisticChartParser`` inserts these edges into the chart one at a time, starting with the most likely edges, and proceeding to less likely edges. For each edge that is added to the chart, it may become possible to insert additional edges into the chart; these are added to the queue. This process continues until enough complete parses have been generated, or until the queue is empty.
The sorting order for the queue is not specified by ``BottomUpProbabilisticChartParser``. Different sorting orders will result in different search strategies. The sorting order for the queue is defined by the method ``sort_queue``; subclasses are required to provide a definition for this method.
:type _grammar: PCFG :ivar _grammar: The grammar used to parse sentences. :type _trace: int :ivar _trace: The level of tracing output that should be generated when parsing a text. """ """ Create a new ``BottomUpProbabilisticChartParser``, that uses ``grammar`` to parse texts.
:type grammar: PCFG :param grammar: The grammar used to parse texts. :type beam_size: int :param beam_size: The maximum length for the parser's edge queue. :type trace: int :param trace: The level of tracing that should be used when parsing a text. ``0`` will generate no tracing output; and higher numbers will produce more verbose tracing output. """ raise ValueError("The grammar must be probabilistic WeightedGrammar")
    def grammar(self):
        return self._grammar
""" Set the level of tracing output that should be generated when parsing a text.
:type trace: int :param trace: The trace level. A trace level of ``0`` will generate no tracing output; and higher trace levels will produce more verbose tracing output. :rtype: None """ self._trace = trace
    # TODO: change this to conform more with the standard ChartParser
    def nbest_parse(self, tokens, n=None):
        self._grammar.check_coverage(tokens)
        chart = Chart(list(tokens))
        grammar = self._grammar

        # Chart parser rules.
        bu_init = ProbabilisticBottomUpInitRule()
        bu = ProbabilisticBottomUpPredictRule()
        fr = SingleEdgeProbabilisticFundamentalRule()

        # Our queue!
        queue = []

        # Initialize the chart.
        for edge in bu_init.apply_iter(chart, grammar):
            if self._trace > 1:
                print('  %-50s [%s]' % (chart.pp_edge(edge, width=2),
                                        edge.prob()))
            queue.append(edge)

        while len(queue) > 0:
            # Re-sort the queue.
            self.sort_queue(queue, chart)

            # Prune the queue to the correct size if a beam was defined
            if self.beam_size:
                self._prune(queue, chart)

            # Get the best edge.
            edge = queue.pop()
            if self._trace > 0:
                print('  %-50s [%s]' % (chart.pp_edge(edge, width=2),
                                        edge.prob()))

            # Apply BU & FR to it.
            queue.extend(bu.apply(chart, grammar, edge))
            queue.extend(fr.apply(chart, grammar, edge))

        # Get a list of complete parses.
        parses = chart.parses(grammar.start(), ProbabilisticTree)

        # Assign probabilities to the trees.
        prod_probs = {}
        for prod in grammar.productions():
            prod_probs[prod.lhs(), prod.rhs()] = prod.prob()
        for parse in parses:
            self._setprob(parse, prod_probs)

        # Sort by probability
        parses.sort(reverse=True, key=lambda tree: tree.prob())

        return parses[:n]

    def _setprob(self, tree, prod_probs):
        if tree.prob() is not None: return

        # Get the prob of the CFG production.
        lhs = Nonterminal(tree.node)
        rhs = []
        for child in tree:
            if isinstance(child, Tree):
                rhs.append(Nonterminal(child.node))
            else:
                rhs.append(child)
        prob = prod_probs[lhs, tuple(rhs)]

        # Get the probs of children.
        for child in tree:
            if isinstance(child, Tree):
                self._setprob(child, prod_probs)
                prob *= child.prob()

        tree.set_prob(prob)
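The probability-assignment step multiplies each node's production probability by the probabilities of its children. A minimal recursive version over nested tuples; the tree encoding (`(label, children...)` tuples, strings for tokens) is an assumption for illustration, not NLTK's ``Tree`` class:

```python
# A tree is (label, child, child, ...); a leaf token is a plain string.
# prod_probs maps (lhs, rhs-tuple) -> probability of that production.
def tree_prob(tree, prod_probs):
    if isinstance(tree, str):       # tokens contribute probability 1
        return 1.0
    label, children = tree[0], tree[1:]
    rhs = tuple(c if isinstance(c, str) else c[0] for c in children)
    # Get the prob of the CFG production, then fold in the children.
    p = prod_probs[label, rhs]
    for child in children:
        p *= tree_prob(child, prod_probs)
    return p

prod_probs = {('S', ('NP', 'VP')): 1.0,
              ('NP', ('John',)): 0.5,
              ('VP', ('runs',)): 0.25}
tree = ('S', ('NP', 'John'), ('VP', 'runs'))
# tree_prob(tree, prod_probs) == 1.0 * 0.5 * 0.25 == 0.125
```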
""" Sort the given queue of ``Edge`` objects, placing the edge that should be tried first at the beginning of the queue. This method will be called after each ``Edge`` is added to the queue.
:param queue: The queue of ``Edge`` objects to sort. Each edge in this queue is an edge that could be added to the chart by the fundamental rule; but that has not yet been added. :type queue: list(Edge) :param chart: The chart being used to parse the text. This chart can be used to provide extra information for sorting the queue. :type chart: Chart :rtype: None """ raise NotImplementedError()
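The ``sort_queue`` contract (best edge last, popped first) can be mimicked with simple key functions. This is a sketch of the three orderings, not NLTK's class hierarchy; the dict-based edges are assumptions for illustration:

```python
import random

# Each strategy sorts so that the edge to try next ends up last in the queue.
def inside_sort(queue):
    queue.sort(key=lambda e: e['prob'])              # most probable last

def longest_sort(queue):
    queue.sort(key=lambda e: e['end'] - e['start'])  # longest span last

def random_sort(queue):
    random.shuffle(queue)                            # arbitrary order

edges = [{'start': 0, 'end': 3, 'prob': 0.2},
         {'start': 1, 'end': 2, 'prob': 0.7}]
inside_sort(edges)
# now edges[-1]['prob'] == 0.7
longest_sort(edges)
# now edges[-1] spans (0, 3)
```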
""" Discard items in the queue if the queue is longer than the beam.""" for edge in queue[:split]: print(' %-50s [DISCARDED]' % chart.pp_edge(edge,2))
""" A bottom-up parser for ``PCFG`` grammars that tries edges in descending order of the inside probabilities of their trees. The "inside probability" of a tree is simply the probability of the entire tree, ignoring its context. In particular, the inside probability of a tree generated by production *p* with children *c[1], c[2], ..., c[n]* is *P(p)P(c[1])P(c[2])...P(c[n])*; and the inside probability of a token is 1 if it is present in the text, and 0 if it is absent.
This sorting order results in a type of lowest-cost-first search strategy. """ # Inherit constructor. """ Sort the given queue of edges, in descending order of the inside probabilities of the edges' trees.
:param queue: The queue of ``Edge`` objects to sort. Each edge in this queue is an edge that could be added to the chart by the fundamental rule; but that has not yet been added. :type queue: list(Edge) :param chart: The chart being used to parse the text. This chart can be used to provide extra information for sorting the queue. :type chart: Chart :rtype: None """
# Eventually, this will become some sort of inside-outside parser:
# class InsideOutsideParser(BottomUpProbabilisticChartParser):
#     def __init__(self, grammar, trace=0):
#         # Inherit docs.
#         BottomUpProbabilisticChartParser.__init__(self, grammar, trace)
#
#         # Find the best path from S to each nonterminal
#         bestp = {}
#         for production in grammar.productions(): bestp[production.lhs()]=0
#         bestp[grammar.start()] = 1.0
#
#         for i in range(len(grammar.productions())):
#             for production in grammar.productions():
#                 lhs = production.lhs()
#                 for elt in production.rhs():
#                     bestp[elt] = max(bestp[lhs]*production.prob(),
#                                      bestp.get(elt,0))
#
#         self._bestp = bestp
#         for (k,v) in self._bestp.items(): print k,v
#
#     def _cmp(self, e1, e2):
#         return cmp(e1.structure()[PROB]*self._bestp[e1.lhs()],
#                    e2.structure()[PROB]*self._bestp[e2.lhs()])
#
#     def sort_queue(self, queue, chart):
#         queue.sort(self._cmp)
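The commented-out sketch above computes, for each symbol, the best probability of any derivation path from the start symbol down to that symbol. A runnable modern-Python version of that fixed-point loop; the `(lhs, rhs, prob)` grammar encoding and the `best_paths` name are assumptions, and the Python 2 `cmp`-based sorting is omitted:

```python
# Productions as (lhs, rhs-tuple, probability).
productions = [('S', ('NP', 'VP'), 1.0),
               ('VP', ('V', 'NP'), 0.6),
               ('VP', ('V',), 0.4),
               ('NP', ('John',), 1.0)]

def best_paths(productions, start):
    """For each symbol, the highest probability of any derivation path
    from `start` down to that symbol (ignoring everything else)."""
    bestp = {lhs: 0.0 for (lhs, _, _) in productions}
    bestp[start] = 1.0
    # Iterating len(productions) times is enough for the values to settle.
    for _ in range(len(productions)):
        for (lhs, rhs, prob) in productions:
            for elt in rhs:
                bestp[elt] = max(bestp.get(lhs, 0.0) * prob,
                                 bestp.get(elt, 0.0))
    return bestp

bestp = best_paths(productions, 'S')
# bestp['VP'] == 1.0; bestp['V'] == 0.6 (via VP -> V NP)
```

Multiplying an edge's inside probability by `bestp[edge.lhs()]` would give the outside-estimate ordering the comment envisions.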
""" A bottom-up parser for ``PCFG`` grammars that tries edges in random order. This sorting order results in a random search strategy. """ # Inherit constructor
""" A bottom-up parser for ``PCFG`` grammars that tries edges in whatever order. """ # Inherit constructor
""" A bottom-up parser for ``PCFG`` grammars that tries longer edges before shorter ones. This sorting order results in a type of best-first search strategy. """ # Inherit constructor
##//////////////////////////////////////////////////////
##  Test Code
##//////////////////////////////////////////////////////
""" A demonstration of the probabilistic parsers. The user is prompted to select which demo to run, and how many parses should be found; and then each parser is run on the same demo, and a summary of the results are displayed. """ import sys, time from nltk import tokenize, toy_pcfg1, toy_pcfg2 from nltk.parse import pchart
    # Define two demos.  Each demo has a sentence and a grammar.
    demos = [('I saw John with my telescope', toy_pcfg1),
             ('the boy saw Jack with Bob under the table with a telescope',
              toy_pcfg2)]
    if choice is None:
        # Ask the user which demo they want to use.
        print()
        for i in range(len(demos)):
            print('%3s: %s' % (i+1, demos[i][0]))
            print('     %r' % demos[i][1])
            print()
        print('Which demo (%d-%d)? ' % (1, len(demos)), end=' ')
        choice = int(sys.stdin.readline().strip())-1
    try:
        sent, grammar = demos[choice]
    except IndexError:
        print('Bad sentence number')
        return
    # Tokenize the sentence.
    tokens = sent.split()
    # Define a list of parsers.  We'll use all parsers.
    parsers = [
        pchart.InsideChartParser(grammar),
        pchart.RandomChartParser(grammar),
        pchart.UnsortedChartParser(grammar),
        pchart.LongestChartParser(grammar),
        pchart.InsideChartParser(grammar, beam_size=len(tokens)+1)  # was BeamParser
        ]
    # Run the parsers on the tokenized sentence.
    times = []
    average_p = []
    num_parses = []
    all_parses = {}
    for parser in parsers:
        print('\ns: %s\nparser: %s\ngrammar: %s' % (sent, parser, grammar))
        parser.trace(3)
        t = time.time()
        parses = parser.nbest_parse(tokens)
        times.append(time.time()-t)
        if parses:
            p = reduce(lambda a, b: a+b.prob(), parses, 0)/len(parses)
        else:
            p = 0
        average_p.append(p)
        num_parses.append(len(parses))
        for p in parses:
            all_parses[p.freeze()] = 1
    # Print some summary statistics
    print()
    print('       Parser      Beam | Time (secs)   # Parses   Average P(parse)')
    print('------------------------+------------------------------------------')
    for i in range(len(parsers)):
        print('%18s %4d |%11.4f%11d%19.14f' % (parsers[i].__class__.__name__,
                                               parsers[i].beam_size,
                                               times[i],
                                               num_parses[i],
                                               average_p[i]))
    parses = list(all_parses.keys())
    if parses:
        p = reduce(lambda a, b: a+b.prob(), parses, 0)/len(parses)
    else:
        p = 0
    print('------------------------+------------------------------------------')
    print('%18s      |%11s%11d%19.14f' % ('(All Parses)', 'n/a', len(parses), p))
    if draw_parses is None:
        # Ask the user if we should draw the parses.
        print()
        print('Draw parses (y/n)? ', end=' ')
        draw_parses = sys.stdin.readline().strip().lower().startswith('y')
    if draw_parses:
        from nltk.draw.tree import draw_trees
        print('  please wait...')
        draw_trees(*parses)
    if print_parses is None:
        # Ask the user if we should print the parses.
        print()
        print('Print parses (y/n)? ', end=' ')
        print_parses = sys.stdin.readline().strip().lower().startswith('y')
    if print_parses:
        for parse in parses:
            print(parse)
if __name__ == '__main__':
    demo()