Coverage for nltk.corpus.reader.util : 64%
# Natural Language Toolkit: Corpus Reader Utilities
#
# Copyright (C) 2001-2012 NLTK Project
# Author: Steven Bird <sb@ldc.upenn.edu>
#         Edward Loper <edloper@gradient.cis.upenn.edu>
# URL: <http://www.nltk.org/>
# For license information, see LICENSE.TXT
# Use the c version of ElementTree, which is faster, if possible:
try:
    from xml.etree import cElementTree as ElementTree
except ImportError:
    from xml.etree import ElementTree
######################################################################
#{ Corpus View
######################################################################
""" A 'view' of a corpus file, which acts like a sequence of tokens: it can be accessed by index, iterated over, etc. However, the tokens are only constructed as-needed -- the entire corpus is never stored in memory at once.
The constructor to ``StreamBackedCorpusView`` takes two arguments: a corpus fileid (specified as a string or as a ``PathPointer``); and a block reader. A "block reader" is a function that reads zero or more tokens from a stream, and returns them as a list. A very simple example of a block reader is:
>>> def simple_block_reader(stream):
...     return stream.readline().split()
This simple block reader reads a single line at a time, and returns a single token (consisting of a string) for each whitespace-separated substring on the line.
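A corpus view built from this block reader can then be used like an ordinary sequence. The following sketch assumes a hypothetical plain-text file ``'corpus.txt'``; tokens are only constructed when they are actually accessed:

.. doctest::
    :options: +SKIP

    >>> view = StreamBackedCorpusView('corpus.txt', simple_block_reader)
    >>> view[0]          # reads just enough blocks to build token 0
    >>> view[100:110]    # likewise, only a small slice of the corpus
    >>> len(view)        # forces a single pass over the whole file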
When deciding how to define the block reader for a given corpus, careful consideration should be given to the size of blocks handled by the block reader. Smaller block sizes will increase the memory requirements of the corpus view's internal data structures (by 2 integers per block). On the other hand, larger block sizes may decrease performance for random access to the corpus. (But note that larger block sizes will *not* decrease performance for iteration.)
Internally, ``CorpusView`` maintains a partial mapping from token index to file position, with one entry per block. When a token with a given index *i* is requested, the ``CorpusView`` constructs it as follows:
1. First, it searches the toknum/filepos mapping for the token index closest to (but less than or equal to) *i*.
2. Then, starting at the file position corresponding to that index, it reads one block at a time using the block reader until it reaches the requested token.
The toknum/filepos mapping is created lazily: it is initially empty, but every time a new block is read, the block's initial token is added to the mapping. (Thus, the toknum/filepos map has one entry per block.)
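For example, the lookup in step 1 can be implemented as a binary search over the ``_toknum`` list; a minimal sketch with made-up mapping values:

    >>> from bisect import bisect_right
    >>> toknum  = [0, 5, 12, 20]       # token index of each block's first token
    >>> filepos = [0, 40, 102, 175]    # file position where each block starts
    >>> i = 14                         # requested token index
    >>> block = bisect_right(toknum, i) - 1
    >>> toknum[block], filepos[block]
    (12, 102)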
In order to increase efficiency for random access patterns that have high degrees of locality, the corpus view may cache one or more blocks.
:note: Each ``CorpusView`` object internally maintains an open file object for its underlying corpus file. This file should be automatically closed when the ``CorpusView`` is garbage collected, but if you wish to close it manually, use the ``close()`` method. If you access a ``CorpusView``'s items after it has been closed, the file object will be automatically re-opened.
:warning: If the contents of the file are modified during the lifetime of the ``CorpusView``, then the ``CorpusView``'s behavior is undefined.
:warning: If a unicode encoding is specified when constructing a ``CorpusView``, then the block reader may only call ``stream.seek()`` with offsets that have been returned by ``stream.tell()``; in particular, calling ``stream.seek()`` with relative offsets, or with offsets based on string lengths, may lead to incorrect behavior.
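For example, a block reader that reads one line too far and needs to back up should do so with an offset previously obtained from ``stream.tell()``. The reader below, and the ``'<DOC>'`` marker it looks for, are purely illustrative:

    >>> def marker_block_reader(stream):
    ...     toks = []
    ...     while True:
    ...         oldpos = stream.tell()       # safe: offset comes from tell()
    ...         line = stream.readline()
    ...         if not line:                 # end of file
    ...             return toks
    ...         if line.startswith('<DOC>') and toks:
    ...             # Back up to the saved offset (never seek by string length).
    ...             stream.seek(oldpos)
    ...             return toks
    ...         toks.extend(line.split())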
:ivar _block_reader: The function used to read a single block from the underlying file stream.
:ivar _toknum: A list containing the token index of each block that has been processed. In particular, ``_toknum[i]`` is the token index of the first token in block ``i``. Together with ``_filepos``, this forms a partial mapping between token indices and file positions.
:ivar _filepos: A list containing the file position of each block that has been processed. In particular, ``_filepos[i]`` is the file position of the first character in block ``i``. Together with ``_toknum``, this forms a partial mapping between token indices and file positions.
:ivar _stream: The stream used to access the underlying corpus file.
:ivar _len: The total number of tokens in the corpus, if known; or None, if the number of tokens is not yet known.
:ivar _eofpos: The character position of the last character in the file. This is calculated when the corpus view is initialized, and is used to decide when the end of file has been reached.
:ivar _cache: A cache of the most recently read block. It is encoded as a tuple (start_toknum, end_toknum, tokens), where start_toknum is the token index of the first token in the block; end_toknum is the token index of the first token not in the block; and tokens is a list of the tokens in the block.
"""

def __init__(self, fileid, block_reader=None, startpos=0,
             encoding=None, source=None):
    """
    Create a new corpus view, based on the file ``fileid``, and read with ``block_reader``. See the class documentation for more information.
:param fileid: The path to the file that is read by this corpus view. ``fileid`` can either be a string or a ``PathPointer``.
:param startpos: The file position at which the view will start reading. This can be used to skip over preface sections.
:param encoding: The unicode encoding that should be used to read the file's contents. If no encoding is specified, then the file's contents will be read as a non-unicode string (i.e., a str).
:param source: If specified, then use a ``SourcedStringStream`` to annotate all strings read from the file with information about their start offset, end offset, and docid. The value of ``source`` will be used as the docid.
"""
# Initialize our toknum/filepos mapping.
# We don't know our length (number of tokens) yet.
"""This variable is set to the index of the next token that will be read, immediately before ``self.read_block()`` is called. This is provided for the benefit of the block reader, which under rare circumstances may need to know the current token number."""
"""This variable is set to the index of the next block that will be read, immediately before ``self.read_block()`` is called. This is provided for the benefit of the block reader, which under rare circumstances may need to know the current block number."""
# Find the length of the file.
try:
    if isinstance(self._fileid, PathPointer):
        self._eofpos = self._fileid.file_size()
    else:
        self._eofpos = os.stat(self._fileid).st_size
except Exception as exc:
    raise ValueError('Unable to open or access %r -- %s' % (fileid, exc))
# Maintain a cache of the most recently read block, to
# increase efficiency of random access.
fileid = property(lambda self: self._fileid, doc="""
    The fileid of the file that is accessed by this view.

    :type: str or PathPointer""")
""" Read a block from the input stream.
:return: a block of tokens from the input stream
:rtype: list(any)
:param stream: an input stream
:type stream: stream
"""
raise NotImplementedError('Abstract Method')
""" Open the file stream associated with this corpus view. This will be called performed if any value is read from the view while its file stream is closed. """ elif self._encoding: self._stream = SeekableUnicodeStreamReader( open(self._fileid, 'rb'), self._encoding) else: self._stream = open(self._fileid, 'rb')
""" Close the file stream associated with this corpus view. This can be useful if you are worried about running out of file handles (although the stream should automatically be closed upon garbage collection of the corpus view). If the corpus view is accessed after it is closed, it will be automatically re-opened. """
# iterate_from() sets self._len when it reaches the end
# of the file:
# Check if it's in the cache.
return self._cache[2][start-offset:stop-offset]
# Construct & return the result.
else:
    # Handle negative indices
    # Check if it's in the cache.
    # Use iterate_from to extract it.
    raise IndexError('index out of range')
# If we wanted to be thread-safe, then this method would need to
# do some locking.

# Start by feeding from the cache, if possible.

# Decide where in the file we should start.  If `start` is in
# our mapping, then we can jump straight to the correct block;
# otherwise, start at the last block we've processed.
else:

# Open the stream, if it's not open already.

# Each iteration through this loop, we read a single block
# from the stream.

# Read the next block.
    'block reader %s() should return list or tuple.' %
    self.read_block.__name__)
    'block reader %s() should consume at least 1 byte (filepos=%d)' %
    (self.read_block.__name__, filepos))

# Update our cache.

# Update our mapping.
else:
    # Check for consistency:
        'inconsistent block reader (num chars read)')
        'inconsistent block reader (num tokens returned)')

# If we reached the end of the file, then update self._len

# Generate the tokens in this block (but skip any tokens
# before start_tok).  Note that between yields, our state
# may be modified.

# If we're at the end of the file, then we're done.

# Update our indices

# If we reach this point, then we should know our length.
# Use concat for these, so we can use a ConcatenatedCorpusView
# when possible.
def __add__(self, other):
    return concat([self, other])
def __radd__(self, other):
    return concat([other, self])
def __mul__(self, count):
    return concat([self] * count)
def __rmul__(self, count):
    return concat([self] * count)
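# For example (assuming ``view1`` and ``view2`` are corpus views over existing
# files), these operators build lazy concatenations rather than materializing
# the tokens in memory:
#
#     >>> combined = view1 + view2    # a ConcatenatedCorpusView
#     >>> tripled = view1 * 3         # equivalent to concat([view1] * 3)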
""" A 'view' of a corpus file that joins together one or more ``StreamBackedCorpusViews<StreamBackedCorpusView>``. At most one file handle is left open at any time. """ """A list of the corpus subviews that make up this concatenation."""
"""A list of offsets, indicating the index at which each subview begins. In particular:: offsets[i] = sum([len(p) for p in pieces[:i]])"""
Before a new subview is accessed, this subview will be closed."""
# Iterate to the end of the corpus.
for piece in self._pieces:
    piece.close()
# If we've got another piece open, close it first.
# Get everything we can from this piece.
# Update the offset table.
# Move on to the next piece.
""" Concatenate together the contents of multiple documents from a single corpus, using an appropriate concatenation function. This utility function is used by corpus readers when the user requests more than one document at a time. """ raise ValueError('concat() expects at least one object!')
# If they're all strings, use string concatenation.
# If they're all corpus views, then use ConcatenatedCorpusView.
    ConcatenatedCorpusView)):
else:

# If they're all lazy sequences, use a lazy concatenation
else:
    return LazyConcatenation(docs)

# Otherwise, see what we can do:
return reduce((lambda a,b:a+b), docs, [])
return reduce((lambda a,b:a+b), docs, ())
xmltree = ElementTree.Element('documents')
for doc in docs:
    xmltree.append(doc)
return xmltree
# No method found!
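# As a rough illustration of the fallbacks above (the corpus-view and lazy
# sequence branches behave analogously, but lazily):
#
#     >>> concat(['ab', 'cd'])      # strings are joined
#     'abcd'
#     >>> concat([[1, 2], [3]])     # lists are summed together
#     [1, 2, 3]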
######################################################################
#{ Corpus View for Pickled Sequences
######################################################################
""" A stream backed corpus view for corpus files that consist of sequences of serialized Python objects (serialized using ``pickle.dump``). One use case for this class is to store the result of running feature detection on a corpus to disk. This can be useful when performing feature detection is expensive (so we don't want to repeat it); but the corpus is too large to store in memory. The following example illustrates this technique:
.. doctest::
    :options: +SKIP
>>> from nltk.corpus.reader.util import PickleCorpusView
>>> from nltk.util import LazyMap
>>> feature_corpus = LazyMap(detect_features, corpus)
>>> PickleCorpusView.write(feature_corpus, some_fileid)
>>> pcv = PickleCorpusView(some_fileid)
"""
""" Create a new corpus view that reads the pickle corpus ``fileid``.
:param delete_on_gc: If true, then ``fileid`` will be deleted whenever this object gets garbage-collected.
"""
self._delete_on_gc = delete_on_gc
StreamBackedCorpusView.__init__(self, fileid)
result = []
for i in range(self.BLOCK_SIZE):
    try:
        result.append(pickle.load(stream))
    except EOFError:
        break
return result
""" If ``delete_on_gc`` was set to true when this ``PickleCorpusView`` was created, then delete the corpus view's fileid. (This method is called whenever a ``PickledCorpusView`` is garbage-collected. """ if getattr(self, '_delete_on_gc'): if os.path.exists(self._fileid): try: os.remove(self._fileid) except (OSError, IOError): pass self.__dict__.clear() # make the garbage collector's job easier
@classmethod
def write(cls, sequence, output_file):
    if isinstance(output_file, compat.string_types):
        output_file = open(output_file, 'wb')
    for item in sequence:
        pickle.dump(item, output_file, cls.PROTOCOL)
""" Write the given sequence to a temporary file as a pickle corpus; and then return a ``PickleCorpusView`` view for that temporary corpus file.
:param delete_on_gc: If true, then the temporary file will be deleted whenever this object gets garbage-collected.
"""
try:
    fd, output_file_name = tempfile.mkstemp('.pcv', 'nltk-')
    output_file = os.fdopen(fd, 'wb')
    cls.write(sequence, output_file)
    output_file.close()
    return PickleCorpusView(output_file_name, delete_on_gc)
except (OSError, IOError) as e:
    raise ValueError('Error while creating temp file: %s' % e)
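# A sketch of how the classmethod documented above (``cache_to_tempfile`` in
# NLTK) is typically used; ``detect_features`` and ``corpus`` are hypothetical:
#
#     >>> from nltk.util import LazyMap
#     >>> feature_corpus = LazyMap(detect_features, corpus)
#     >>> pcv = PickleCorpusView.cache_to_tempfile(feature_corpus)
#     >>> # pcv reads the pickled items back lazily; the temporary file is
#     >>> # removed once pcv is garbage-collected, if delete_on_gc was requested.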
######################################################################
#{ Block Readers
######################################################################
def read_whitespace_block(stream):
    toks = []
    for i in range(20):  # Read 20 lines at a time.
        toks.extend(stream.readline().split())
    return toks
def read_wordpunct_block(stream):
    toks = []
    for i in range(20):  # Read 20 lines at a time.
        toks.extend(wordpunct_tokenize(stream.readline()))
    return toks
def read_line_block(stream):
    toks = []
    for i in range(20):
        line = stream.readline()
        if not line:
            return toks
        toks.append(line.rstrip('\n'))
    return toks
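# Any of these block readers can be handed straight to a corpus view; for
# example, with a hypothetical plain-text file 'corpus.txt':
#
#     >>> words = StreamBackedCorpusView('corpus.txt', read_whitespace_block)
#     >>> lines = StreamBackedCorpusView('corpus.txt', read_line_block)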
# End of file:
else:
    return []
# Blank line:
# Other line:
else:

continue
# End of file:
if s:
    return [s]
else:
    return []
# Other line:
else:
""" Read a sequence of tokens from a stream, where tokens begin with lines that match ``start_re``. If ``end_re`` is specified, then tokens end with lines that match ``end_re``; otherwise, tokens end whenever the next line matching ``start_re`` or EOF is found. """ # Scan until we find a line matching the start regexp.
# Scan until we find another line matching the regexp, or EOF.
# End of file:
# End of token:
# Start of new token: backup to just before it starts, and
# return the token we've already collected.
# Anything else is part of the token.
""" Read a sequence of s-expressions from the stream, and leave the stream's file position at the end the last complete s-expression read. This function will always return at least one s-expression, unless there are no more s-expressions in the file.
If the file ends in in the middle of an s-expression, then that incomplete s-expression is returned when the end of the file is reached.
:param block_size: The default block size for reading. If an s-expression is longer than one block, then more than one block will be read.
:param comment_char: A character that marks comments. Any lines that begin with this character will be stripped out. (If spaces or tabs precede the comment character, then the line will not be stripped.)
"""
start = stream.tell()
block = stream.read(block_size)
encoding = getattr(stream, 'encoding', None)
assert encoding is not None or isinstance(block, str)
if encoding not in (None, 'utf-8'):
    import warnings
    warnings.warn('Parsing may fail, depending on the properties '
                  'of the %s encoding!' % encoding)
    # (e.g., the utf-16 encoding does not work because it insists
    # on adding BOMs to the beginning of encoded strings.)
if comment_char:
    COMMENT = re.compile('(?m)^%s.*$' % re.escape(comment_char))
while True:
    try:
        # If we're stripping comments, then make sure our block ends
        # on a line boundary; and then replace any comments with
        # space characters.  (We can't just strip them out -- that
        # would make our offset wrong.)
        if comment_char:
            block += stream.readline()
            block = re.sub(COMMENT, _sub_space, block)
        # Read the block.
        tokens, offset = _parse_sexpr_block(block)
        # Skip whitespace
        offset = re.compile(r'\s*').search(block, offset).end()
        # Move to the end position.
        if encoding is None:
            stream.seek(start+offset)
        else:
            stream.seek(start+len(block[:offset].encode(encoding)))
        # Return the list of tokens we processed
        return tokens
    except ValueError as e:
        if e.args[0] == 'Block too small':
            next_block = stream.read(block_size)
            if next_block:
                block += next_block
                continue
            else:
                # The file ended mid-sexpr -- return what we got.
                return [block.strip()]
        else:
            raise
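# For instance, reading from an in-memory stream (in Python 3, io.StringIO is
# enough here, since read_sexpr_block only needs read/tell/seek):
#
#     >>> from io import StringIO
#     >>> stream = StringIO('(a (b c)) (d e) ')
#     >>> read_sexpr_block(stream)
#     ['(a (b c))', '(d e)']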
"""Helper function: given a regexp match, return a string of spaces that's the same length as the matched string.""" return ' '*(m.end()-m.start())
def _parse_sexpr_block(block):
    tokens = []
    start = end = 0

    while end < len(block):
        m = re.compile(r'\S').search(block, end)
        if not m:
            return tokens, end

        start = m.start()

        # Case 1: sexpr is not parenthesized.
        if m.group() != '(':
            m2 = re.compile(r'[\s(]').search(block, start)
            if m2:
                end = m2.start()
            else:
                if tokens:
                    return tokens, end
                raise ValueError('Block too small')

        # Case 2: parenthesized sexpr.
        else:
            nesting = 0
            for m in re.compile(r'[()]').finditer(block, start):
                if m.group() == '(':
                    nesting += 1
                else:
                    nesting -= 1
                if nesting == 0:
                    end = m.end()
                    break
            else:
                if tokens:
                    return tokens, end
                raise ValueError('Block too small')

        tokens.append(block[start:end])

    return tokens, end
######################################################################
#{ Finding Corpus Items
######################################################################
def find_corpus_fileids(root, regexp):
    if not isinstance(root, PathPointer):
        raise TypeError('find_corpus_fileids: expected a PathPointer')
# Find fileids in a zipfile: scan the zipfile's namelist.  Filter
# out entries that end in '/' -- they're directories.
    if not name.endswith('/')]

# Find fileids in a directory: use os.walk to search all
# subdirectories, and match paths against the regexp.
    if re.match(regexp, prefix+fileid)]
# Don't visit svn directories:

else:
    raise AssertionError("Don't know how to handle %r" % root)
parent = os.path.split(parent)[0]
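# For example, listing all '.txt' files under an on-disk corpus root (the
# path below is hypothetical):
#
#     >>> from nltk.data import FileSystemPathPointer
#     >>> root = FileSystemPathPointer('/path/to/corpus')
#     >>> find_corpus_fileids(root, r'.*\.txt')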
######################################################################
#{ Paragraph structure in Treebank files
######################################################################
def tagged_treebank_para_block_reader(stream):
    # Read the next paragraph.
    para = ''
    while True:
        line = stream.readline()
        # End of paragraph:
        if re.match(r'======+\s*$', line):
            if para.strip():
                return [para]
        # End of file:
        elif line == '':
            if para.strip():
                return [para]
            else:
                return []
        # Content line:
        else:
            para += line