{ "cells": [ { "cell_type": "markdown", "metadata": { "id": "oL9KopJirB2g" }, "source": [ "##### Copyright 2018 The TensorFlow Authors." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "cellView": "form", "id": "SKaX3Hd3ra6C" }, "outputs": [], "source": [ "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", "# You may obtain a copy of the License at\n", "#\n", "# https://www.apache.org/licenses/LICENSE-2.0\n", "#\n", "# Unless required by applicable law or agreed to in writing, software\n", "# distributed under the License is distributed on an \"AS IS\" BASIS,\n", "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n", "# See the License for the specific language governing permissions and\n", "# limitations under the License." ] }, { "cell_type": "markdown", "metadata": { "id": "AXH1bmUctMld" }, "source": [ "# Working with Unicode text\n", "\n", "\n", " \n", " \n", "
\n", " Run in Google Colab\n", " \n", " View source on GitHub\n", "
" ] }, { "cell_type": "markdown", "metadata": { "id": "AXH1bmUctMld" }, "source": [ "> Note: This is an archived TF1 notebook. These are configured\n", "to run in TF2's \n", "[compatibility mode](https://www.tensorflow.org/guide/migrate)\n", "but will run in TF1 as well. To use TF1 in Colab, use the\n", "[%tensorflow_version 1.x](https://colab.research.google.com/notebooks/tensorflow_version.ipynb)\n", "magic." ] }, { "cell_type": "markdown", "metadata": { "id": "LrHJrKYis06U" }, "source": [ "## Introduction\n", "\n", "Models that process natural language often handle different languages with different character sets. *Unicode* is a standard encoding system that is used to represent character from almost all languages. Each character is encoded using a unique integer [code point](https://en.wikipedia.org/wiki/Code_point) between `0` and `0x10FFFF`. A *Unicode string* is a sequence of zero or more code points.\n", "\n", "This tutorial shows how to represent Unicode strings in TensorFlow and manipulate them using Unicode equivalents of standard string ops. It separates Unicode strings into tokens based on script detection." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "OIKHl5Lvn4gh" }, "outputs": [], "source": [ "import tensorflow.compat.v1 as tf\n" ] }, { "cell_type": "markdown", "metadata": { "id": "n-LkcI-vtWNj" }, "source": [ "## The `tf.string` data type\n", "\n", "The basic TensorFlow `tf.string` `dtype` allows you to build tensors of byte strings.\n", "Unicode strings are utf-8 encoded by default." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "3yo-Qv6ntaFr" }, "outputs": [], "source": [ "tf.constant(u\"Thanks 😊\")" ] }, { "cell_type": "markdown", "metadata": { "id": "2kA1ziG2tyCT" }, "source": [ "A `tf.string` tensor can hold byte strings of varying lengths because the byte strings are treated as atomic units. The string length is not included in the tensor dimensions.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "eyINCmTztyyS" }, "outputs": [], "source": [ "tf.constant([u\"You're\", u\"welcome!\"]).shape" ] }, { "cell_type": "markdown", "metadata": { "id": "jsMPnjb6UDJ1" }, "source": [ "Note: When using python to construct strings, the handling of unicode differs between v2 and v3. In v2, unicode strings are indicated by the \"u\" prefix, as above. In v3, strings are unicode-encoded by default." ] }, { "cell_type": "markdown", "metadata": { "id": "hUFZ7B1Lk-uj" }, "source": [ "## Representing Unicode\n", "\n", "There are two standard ways to represent a Unicode string in TensorFlow:\n", "\n", "* `string` scalar — where the sequence of code points is encoded using a known [character encoding](https://en.wikipedia.org/wiki/Character_encoding).\n", "* `int32` vector — where each position contains a single code point.\n", "\n", "For example, the following three values all represent the Unicode string `\"语言处理\"` (which means \"language processing\" in Chinese):" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "cjQIkfJWvC_u" }, "outputs": [], "source": [ "# Unicode string, represented as a UTF-8 encoded string scalar.\n", "text_utf8 = tf.constant(u\"语言处理\")\n", "text_utf8" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "yQqcUECcvF2r" }, "outputs": [], "source": [ "# Unicode string, represented as a UTF-16-BE encoded string scalar.\n", "text_utf16be = tf.constant(u\"语言处理\".encode(\"UTF-16-BE\"))\n", "text_utf16be" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "ExdBr1t7vMuS" }, "outputs": [], "source": [ "# Unicode string, represented as a vector of Unicode code points.\n", "text_chars = tf.constant([ord(char) for char in u\"语言处理\"])\n", "text_chars" ] }, { "cell_type": "markdown", "metadata": { "id": "B8czv4JNpBnZ" }, "source": [ "### Converting between representations\n", "\n", "TensorFlow provides operations to convert between these different representations:\n", "\n", "* `tf.strings.unicode_decode`: Converts an encoded string scalar to a vector of code points.\n", "* `tf.strings.unicode_encode`: Converts a vector of code points to an encoded string scalar.\n", "* `tf.strings.unicode_transcode`: Converts an encoded string scalar to a different encoding." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "qb-UQ_oLpAJg" }, "outputs": [], "source": [ "tf.strings.unicode_decode(text_utf8,\n", " input_encoding='UTF-8')" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "kEBUcunnp-9n" }, "outputs": [], "source": [ "tf.strings.unicode_encode(text_chars,\n", " output_encoding='UTF-8')" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "0MLhWcLZrph-" }, "outputs": [], "source": [ "tf.strings.unicode_transcode(text_utf8,\n", " input_encoding='UTF8',\n", " output_encoding='UTF-16-BE')" ] }, { "cell_type": "markdown", "metadata": { "id": "QVeLeVohqN7I" }, "source": [ "### Batch dimensions\n", "\n", "When decoding multiple strings, the number of characters in each string may not be equal. The return result is a [`tf.RaggedTensor`](../../guide/ragged_tensors.ipynb), where the length of the innermost dimension varies depending on the number of characters in each string:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "N2jVzPymr_Mm" }, "outputs": [], "source": [ "# A batch of Unicode strings, each represented as a UTF8-encoded string.\n", "batch_utf8 = [s.encode('UTF-8') for s in\n", " [u'hÃllo', u'What is the weather tomorrow', u'Göödnight', u'😊']]\n", "batch_chars_ragged = tf.strings.unicode_decode(batch_utf8,\n", " input_encoding='UTF-8')\n", "for sentence_chars in batch_chars_ragged.to_list():\n", " print(sentence_chars)" ] }, { "cell_type": "markdown", "metadata": { "id": "iRh3n1hPsJ9v" }, "source": [ "You can use this `tf.RaggedTensor` directly, or convert it to a dense `tf.Tensor` with padding or a `tf.SparseTensor` using the methods `tf.RaggedTensor.to_tensor` and `tf.RaggedTensor.to_sparse`." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "yz17yeSMsUid" }, "outputs": [], "source": [ "batch_chars_padded = batch_chars_ragged.to_tensor(default_value=-1)\n", "print(batch_chars_padded.numpy())" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "kBjsPQp3rhfm" }, "outputs": [], "source": [ "batch_chars_sparse = batch_chars_ragged.to_sparse()" ] }, { "cell_type": "markdown", "metadata": { "id": "GCCkZh-nwlbL" }, "source": [ "When encoding multiple strings with the same lengths, a `tf.Tensor` may be used as input:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "_lP62YUAwjK9" }, "outputs": [], "source": [ "tf.strings.unicode_encode([[99, 97, 116], [100, 111, 103], [ 99, 111, 119]],\n", " output_encoding='UTF-8')" ] }, { "cell_type": "markdown", "metadata": { "id": "w58CMRg9tamW" }, "source": [ "When encoding multiple strings with varyling length, a `tf.RaggedTensor` should be used as input:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "d7GtOtrltaMl" }, "outputs": [], "source": [ "tf.strings.unicode_encode(batch_chars_ragged, output_encoding='UTF-8')" ] }, { "cell_type": "markdown", "metadata": { "id": "T2Nh5Aj9xob3" }, "source": [ "If you have a tensor with multiple strings in padded or sparse format, then convert it to a `tf.RaggedTensor` before calling `unicode_encode`:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "R2bYCYl0u-Ue" }, "outputs": [], "source": [ " tf.strings.unicode_encode(\n", " tf.RaggedTensor.from_sparse(batch_chars_sparse),\n", " output_encoding='UTF-8')" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "UlV2znh_u_zm" }, "outputs": [], "source": [ " tf.strings.unicode_encode(\n", " tf.RaggedTensor.from_tensor(batch_chars_padded, padding=-1),\n", " output_encoding='UTF-8')" ] }, { "cell_type": "markdown", "metadata": { "id": "hQOOGkscvDpc" }, "source": [ "## Unicode operations" ] }, { "cell_type": "markdown", "metadata": { "id": "NkmtsA_yvMB0" }, "source": [ "### Character length\n", "\n", "The `tf.strings.length` operation has a parameter `unit`, which indicates how lengths should be computed. `unit` defaults to `\"BYTE\"`, but it can be set to other values, such as `\"UTF8_CHAR\"` or `\"UTF16_CHAR\"`, to determine the number of Unicode codepoints in each encoded `string`." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "1ZzMe59mvLHr" }, "outputs": [], "source": [ "# Note that the final character takes up 4 bytes in UTF8.\n", "thanks = u'Thanks 😊'.encode('UTF-8')\n", "num_bytes = tf.strings.length(thanks).numpy()\n", "num_chars = tf.strings.length(thanks, unit='UTF8_CHAR').numpy()\n", "print('{} bytes; {} UTF-8 characters'.format(num_bytes, num_chars))" ] }, { "cell_type": "markdown", "metadata": { "id": "fHG85gxlvVU0" }, "source": [ "### Character substrings\n", "\n", "Similarly, the `tf.strings.substr` operation accepts the \"`unit`\" parameter, and uses it to determine what kind of offsets the \"`pos`\" and \"`len`\" parameters contain." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "WlWRLV-4xWYq" }, "outputs": [], "source": [ "# default: unit='BYTE'. With len=1, we return a single byte.\n", "tf.strings.substr(thanks, pos=7, len=1).numpy()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "JfNUVDPwxkCS" }, "outputs": [], "source": [ "# Specifying unit='UTF8_CHAR', we return a single character, which in this case\n", "# is 4 bytes.\n", "print(tf.strings.substr(thanks, pos=7, len=1, unit='UTF8_CHAR').numpy())" ] }, { "cell_type": "markdown", "metadata": { "id": "zJUEsVSyeIa3" }, "source": [ "### Split Unicode strings\n", "\n", "The `tf.strings.unicode_split` operation splits unicode strings into substrings of individual characters:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "dDjkh5G1ejMt" }, "outputs": [], "source": [ "tf.strings.unicode_split(thanks, 'UTF-8').numpy()" ] }, { "cell_type": "markdown", "metadata": { "id": "HQqEEZEbdG9O" }, "source": [ "### Byte offsets for characters\n", "\n", "To align the character tensor generated by `tf.strings.unicode_decode` with the original string, it's useful to know the offset for where each character begins. The method `tf.strings.unicode_decode_with_offsets` is similar to `unicode_decode`, except that it returns a second tensor containing the start offset of each character." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "Cug7cmwYdowd" }, "outputs": [], "source": [ "codepoints, offsets = tf.strings.unicode_decode_with_offsets(u\"🎈🎉🎊\", 'UTF-8')\n", "\n", "for (codepoint, offset) in zip(codepoints.numpy(), offsets.numpy()):\n", " print(\"At byte offset {}: codepoint {}\".format(offset, codepoint))" ] }, { "cell_type": "markdown", "metadata": { "id": "2ZnCNxOvx66T" }, "source": [ "## Unicode scripts" ] }, { "cell_type": "markdown", "metadata": { "id": "nRRHqkqNyGZ6" }, "source": [ "Each Unicode code point belongs to a single collection of codepoints known as a [script](https://en.wikipedia.org/wiki/Script_%28Unicode%29) . A character's script is helpful in determining which language the character might be in. For example, knowing that 'Б' is in Cyrillic script indicates that modern text containing that character is likely from a Slavic language such as Russian or Ukrainian.\n", "\n", "TensorFlow provides the `tf.strings.unicode_script` operation to determine which script a given codepoint uses. The script codes are `int32` values corresponding to [International Components for\n", "Unicode](http://site.icu-project.org/home) (ICU) [`UScriptCode`](http://icu-project.org/apiref/icu4c/uscript_8h.html) values.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "K7DeYHrRyFPy" }, "outputs": [], "source": [ "uscript = tf.strings.unicode_script([33464, 1041]) # ['芸', 'Б']\n", "\n", "print(uscript.numpy()) # [17, 8] == [USCRIPT_HAN, USCRIPT_CYRILLIC]" ] }, { "cell_type": "markdown", "metadata": { "id": "2fW992a1lIY6" }, "source": [ "The `tf.strings.unicode_script` operation can also be applied to multidimensional `tf.Tensor`s or `tf.RaggedTensor`s of codepoints:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "uR7b8meLlFnp" }, "outputs": [], "source": [ "print(tf.strings.unicode_script(batch_chars_ragged))" ] }, { "cell_type": "markdown", "metadata": { "id": "mx7HEFpBzEsB" }, "source": [ "## Example: Simple segmentation\n", "\n", "Segmentation is the task of splitting text into word-like units. This is often easy when space characters are used to separate words, but some languages (like Chinese and Japanese) do not use spaces, and some languages (like German) contain long compounds that must be split in order to analyze their meaning. In web text, different languages and scripts are frequently mixed together, as in \"NY株価\" (New York Stock Exchange).\n", "\n", "We can perform very rough segmentation (without implementing any ML models) by using changes in script to approximate word boundaries. This will work for strings like the \"NY株価\" example above. It will also work for most languages that use spaces, as the space characters of various scripts are all classified as USCRIPT_COMMON, a special script code that differs from that of any actual text." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "grsvFiC4BoPb" }, "outputs": [], "source": [ "# dtype: string; shape: [num_sentences]\n", "#\n", "# The sentences to process. Edit this line to try out different inputs!\n", "sentence_texts = [u'Hello, world.', u'世界こんにちは']" ] }, { "cell_type": "markdown", "metadata": { "id": "CapnbShuGU8i" }, "source": [ "First, we decode the sentences into character codepoints, and find the script identifier for each character." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "ReQVcDQh1MB8" }, "outputs": [], "source": [ "# dtype: int32; shape: [num_sentences, (num_chars_per_sentence)]\n", "#\n", "# sentence_char_codepoint[i, j] is the codepoint for the j'th character in\n", "# the i'th sentence.\n", "sentence_char_codepoint = tf.strings.unicode_decode(sentence_texts, 'UTF-8')\n", "print(sentence_char_codepoint)\n", "\n", "# dtype: int32; shape: [num_sentences, (num_chars_per_sentence)]\n", "#\n", "# sentence_char_scripts[i, j] is the unicode script of the j'th character in\n", "# the i'th sentence.\n", "sentence_char_script = tf.strings.unicode_script(sentence_char_codepoint)\n", "print(sentence_char_script)" ] }, { "cell_type": "markdown", "metadata": { "id": "O2fapF5UGcUc" }, "source": [ "Next, we use those script identifiers to determine where word boundaries should be added. We add a word boundary at the beginning of each sentence, and for each character whose script differs from the previous character:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "7v5W6MOr1Rlc" }, "outputs": [], "source": [ "# dtype: bool; shape: [num_sentences, (num_chars_per_sentence)]\n", "#\n", "# sentence_char_starts_word[i, j] is True if the j'th character in the i'th\n", "# sentence is the start of a word.\n", "sentence_char_starts_word = tf.concat(\n", " [tf.fill([sentence_char_script.nrows(), 1], True),\n", " tf.not_equal(sentence_char_script[:, 1:], sentence_char_script[:, :-1])],\n", " axis=1)\n", "\n", "# dtype: int64; shape: [num_words]\n", "#\n", "# word_starts[i] is the index of the character that starts the i'th word (in\n", "# the flattened list of characters from all sentences).\n", "word_starts = tf.squeeze(tf.where(sentence_char_starts_word.values), axis=1)\n", "print(word_starts)" ] }, { "cell_type": "markdown", "metadata": { "id": "LAwh-1QkGuC9" }, "source": [ "We can then use those start offsets to build a `RaggedTensor` containing the list of words from all batches:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "bNiA1O_eBBCL" }, "outputs": [], "source": [ "# dtype: int32; shape: [num_words, (num_chars_per_word)]\n", "#\n", "# word_char_codepoint[i, j] is the codepoint for the j'th character in the\n", "# i'th word.\n", "word_char_codepoint = tf.RaggedTensor.from_row_starts(\n", " values=sentence_char_codepoint.values,\n", " row_starts=word_starts)\n", "print(word_char_codepoint)" ] }, { "cell_type": "markdown", "metadata": { "id": "66a2ZnYmG2ao" }, "source": [ "And finally, we can segment the word codepoints `RaggedTensor` back into sentences:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "NCfwcqLSEjZb" }, "outputs": [], "source": [ "# dtype: int64; shape: [num_sentences]\n", "#\n", "# sentence_num_words[i] is the number of words in the i'th sentence.\n", "sentence_num_words = tf.reduce_sum(\n", " tf.cast(sentence_char_starts_word, tf.int64),\n", " axis=1)\n", "\n", "# dtype: int32; shape: [num_sentences, (num_words_per_sentence), (num_chars_per_word)]\n", "#\n", "# sentence_word_char_codepoint[i, j, k] is the codepoint for the k'th character\n", "# in the j'th word in the i'th sentence.\n", "sentence_word_char_codepoint = tf.RaggedTensor.from_row_lengths(\n", " values=word_char_codepoint,\n", " row_lengths=sentence_num_words)\n", "print(sentence_word_char_codepoint)" ] }, { "cell_type": "markdown", "metadata": { "id": "xWaX8WcbHyqY" }, "source": [ "To make the final result easier to read, we can encode it back into UTF-8 strings:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "HSivquOgFr3C" }, "outputs": [], "source": [ "tf.strings.unicode_encode(sentence_word_char_codepoint, 'UTF-8').to_list()" ] } ], "metadata": { "colab": { "collapsed_sections": [ "oL9KopJirB2g" ], "name": "unicode.ipynb", "toc_visible": true }, "kernelspec": { "display_name": "Python 3", "name": "python3" } }, "nbformat": 4, "nbformat_minor": 0 }