{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Creating Text-Fabric from GBI trees (XML nodes )\n", "The source data for the conversion are the XML node files representing the macula-greek version of the Nestle 1904 Greek New Testment. The most recent source data can be found on github https://github.com/Clear-Bible/macula-greek/tree/main/Nestle1904/nodes. Attribution: \"MACULA Greek Linguistic Datasets, available at https://github.com/Clear-Bible/macula-greek/\". \n", "\n", "The production of the Text-Fabric files consist of two steps. First the creation of piclke files (part 1). Secondly the actual TextFabric creation process (part 2). Both steps are independent allowing to start from Part 2 by using the pickle files as input. \n", "\n", "Be advised that this Text-Fabric version is a test version (proof of concept) and requires further finetuning, especialy with regards of nomenclature and presentation of (sub)phrases and clauses." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Table of content \n", "* [Part 1: Read XML data and store in pickle](#first-bullet)\n", "* [Part 2: Nestle1904 production from pickle input](#second-bullet)\n", "* [Part 3: Testing the created textfabric data](#third-bullet)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Part 1: Read XML data and store in pickle \n", "##### [back to TOC](#TOC)\n", "\n", "This script harvests all information from the GBI tree data (XML nodes), puts it into a Panda DataFrame and stores the result per book in a pickle file. Note: pickling (in Python) is serialising an object into a disk file (or buffer). \n", "\n", "In the context of this script, 'Leaf' refers to those node containing the Greek word as data, which happen to be the nodes without any child (hence the analogy with the leaves on the tree). These 'leafs' can also be refered to as 'terminal nodes'. Futher, Parent1 is the leaf's parent, Parent2 is Parent1's parent, etc.\n", "\n", "For a full description of the source data see document [MACULA Greek Treebank for the Nestle 1904 Greek New Testament.pdf](https://github.com/Clear-Bible/macula-greek/blob/main/doc/MACULA%20Greek%20Treebank%20for%20the%20Nestle%201904%20Greek%20New%20Testament.pdf)" ] }, { "cell_type": "markdown", "metadata": { "tags": [] }, "source": [ "### Step 1: import various libraries" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "ExecuteTime": { "end_time": "2022-10-28T02:58:14.739227Z", "start_time": "2022-10-28T02:57:38.766097Z" } }, "outputs": [], "source": [ "import pandas as pd\n", "import sys\n", "import os\n", "import time\n", "import pickle\n", "\n", "import re #regular expressions\n", "from os import listdir\n", "from os.path import isfile, join\n", "import xml.etree.ElementTree as ET" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Step 2: initialize global data\n", "\n", "Change BaseDir, InputDir and OutputDir to match location of the datalocation and the OS used." 
] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [], "source": [ "BaseDir = 'C:\\\\Users\\\\tonyj\\\\my_new_Jupyter_folder\\\\test_of_xml_etree\\\\'\n", "InputDir = BaseDir+'inputfiles\\\\'\n", "OutputDir = BaseDir+'outputfiles\\\\'\n", "\n", "# key: filename, [0]=book_long, [1]=book_num, [3]=book_short\n", "bo2book = {'01-matthew': ['Matthew', '1', 'Matt'],\n", " '02-mark': ['Mark', '2', 'Mark'],\n", " '03-luke': ['Luke', '3', 'Luke'],\n", " '04-john': ['John', '4', 'John'],\n", " '05-acts': ['Acts', '5', 'Acts'],\n", " '06-romans': ['Romans', '6', 'Rom'],\n", " '07-1corinthians': ['I_Corinthians', '7', '1Cor'],\n", " '08-2corinthians': ['II_Corinthians', '8', '2Cor'],\n", " '09-galatians': ['Galatians', '9', 'Gal'],\n", " '10-ephesians': ['Ephesians', '10', 'Eph'],\n", " '11-philippians': ['Philippians', '11', 'Phil'],\n", " '12-colossians': ['Colossians', '12', 'Col'],\n", " '13-1thessalonians':['I_Thessalonians', '13', '1Thess'],\n", " '14-2thessalonians':['II_Thessalonians','14', '2Thess'],\n", " '15-1timothy': ['I_Timothy', '15', '1Tim'],\n", " '16-2timothy': ['II_Timothy', '16', '2Tim'],\n", " '17-titus': ['Titus', '17', 'Titus'],\n", " '18-philemon': ['Philemon', '18', 'Phlm'],\n", " '19-hebrews': ['Hebrews', '19', 'Heb'],\n", " '20-james': ['James', '20', 'Jas'],\n", " '21-1peter': ['I_Peter', '21', '1Pet'],\n", " '22-2peter': ['II_Peter', '22', '2Pet'],\n", " '23-1john': ['I_John', '23', '1John'],\n", " '24-2john': ['II_John', '24', '2John'],\n", " '25-3john': ['III_John', '25', '3John'], \n", " '26-jude': ['Jude', '26', 'Jude'],\n", " '27-revelation': ['Revelation', '27', 'Rev']}" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### step 3: define Function to add parent info to each node of the XML tree\n", "\n", "In order to traverse from the 'leafs' (terminating nodes) upto the root of the tree, it is required to add information to each node pointing to the parent of each node.\n", "\n", "(concept taken from https://stackoverflow.com/questions/2170610/access-elementtree-node-parent-node)" ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [], "source": [ "def addParentInfo(et):\n", " for child in et:\n", " child.attrib['parent'] = et\n", " addParentInfo(child)\n", "\n", "def getParent(et):\n", " if 'parent' in et.attrib:\n", " return et.attrib['parent']\n", " else:\n", " return None" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Step 4: read and process the XML data and store panda dataframe in pickle" ] }, { "cell_type": "code", "execution_count": 7, "metadata": { "scrolled": true, "tags": [] }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Processing Matthew at C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\inputfiles\\01-matthew.xml\n", "......................................................................................................................................................................................\n", "Found 18299 items in 389.74775409698486 seconds\n", "\n", "Processing Mark at C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\inputfiles\\02-mark.xml\n", "................................................................................................................\n", "Found 11277 items in 167.02765321731567 seconds\n", "\n", "Processing Luke at C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\inputfiles\\03-luke.xml\n", 
"..................................................................................................................................................................................................\n", "Found 19456 items in 1250.1772944927216 seconds\n", "\n", "Processing John at C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\inputfiles\\04-john.xml\n", "............................................................................................................................................................\n", "Found 15643 items in 280.0616319179535 seconds\n", "\n", "Processing Acts at C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\inputfiles\\05-acts.xml\n", ".......................................................................................................................................................................................\n", "Found 18393 items in 468.59965777397156 seconds\n", "\n", "Processing Romans at C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\inputfiles\\06-romans.xml\n", ".......................................................................\n", "Found 7100 items in 84.67976307868958 seconds\n", "\n", "Processing I_Corinthians at C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\inputfiles\\07-1corinthians.xml\n", "....................................................................\n", "Found 6820 items in 74.35686826705933 seconds\n", "\n", "Processing II_Corinthians at C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\inputfiles\\08-2corinthians.xml\n", "............................................\n", "Found 4469 items in 44.4307804107666 seconds\n", "\n", "Processing Galatians at C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\inputfiles\\09-galatians.xml\n", "......................\n", "Found 2228 items in 15.330809116363525 seconds\n", "\n", "Processing Ephesians at C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\inputfiles\\10-ephesians.xml\n", "........................\n", "Found 2419 items in 17.31328582763672 seconds\n", "\n", "Processing Philippians at C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\inputfiles\\11-philippians.xml\n", "................\n", "Found 1630 items in 8.315221309661865 seconds\n", "\n", "Processing Colossians at C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\inputfiles\\12-colossians.xml\n", "...............\n", "Found 1575 items in 12.938243389129639 seconds\n", "\n", "Processing I_Thessalonians at C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\inputfiles\\13-1thessalonians.xml\n", "..............\n", "Found 1473 items in 9.84698224067688 seconds\n", "\n", "Processing II_Thessalonians at C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\inputfiles\\14-2thessalonians.xml\n", "........\n", "Found 822 items in 5.0917510986328125 seconds\n", "\n", "Processing I_Timothy at C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\inputfiles\\15-1timothy.xml\n", "...............\n", "Found 1588 items in 13.463085651397705 seconds\n", "\n", "Processing II_Timothy at C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\inputfiles\\16-2timothy.xml\n", "............\n", "Found 1237 items in 7.479506731033325 seconds\n", "\n", "Processing Titus at C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\inputfiles\\17-titus.xml\n", "......\n", "Found 658 items in 3.523249626159668 seconds\n", "\n", "Processing Philemon at 
C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\inputfiles\\18-philemon.xml\n", "...\n", "Found 335 items in 1.5144259929656982 seconds\n", "\n", "Processing Hebrews at C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\inputfiles\\19-hebrews.xml\n", ".................................................\n", "Found 4955 items in 50.09538650512695 seconds\n", "\n", "Processing James at C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\inputfiles\\20-james.xml\n", ".................\n", "Found 1739 items in 8.783202171325684 seconds\n", "\n", "Processing I_Peter at C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\inputfiles\\21-1peter.xml\n", "................\n", "Found 1676 items in 11.179571390151978 seconds\n", "\n", "Processing II_Peter at C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\inputfiles\\22-2peter.xml\n", "..........\n", "Found 1098 items in 6.439285516738892 seconds\n", "\n", "Processing I_John at C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\inputfiles\\23-1john.xml\n", ".....................\n", "Found 2136 items in 9.333310842514038 seconds\n", "\n", "Processing II_John at C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\inputfiles\\24-2john.xml\n", "..\n", "Found 245 items in 1.206688404083252 seconds\n", "\n", "Processing III_John at C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\inputfiles\\25-3john.xml\n", "..\n", "Found 219 items in 0.8371779918670654 seconds\n", "\n", "Processing Jude at C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\inputfiles\\26-jude.xml\n", "....\n", "Found 457 items in 1.7181646823883057 seconds\n", "\n", "Processing Revelation at C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\inputfiles\\27-revelation.xml\n", "..................................................................................................\n", "Found 9832 items in 137.9426236152649 seconds\n", "\n" ] } ], "source": [ "# set some globals\n", "monad=1\n", "CollectedItems= 0\n", "\n", "# process books in order\n", "for bo, bookinfo in bo2book.items():\n", " CollectedItems=0\n", " full_df=pd.DataFrame({})\n", " book_long=bookinfo[0]\n", " booknum=bookinfo[1]\n", " book_short=bookinfo[2]\n", " InputFile = os.path.join(InputDir, f'{bo}.xml')\n", " OutputFile = os.path.join(OutputDir, f'{bo}.pkl')\n", " print(f'Processing {book_long} at {InputFile}')\n", "\n", " # send xml document to parsing process\n", " tree = ET.parse(InputFile)\n", " # Now add all the parent info to the nodes in the xtree [important!]\n", " addParentInfo(tree.getroot())\n", " start_time = time.time()\n", " \n", " # walk over all the leaves and harvest the data\n", " for elem in tree.iter():\n", " if not list(elem):\n", " # if no child elements, this is a leaf/terminal node\n", " \n", " # show progress on screen\n", " CollectedItems+=1\n", " if (CollectedItems%100==0): print (\".\",end='')\n", " \n", " #Leafref will contain list with book, chapter verse and wordnumber\n", " Leafref = re.sub(r'[!: ]',\" \", elem.attrib.get('ref')).split()\n", " \n", " #push value for monad to element tree \n", " elem.set('monad', monad)\n", " monad+=1\n", " \n", " # add some important computed data to the leaf\n", " elem.set('LeafName', elem.tag)\n", " elem.set('word', elem.text)\n", " elem.set('book_long', book_long)\n", " elem.set('booknum', int(booknum))\n", " elem.set('book_short', book_short)\n", " elem.set('chapter', int(Leafref[1]))\n", " elem.set('verse', int(Leafref[2]))\n", " \n", " # folling code will trace 
down parents upto the tree and store found attributes\n", " parentnode=getParent(elem)\n", " index=0\n", " while (parentnode):\n", " index+=1\n", " elem.set('Parent{}Name'.format(index), parentnode.tag)\n", " elem.set('Parent{}Type'.format(index), parentnode.attrib.get('Type'))\n", " elem.set('Parent{}Cat'.format(index), parentnode.attrib.get('Cat'))\n", " elem.set('Parent{}Start'.format(index), parentnode.attrib.get('Start'))\n", " elem.set('Parent{}End'.format(index), parentnode.attrib.get('End'))\n", " elem.set('Parent{}Rule'.format(index), parentnode.attrib.get('Rule'))\n", " elem.set('Parent{}Head'.format(index), parentnode.attrib.get('Head'))\n", " elem.set('Parent{}NodeId'.format(index),parentnode.attrib.get('nodeId'))\n", " elem.set('Parent{}ClType'.format(index),parentnode.attrib.get('ClType'))\n", " elem.set('Parent{}HasDet'.format(index),parentnode.attrib.get('HasDet'))\n", " currentnode=parentnode\n", " parentnode=getParent(currentnode) \n", " elem.set('parents', int(index))\n", " \n", " #this will push all elements found in the tree into a DataFrame\n", " df=pd.DataFrame(elem.attrib, index={monad})\n", " full_df=pd.concat([full_df,df])\n", " \n", " #store the resulting DataFrame per book into a pickle file for further processing\n", " df = df.convert_dtypes(convert_string=True)\n", " output = open(r\"{}\".format(OutputFile), 'wb')\n", " pickle.dump(full_df, output)\n", " output.close()\n", " print(\"\\nFound \",CollectedItems, \" items in %s seconds\\n\" % (time.time() - start_time)) \n", " " ] }, { "cell_type": "markdown", "metadata": { "toc": true }, "source": [ "## Part 2: Nestle1904 TextFabric production from pickle input \n", "##### [back to TOC](#TOC)\n", "\n", "This script creates the TextFabric files by recursive calling the TF walker function.\n", "API info: https://annotation.github.io/text-fabric/tf/convert/walker.html\n", "\n", "The pickle files created by step 1 are stored on Github location https://github.com/tonyjurg/NA1904/tree/main/resources/picklefiles" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Step 1: Load libraries and initialize some data\n", "\n", "Change BaseDir, InputDir and OutputDir to match location of the datalocation and the OS used." 
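, "\n", "Since the director function below addresses the DataFrame columns by name (via the IndexDict lookup), it can be useful to inspect one of the pickle files produced in Part 1 before running the walker. A minimal sketch (assuming the pickle files are present in `source_dir`, as defined in the next cell):\n", "\n", "```python\n", "import os\n", "import pandas as pd\n", "\n", "# load one (small) book and check its size and column names\n", "df = pd.read_pickle(os.path.join(source_dir, '18-philemon.pkl'))\n", "print(df.shape)\n", "print(list(df.columns))  # should include e.g. 'monad', 'chapter' and 'verse'\n", "```"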
] }, { "cell_type": "code", "execution_count": 29, "metadata": { "ExecuteTime": { "end_time": "2022-10-28T03:01:34.810259Z", "start_time": "2022-10-28T03:01:25.745112Z" } }, "outputs": [], "source": [ "import pandas as pd\n", "import os\n", "import re\n", "import gc\n", "from tf.fabric import Fabric\n", "from tf.convert.walker import CV\n", "from tf.parameters import VERSION\n", "from datetime import date\n", "import pickle\n", "\n", "\n", "BaseDir = 'C:\\\\Users\\\\tonyj\\\\my_new_Jupyter_folder\\\\test_of_xml_etree\\\\'\n", "source_dir = BaseDir+'outputfiles\\\\' #the input for the walker is the output of the xml to excel \n", "output_dir = BaseDir+'outputfilesTF\\\\' #the TextFabric files\n", "output_dir = 'C:\\\\text-fabric-data\\\\github\\\\tjurg\\\\NA1904\\\\tf\\\\1904'\n", "\n", "# key: filename, [0]=book_long, [1]=book_num, [3]=book_short\n", "bo2book = {'01-matthew': ['Matthew', '1', 'Matt'],\n", " '02-mark': ['Mark', '2', 'Mark'],\n", " '03-luke': ['Luke', '3', 'Luke'],\n", " '04-john': ['John', '4', 'John'],\n", " '05-acts': ['Acts', '5', 'Acts'],\n", " '06-romans': ['Romans', '6', 'Rom'],\n", " '07-1corinthians': ['I_Corinthians', '7', '1Cor'],\n", " '08-2corinthians': ['II_Corinthians', '8', '2Cor'],\n", " '09-galatians': ['Galatians', '9', 'Gal'],\n", " '10-ephesians': ['Ephesians', '10', 'Eph'],\n", " '11-philippians': ['Philippians', '11', 'Phil'],\n", " '12-colossians': ['Colossians', '12', 'Col'],\n", " '13-1thessalonians':['I_Thessalonians', '13', '1Thess'],\n", " '14-2thessalonians':['II_Thessalonians','14', '2Thess'],\n", " '15-1timothy': ['I_Timothy', '15', '1Tim'],\n", " '16-2timothy': ['II_Timothy', '16', '2Tim'],\n", " '17-titus': ['Titus', '17', 'Titus'],\n", " '18-philemon': ['Philemon', '18', 'Phlm'],\n", " '19-hebrews': ['Hebrews', '19', 'Heb'],\n", " '20-james': ['James', '20', 'Jas'],\n", " '21-1peter': ['I_Peter', '21', '1Pet'],\n", " '22-2peter': ['II_Peter', '22', '2Pet'],\n", " '23-1john': ['I_John', '23', '1John'],\n", " '24-2john': ['II_John', '24', '2John'],\n", " '25-3john': ['III_John', '25', '3John'], \n", " '26-jude': ['Jude', '26', 'Jude'],\n", " '27-revelation': ['Revelation', '27', 'Rev']}\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Step 2 Running the TF walker function\n", "\n", "API info: https://annotation.github.io/text-fabric/tf/convert/walker.html\n", "\n", "The logic of interpreting the data is included in the director function." ] }, { "cell_type": "code", "execution_count": 28, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "This is Text-Fabric 11.2.3\n", "44 features found and 0 ignored\n", " 0.00s Importing data from walking through the source ...\n", " | 0.00s Preparing metadata... \n", " | SECTION TYPES: book, chapter, verse\n", " | SECTION FEATURES: book, chapter, verse\n", " | STRUCTURE TYPES: book, chapter, verse\n", " | STRUCTURE FEATURES: book, chapter, verse\n", " | TEXT FEATURES:\n", " | | text-orig-full word\n", " | 0.00s OK\n", " | 0.00s Following director... 
\n", "\tloading C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\outputfiles\\01-matthew.pkl...\n", "\tloading C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\outputfiles\\02-mark.pkl...\n", "\tloading C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\outputfiles\\03-luke.pkl...\n", "\tloading C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\outputfiles\\04-john.pkl...\n", "\tloading C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\outputfiles\\05-acts.pkl...\n", "\tloading C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\outputfiles\\06-romans.pkl...\n", "\tloading C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\outputfiles\\07-1corinthians.pkl...\n", "\tloading C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\outputfiles\\08-2corinthians.pkl...\n", "\tloading C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\outputfiles\\09-galatians.pkl...\n", "\tloading C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\outputfiles\\10-ephesians.pkl...\n", "\tloading C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\outputfiles\\11-philippians.pkl...\n", "\tloading C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\outputfiles\\12-colossians.pkl...\n", "\tloading C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\outputfiles\\13-1thessalonians.pkl...\n", "\tloading C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\outputfiles\\14-2thessalonians.pkl...\n", "\tloading C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\outputfiles\\15-1timothy.pkl...\n", "\tloading C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\outputfiles\\16-2timothy.pkl...\n", "\tloading C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\outputfiles\\17-titus.pkl...\n", "\tloading C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\outputfiles\\18-philemon.pkl...\n", "\tloading C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\outputfiles\\19-hebrews.pkl...\n", "\tloading C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\outputfiles\\20-james.pkl...\n", "\tloading C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\outputfiles\\21-1peter.pkl...\n", "\tloading C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\outputfiles\\22-2peter.pkl...\n", "\tloading C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\outputfiles\\23-1john.pkl...\n", "\tloading C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\outputfiles\\24-2john.pkl...\n", "\tloading C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\outputfiles\\25-3john.pkl...\n", "\tloading C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\outputfiles\\26-jude.pkl...\n", "\tloading C:\\Users\\tonyj\\my_new_Jupyter_folder\\test_of_xml_etree\\outputfiles\\27-revelation.pkl...\n", " | 35s \"edge\" actions: 0\n", " | 35s \"feature\" actions: 244296\n", " | 35s \"node\" actions: 106490\n", " | 35s \"resume\" actions: 0\n", " | 35s \"slot\" actions: 137779\n", " | 35s \"terminate\" actions: 244269\n", " | 27 x \"book\" node \n", " | 260 x \"chapter\" node \n", " | 16124 x \"clause\" node \n", " | 76415 x \"phrase\" node \n", " | 5720 x \"sentence\" node \n", " | 7944 x \"verse\" node \n", " | 137779 x \"word\" node = slot type\n", " | 244269 nodes of all types\n", " | 35s OK\n", " | 0.00s checking for nodes and edges ... \n", " | 0.00s OK\n", " | 0.00s checking (section) features ... 
\n", " | 0.22s OK\n", " | 0.00s reordering nodes ...\n", " | 0.03s Sorting 27 nodes of type \"book\"\n", " | 0.04s Sorting 260 nodes of type \"chapter\"\n", " | 0.05s Sorting 16124 nodes of type \"clause\"\n", " | 0.08s Sorting 76415 nodes of type \"phrase\"\n", " | 0.17s Sorting 5720 nodes of type \"sentence\"\n", " | 0.20s Sorting 7944 nodes of type \"verse\"\n", " | 0.22s Max node = 244269\n", " | 0.22s OK\n", " | 0.00s reassigning feature values ...\n", " | | 0.00s node feature \"book\" with 27 nodes\n", " | | 0.00s node feature \"book_long\" with 137779 nodes\n", " | | 0.04s node feature \"book_short\" with 137779 nodes\n", " | | 0.09s node feature \"booknum\" with 137779 nodes\n", " | | 0.13s node feature \"case\" with 137779 nodes\n", " | | 0.18s node feature \"chapter\" with 138039 nodes\n", " | | 0.22s node feature \"clause\" with 153903 nodes\n", " | | 0.27s node feature \"clausetype\" with 16124 nodes\n", " | | 0.27s node feature \"degree\" with 137779 nodes\n", " | | 0.32s node feature \"formaltag\" with 137779 nodes\n", " | | 0.36s node feature \"functionaltag\" with 137779 nodes\n", " | | 0.40s node feature \"gloss_EN\" with 137779 nodes\n", " | | 0.43s node feature \"gn\" with 137779 nodes\n", " | | 0.48s node feature \"lemma\" with 137779 nodes\n", " | | 0.53s node feature \"lex_dom\" with 137779 nodes\n", " | | 0.57s node feature \"ln\" with 137779 nodes\n", " | | 0.61s node feature \"monad\" with 137779 nodes\n", " | | 0.65s node feature \"mood\" with 137779 nodes\n", " | | 0.69s node feature \"nodeID\" with 137779 nodes\n", " | | 0.73s node feature \"normalized\" with 137779 nodes\n", " | | 0.77s node feature \"nu\" with 137779 nodes\n", " | | 0.81s node feature \"number\" with 137779 nodes\n", " | | 0.86s node feature \"orig_order\" with 137779 nodes\n", " | | 0.89s node feature \"person\" with 137779 nodes\n", " | | 0.94s node feature \"phrase\" with 214194 nodes\n", " | | 1.00s node feature \"phrasefunction\" with 76415 nodes\n", " | | 1.03s node feature \"phrasefunction_long\" with 76415 nodes\n", " | | 1.06s node feature \"phrasetype\" with 76415 nodes\n", " | | 1.10s node feature \"reference\" with 137779 nodes\n", " | | 1.15s node feature \"sentence\" with 143499 nodes\n", " | | 1.20s node feature \"sp\" with 137779 nodes\n", " | | 1.24s node feature \"sp_full\" with 137779 nodes\n", " | | 1.29s node feature \"strongs\" with 137779 nodes\n", " | | 1.34s node feature \"subj_ref\" with 137779 nodes\n", " | | 1.38s node feature \"tense\" with 137779 nodes\n", " | | 1.43s node feature \"type\" with 137779 nodes\n", " | | 1.47s node feature \"verse\" with 145723 nodes\n", " | | 1.52s node feature \"voice\" with 137779 nodes\n", " | | 1.56s node feature \"word\" with 137779 nodes\n", " | 1.68s OK\n", " 0.00s Exporting 40 node and 1 edge and 1 config features to C:/text-fabric-data/github/tjurg/NA1904/tf/1904:\n", " 0.00s VALIDATING oslots feature\n", " 0.02s VALIDATING oslots feature\n", " 0.02s maxSlot= 137779\n", " 0.02s maxNode= 244269\n", " 0.03s OK: oslots is valid\n", " | 0.01s T book to C:/text-fabric-data/github/tjurg/NA1904/tf/1904\n", " | 0.14s T book_long to C:/text-fabric-data/github/tjurg/NA1904/tf/1904\n", " | 0.13s T book_short to C:/text-fabric-data/github/tjurg/NA1904/tf/1904\n", " | 0.13s T booknum to C:/text-fabric-data/github/tjurg/NA1904/tf/1904\n", " | 0.16s T case to C:/text-fabric-data/github/tjurg/NA1904/tf/1904\n", " | 0.15s T chapter to C:/text-fabric-data/github/tjurg/NA1904/tf/1904\n", " | 0.16s T clause to 
C:/text-fabric-data/github/tjurg/NA1904/tf/1904\n", " | 0.03s T clausetype to C:/text-fabric-data/github/tjurg/NA1904/tf/1904\n", " | 0.16s T degree to C:/text-fabric-data/github/tjurg/NA1904/tf/1904\n", " | 0.15s T formaltag to C:/text-fabric-data/github/tjurg/NA1904/tf/1904\n", " | 0.14s T functionaltag to C:/text-fabric-data/github/tjurg/NA1904/tf/1904\n", " | 0.14s T gloss_EN to C:/text-fabric-data/github/tjurg/NA1904/tf/1904\n", " | 0.15s T gn to C:/text-fabric-data/github/tjurg/NA1904/tf/1904\n", " | 0.16s T lemma to C:/text-fabric-data/github/tjurg/NA1904/tf/1904\n", " | 0.14s T lex_dom to C:/text-fabric-data/github/tjurg/NA1904/tf/1904\n", " | 0.15s T ln to C:/text-fabric-data/github/tjurg/NA1904/tf/1904\n", " | 0.13s T monad to C:/text-fabric-data/github/tjurg/NA1904/tf/1904\n", " | 0.13s T mood to C:/text-fabric-data/github/tjurg/NA1904/tf/1904\n", " | 0.14s T nodeID to C:/text-fabric-data/github/tjurg/NA1904/tf/1904\n", " | 0.17s T normalized to C:/text-fabric-data/github/tjurg/NA1904/tf/1904\n", " | 0.14s T nu to C:/text-fabric-data/github/tjurg/NA1904/tf/1904\n", " | 0.14s T number to C:/text-fabric-data/github/tjurg/NA1904/tf/1904\n", " | 0.14s T orig_order to C:/text-fabric-data/github/tjurg/NA1904/tf/1904\n", " | 0.06s T otype to C:/text-fabric-data/github/tjurg/NA1904/tf/1904\n", " | 0.14s T person to C:/text-fabric-data/github/tjurg/NA1904/tf/1904\n", " | 0.20s T phrase to C:/text-fabric-data/github/tjurg/NA1904/tf/1904\n", " | 0.08s T phrasefunction to C:/text-fabric-data/github/tjurg/NA1904/tf/1904\n", " | 0.08s T phrasefunction_long to C:/text-fabric-data/github/tjurg/NA1904/tf/1904\n", " | 0.09s T phrasetype to C:/text-fabric-data/github/tjurg/NA1904/tf/1904\n", " | 0.14s T reference to C:/text-fabric-data/github/tjurg/NA1904/tf/1904\n", " | 0.14s T sentence to C:/text-fabric-data/github/tjurg/NA1904/tf/1904\n", " | 0.14s T sp to C:/text-fabric-data/github/tjurg/NA1904/tf/1904\n", " | 0.14s T sp_full to C:/text-fabric-data/github/tjurg/NA1904/tf/1904\n", " | 0.15s T strongs to C:/text-fabric-data/github/tjurg/NA1904/tf/1904\n", " | 0.15s T subj_ref to C:/text-fabric-data/github/tjurg/NA1904/tf/1904\n", " | 0.14s T tense to C:/text-fabric-data/github/tjurg/NA1904/tf/1904\n", " | 0.14s T type to C:/text-fabric-data/github/tjurg/NA1904/tf/1904\n", " | 0.14s T verse to C:/text-fabric-data/github/tjurg/NA1904/tf/1904\n", " | 0.14s T voice to C:/text-fabric-data/github/tjurg/NA1904/tf/1904\n", " | 0.17s T word to C:/text-fabric-data/github/tjurg/NA1904/tf/1904\n", " | 0.32s T oslots to C:/text-fabric-data/github/tjurg/NA1904/tf/1904\n", " | 0.00s M otext to C:/text-fabric-data/github/tjurg/NA1904/tf/1904\n", " 5.70s Exported 40 node features and 1 edge features and 1 config features to C:/text-fabric-data/github/tjurg/NA1904/tf/1904\n", "done\n" ] } ], "source": [ "TF = Fabric(locations=output_dir, silent=False)\n", "cv = CV(TF)\n", "version = \"0.1 (Initial)\"\n", "\n", "def sanitize(input):\n", " if isinstance(input, float): return ''\n", " else: return (input)\n", " \n", "\n", "\n", "def director(cv):\n", " \n", " \n", " NoneType = type(None) # needed as tool to validate certain data\n", " prev_book = \"Matthew\" # start at first book\n", " IndexDict = {} # init an empty dictionary\n", "\n", " for bo,bookinfo in bo2book.items():\n", " \n", " '''\n", " load all data into a dataframe\n", " process books in order (bookinfo is a list!)\n", " ''' \n", " book=bookinfo[0] \n", " booknum=int(bookinfo[1])\n", " book_short=bookinfo[2]\n", " book_loc = os.path.join(source_dir, 
f'{bo}.pkl') \n", " \n", " print(f'\\tloading {book_loc}...')\n", " pkl_file = open(book_loc, 'rb')\n", " df = pickle.load(pkl_file)\n", " pkl_file.close()\n", " \n", " \n", " FoundWords=0\n", " phrasefunction='TBD'\n", " phrasefunction_long='TBD'\n", " this_clausetype=\"unknown\" #just signal a not found case\n", " this_clauserule=\"unknown\"\n", " phrasetype=\"unknown\" #just signal a not found case\n", " \n", " \n", " prev_chapter = int(1) # start at 1\n", " prev_verse = int(1) # start at 1\n", " prev_sentence = int(1) # start at 1\n", " prev_clause = int(1) # start at 1\n", " prev_phrase = int(1) # start at 1\n", " \n", " # reset/load the following initial variables (we are at the start of a new book)\n", " sentence_track = 1\n", " sentence_done = False\n", " clause_track = 1\n", " clause_done = False\n", " phrase_track = 1\n", " phrase_done = False\n", " verse_done=False\n", " chapter_done = False\n", " book_done=False\n", " \n", " wrdnum = 0 # start at 0\n", "\n", " # fill dictionary of column names for this book \n", " ItemsInRow=1\n", " for itemname in df.columns.to_list():\n", " IndexDict.update({'i_{}'.format(itemname): ItemsInRow})\n", " ItemsInRow+=1\n", " \n", " \n", " '''\n", " Walks through the texts and triggers\n", " slot and node creation events.\n", " '''\n", " \n", " # iterate through words and construct objects\n", " for row in df.itertuples():\n", " wrdnum += 1\n", " FoundWords +=1\n", " \n", " \n", " '''\n", " First get all the relevant information from the dataframe\n", " ''' \n", " \n", " # get number of parent nodes\n", " parents = row[IndexDict.get(\"i_parents\")]\n", " \n", " # get chapter and verse from the data\n", " chapter = row[IndexDict.get(\"i_chapter\")]\n", " verse = row[IndexDict.get(\"i_verse\")]\n", " \n", " \n", " # get clause type info\n", " for i in range(1,parents-1):\n", " item = IndexDict.get(\"i_Parent{}Cat\".format(i))\n", " if row[item]==\"CL\":\n", " clauseparent=i\n", " prev_clausetype=this_clausetype\n", " _rule=\"i_Parent{}Rule\".format(i)\n", " this_clausetype=row[IndexDict.get(_rule)]\n", " \n", " \n", " \n", " # get phrase type info\n", " prev_phrasetype=phrasetype\n", " for i in range(1,parents-1):\n", " item = IndexDict.get(\"i_Parent{}Cat\".format(i))\n", " if row[item]==\"np\":\n", " _item =\"i_Parent{}Rule\".format(i)\n", " phrasetype=row[IndexDict.get(_item)]\n", " break\n", " functionaltag=row[IndexDict.get('i_FunctionalTag')]\n", " \n", "\n", " \n", " '''\n", " determine if conditions are met to trigger some action \n", " action will be executed after next word\n", " ''' \n", " \n", " # detect book boundary\n", " if prev_book != book:\n", " prev_book=book\n", " book_done = True\n", " chapter_done = True\n", " verse_done=True\n", " sentence_done = True\n", " clause_done = True\n", " phrase_done = True\n", "\n", " # detect chapter boundary\n", " if prev_chapter != chapter:\n", " chapter_done = True\n", " verse_done=True\n", " sentence_done = True\n", " clause_done = True\n", " phrase_done = True\n", " \n", " # detect verse boundary\n", " if prev_verse != verse:\n", " verse_done=True\n", " \n", "\n", " \n", " # determine syntactic categories of clause parts. 
See also the description in \n", " # \"MACULA Greek Treebank for the Nestle 1904 Greek New Testament.pdf\" page 5&6\n", " # (section 2.4 Syntactic Categories at Clause Level)\n", " prev_phrasefunction=phrasefunction\n", " prev_phrasefunction_long=phrasefunction_long\n", " phrase_done = False\n", " for i in range(1,clauseparent): \n", " phrasefunction = row[IndexDict.get(\"i_Parent{}Cat\".format(i))] \n", " if phrasefunction==\"ADV\":\n", " phrasefunction_long='Adverbial function'\n", " if prev_phrasefunction!=phrasefunction: phrase_done = True\n", " break\n", " elif phrasefunction==\"IO\":\n", " phrasefunction_long='Indirect Object function'\n", " if prev_phrasefunction!=phrasefunction: phrase_done = True\n", " break\n", " elif phrasefunction==\"O\":\n", " phrasefunction_long='Object function'\n", " if prev_phrasefunction!=phrasefunction: phrase_done = True\n", " break\n", " elif phrasefunction==\"O2\":\n", " phrasefunction_long='Second Object function'\n", " if prev_phrasefunction!=phrasefunction: phrase_done = True\n", " break\n", " elif phrasefunction==\"S\":\n", " phrasefunction_long='Subject function'\n", " if prev_phrasefunction!=phrasefunction: phrase_done = True\n", " break\n", " elif phrasefunction=='P':\n", " phrasefunction_long='Predicate function'\n", " if prev_phrasefunction!=phrasefunction: phrase_done = True\n", " break\n", " elif phrasefunction==\"V\":\n", " phrasefunction_long='Verbal function'\n", " if prev_phrasefunction!=phrasefunction: phrase_done = True\n", " break\n", " elif phrasefunction==\"VC\":\n", " phrasefunction_long='Verbal Copula function'\n", " if prev_phrasefunction!=phrasefunction: phrase_done = True\n", " break\n", "\n", "\n", " # determine syntactic categories at word level. See also the description in \n", " # \"MACULA Greek Treebank for the Nestle 1904 Greek New Testament.pdf\" page 6&7\n", " # (2.2. 
Syntactic Categories at Word Level: Part of Speech Labels)\n", " sp=sanitize(row[IndexDict.get(\"i_Cat\")])\n", " if sp=='adj':\n", " sp_full='adjective'\n", " elif sp=='adj':\n", " sp_full='adjective'\n", " elif sp=='conj':\n", " sp_full='conjunction'\n", " elif sp=='det':\n", " sp_full='determiner' \n", " elif sp=='intj':\n", " sp_full='interjection' \n", " elif sp=='noun':\n", " sp_full='noun' \n", " elif sp=='num':\n", " sp_full='numeral' \n", " elif sp=='prep':\n", " sp_full='preposition' \n", " elif sp=='ptcl':\n", " sp_full='particle' \n", " elif sp=='pron':\n", " sp_full='pronoun' \n", " elif sp=='verb':\n", " sp_full='verb' \n", " \n", " \n", " # Manage first word per book\n", " if wrdnum==1: \n", " prev_phrasetype=phrasetype\n", " prev_phrasefunction=phrasefunction\n", " prev_phrasefunction_long=phrasefunction_long\n", " book_done = False\n", " chapter_done = False\n", " verse_done = False\n", " phrase_done = False\n", " clause_done = False\n", " sentence_done = False\n", " # create the first set of nodes\n", " this_book = cv.node('book')\n", " cv.feature(this_book, book=prev_book)\n", " this_chapter = cv.node('chapter')\n", " this_verse = cv.node('verse')\n", " this_sentence = cv.node('sentence')\n", " this_clause = cv.node('clause')\n", " this_phrase = cv.node('phrase')\n", " sentence_track += 1\n", " clause_track += 1\n", " phrase_track += 1\n", "\n", " \n", " \n", " '''\n", " -- handle TF events --\n", " Determine what actions need to be done if proper condition is met.\n", " ''' \n", "\n", " # act upon end of phrase (close)\n", " if phrase_done or clause_done:\n", " cv.feature(this_phrase, phrase=prev_phrase, phrasetype=prev_phrasetype, phrasefunction=prev_phrasefunction, phrasefunction_long=prev_phrasefunction_long)\n", " cv.terminate(this_phrase)\n", " \n", " # act upon end of clause (close) \n", " if clause_done:\n", " cv.feature(this_clause, clause=prev_clause, clausetype=prev_clausetype)\n", " cv.terminate(this_clause)\n", " \n", " # act upon end of sentence (close)\n", " if sentence_done:\n", " cv.feature(this_sentence, sentence=prev_sentence)\n", " cv.terminate(this_sentence)\n", " \n", " # act upon end of verse (close)\n", " if verse_done:\n", " cv.feature(this_verse, verse=prev_verse)\n", " cv.terminate(this_verse)\n", " prev_verse = verse \n", "\n", " # act upon end of chapter (close)\n", " if chapter_done:\n", " cv.feature(this_chapter, chapter=prev_chapter)\n", " cv.terminate(this_chapter)\n", " prev_chapter = chapter\n", "\n", " # act upon end of book (close and open new)\n", " if book_done:\n", " cv.terminate(this_book)\n", " this_book = cv.node('book')\n", " cv.feature(this_book, book=book) \n", " prev_book = book\n", " wrdnum = 1\n", " phrase_track = 1\n", " clause_track = 1\n", " sentence_track = 1\n", " book_done = False\n", " \n", " # start of chapter (create new)\n", " if chapter_done:\n", " this_chapter = cv.node('chapter')\n", " chapter_done = False\n", " \n", " # start of verse (create new)\n", " if verse_done:\n", " this_verse = cv.node('verse')\n", " verse_done = False \n", " \n", " # start of sentence (create new)\n", " if sentence_done:\n", " this_sentence= cv.node('sentence')\n", " prev_sentence = sentence_track\n", " sentence_track += 1\n", " sentence_done = False\n", "\n", " \n", " # start of clause (create new) \n", " if clause_done:\n", " this_clause = cv.node('clause')\n", " prev_clause = clause_track\n", " clause_track += 1\n", " clause_done = False\n", " phrase_done = True \n", "\n", " \n", " # start of phrase (create new)\n", " if 
phrase_done:\n", " this_phrase = cv.node('phrase')\n", " prev_phrase = phrase_track\n", " prev_phrasefunction=phrasefunction\n", " prev_phrasefunction_long=phrasefunction_long\n", " phrase_track += 1\n", " phrase_done = False\n", " \n", " \n", " # Detect boundaries of sentences, clauses and phrases \n", " text=row[IndexDict.get(\"i_Unicode\")]\n", " if text[-1:] == \".\" : \n", " sentence_done = True\n", " clause_done = True\n", " phrase_done = True\n", " if text[-1:] == \";\" or text[-1:] == \",\":\n", " clause_done = True\n", " phrase_done = True \n", " \n", " \n", " '''\n", " -- create word nodes --\n", " ''' \n", " \n", " # some attributes are not present inside some (small) books. The following is to prevent exceptions.\n", " degree='' \n", " if 'i_Degree' in IndexDict: \n", " degree=sanitize(row[IndexDict.get(\"i_Degree\")]) \n", " subjref=''\n", " if 'i_SubjRef' in IndexDict:\n", " subjref=sanitize(row[IndexDict.get(\"i_SubjRef\")]) \n", " \n", "\n", " # make word object\n", " this_word = cv.slot()\n", " cv.feature(this_word, \n", " word=row[IndexDict.get(\"i_Unicode\")],\n", " monad=row[IndexDict.get(\"i_monad\")],\n", " orig_order=row[IndexDict.get(\"i_monad\")],\n", " book_long=row[IndexDict.get(\"i_book_long\")],\n", " booknum=booknum,\n", " book_short=row[IndexDict.get(\"i_book_short\")],\n", " chapter=chapter,\n", " sp=sp,\n", " sp_full=sp_full,\n", " verse=verse,\n", " sentence=prev_sentence,\n", " clause=prev_clause,\n", " phrase=prev_phrase,\n", " normalized=sanitize(row[IndexDict.get(\"i_NormalizedForm\")]),\n", " formaltag=sanitize(row[IndexDict.get(\"i_FormalTag\")]),\n", " functionaltag=functionaltag,\n", " strongs=sanitize(row[IndexDict.get(\"i_StrongNumber\")]),\n", " lex_dom=sanitize(row[IndexDict.get(\"i_LexDomain\")]),\n", " ln=sanitize(row[IndexDict.get(\"i_LN\")]),\n", " gloss_EN=sanitize(row[IndexDict.get(\"i_Gloss\")]),\n", " gn=sanitize(row[IndexDict.get(\"i_Gender\")]),\n", " nu=sanitize(row[IndexDict.get(\"i_Number\")]),\n", " case=sanitize(row[IndexDict.get(\"i_Case\")]),\n", " lemma=sanitize(row[IndexDict.get(\"i_UnicodeLemma\")]),\n", " person=sanitize(row[IndexDict.get(\"i_Person\")]),\n", " mood=sanitize(row[IndexDict.get(\"i_Mood\")]),\n", " tense=sanitize(row[IndexDict.get(\"i_Tense\")]),\n", " number=sanitize(row[IndexDict.get(\"i_Number\")]),\n", " voice=sanitize(row[IndexDict.get(\"i_Voice\")]),\n", " degree=degree,\n", " type=sanitize(row[IndexDict.get(\"i_Type\")]),\n", " reference=sanitize(row[IndexDict.get(\"i_Ref\")]), # the capital R is critical here!\n", " subj_ref=subjref,\n", " nodeID=row[1] #this is a fixed position.\n", " )\n", " cv.terminate(this_word)\n", "\n", " \n", " '''\n", " -- wrap up the book --\n", " ''' \n", " \n", " # close all nodes (phrase, clause, sentence, verse, chapter and book)\n", " cv.feature(this_phrase, phrase=phrase_track, phrasetype=prev_phrasetype,phrasefunction=prev_phrasefunction,phrasefunction_long=prev_phrasefunction_long)\n", " cv.terminate(this_phrase)\n", " cv.feature(this_clause, clause=prev_clause, clausetype=prev_clausetype)\n", " cv.terminate(this_clause)\n", " cv.feature(this_sentence, sentence=prev_sentence)\n", " cv.terminate(this_sentence)\n", " cv.feature(this_verse, verse=prev_verse)\n", " cv.terminate(this_verse)\n", " cv.feature(this_chapter, chapter=prev_chapter)\n", " cv.terminate(this_chapter)\n", " cv.feature(this_book, book=prev_book)\n", " cv.terminate(this_book)\n", " \n", " # clear dataframe for this book \n", " del df\n", " # clear the index dictionary\n", " IndexDict.clear()\n", " 
gc.collect()\n", " \n", " \n", "'''\n", "-- output definitions --\n", "''' \n", " \n", "slotType = 'word' # or whatever you choose\n", "otext = { # dictionary of config data for sections and text formats\n", " 'fmt:text-orig-full':'{word}',\n", " 'sectionTypes':'book,chapter,verse',\n", " 'sectionFeatures':'book,chapter,verse',\n", " 'structureFeatures': 'book,chapter,verse',\n", " 'structureTypes': 'book,chapter,verse',\n", " }\n", "\n", "# configure metadata\n", "generic = { # dictionary of metadata meant for all features\n", " 'Name': 'Greek New Testament (NA1904)',\n", " 'Version': '1904',\n", " 'Editors': 'Nestle & Aland',\n", " 'Data source': 'MACULA Greek Linguistic Datasets, available at https://github.com/Clear-Bible/macula-greek/tree/main/Nestle1904/nodes',\n", " 'Availability': 'Creative Commons Attribution 4.0 International (CC BY 4.0)', \n", " 'Converter_author': 'Tony Jurg, Vrije Universiteit Amsterdam, Netherlands', \n", " 'Converter_execution': 'Tony Jurg, Vrije Universiteit Amsterdam, Netherlands', \n", " 'Convertor_source': 'https://github.com/tonyjurg/NA1904/tree/main/resources/converter',\n", " 'Converter_version': '{}'.format(version),\n", " 'TextFabric version': '{}'.format(VERSION) #imported from tf.parameters\n", " }\n", "\n", "intFeatures = { # set of integer valued feature names\n", " 'booknum',\n", " 'chapter',\n", " 'verse',\n", " 'sentence',\n", " 'clause',\n", " 'phrase',\n", " 'orig_order',\n", " 'monad'\n", " }\n", "\n", "featureMeta = { # per feature dicts with metadata\n", " 'book': {'description': 'Book'},\n", " 'book_long': {'description': 'Book name (fully spelled out)'},\n", " 'booknum': {'description': 'NT book number (Matthew=1, Mark=2, ..., Revelation=27)'},\n", " 'book_short': {'description': 'Book name (abbreviated)'},\n", " 'chapter': {'description': 'Chapter number inside book'},\n", " 'verse': {'description': 'Verse number inside chapter'},\n", " 'sentence': {'description': 'Sentence number (counted per chapter)'},\n", " 'clause': {'description': 'Clause number (counted per chapter)'},\n", " 'clausetype' : {'description': 'Clause type information (verb, verbless, elided, minor, etc.)'},\n", " 'phrase' : {'description': 'Phrase number (counted per chapter)'},\n", " 'phrasetype' : {'description': 'Phrase type information'},\n", " 'phrasefunction' : {'description': 'Phrase function (abbreviated)'},\n", " 'phrasefunction_long' : {'description': 'Phrase function (long description)'},\n", " 'orig_order': {'description': 'Word order within corpus'},\n", " 'monad':{'description': 'Monad'},\n", " 'word': {'description': 'Word as it appears in the text'},\n", " 'sp': {'description': 'Part of Speech (abbreviated)'},\n", " 'sp_full': {'description': 'Part of Speech (long description)'}, \n", " 'normalized': {'description': 'Surface word stripped of punctations'},\n", " 'lemma': {'description': 'Lexeme (lemma)'},\n", " 'formaltag': {'description': 'Formal tag (Sandborg-Petersen morphology)'},\n", " 'functionaltag': {'description': 'Functional tag (Sandborg-Petersen morphology)'},\n", " # see also discussion on relation between lex_dom and ln @ https://github.com/Clear-Bible/macula-greek/issues/29\n", " 'lex_dom': {'description': 'Lexical domain according to Semantic Dictionary of Biblical Greek, SDBG (not present everywhere?)'},\n", " 'ln': {'description': 'Lauw-Nida lexical classification (not present everywhere?)'},\n", " 'strongs': {'description': 'Strongs number'},\n", " 'gloss_EN': {'description': 'English gloss'},\n", " 'gn': {'description': 
'Grammatical gender (Masculine, Feminine, Neuter)'},\n", "    'nu': {'description': 'Grammatical number (Singular, Plural)'},\n", "    'case': {'description': 'Grammatical case (Nominative, Genitive, Dative, Accusative, Vocative)'},\n", "    'person': {'description': 'Grammatical person of the verb (first, second, third)'},\n", "    'mood': {'description': 'Grammatical mood of the verb (e.g. Indicative, Imperative)'},\n", "    'tense': {'description': 'Grammatical tense of the verb (e.g. Present, Aorist)'},\n", "    'number': {'description': 'Grammatical number of the verb'},\n", "    'voice': {'description': 'Grammatical voice of the verb'},\n", "    'degree': {'description': 'Degree (e.g. Comparative, Superlative)'},\n", "    'type': {'description': 'Grammatical type of noun or pronoun (e.g. Common, Personal)'},\n", "    'reference': {'description': 'Reference (to nodeID in XML source data, not yet post-processed)'},\n", "    'subj_ref': {'description': 'Subject reference (to nodeID in XML source data, not yet post-processed)'},\n", "    'nodeID': {'description': 'Node ID (as in the XML source data, not yet post-processed)'}\n", "    }\n", "\n", "'''\n", "    -- the main function --\n", "''' \n", "\n", "good = cv.walk(\n", "    director,\n", "    slotType,\n", "    otext=otext,\n", "    generic=generic,\n", "    intFeatures=intFeatures,\n", "    featureMeta=featureMeta,\n", "    warn=True,\n", "    force=False\n", ")\n", "\n", "if good:\n", "    print(\"done\")" ] }, { "cell_type": "markdown", "metadata": { "tags": [], "toc": true }, "source": [ "## Part 3: Testing the created Text-Fabric data <a class=\"anchor\" id=\"third-bullet\"></a>\n", "##### [back to TOC](#TOC)" ] }, { "cell_type": "markdown", "metadata": { "tags": [] }, "source": [ "### Step 1: load the TF data\n", "\n", "The TF data will be loaded from a local copy of the GitHub repository." ] }, { "cell_type": "code", "execution_count": 29, "metadata": {}, "outputs": [], "source": [ "%load_ext autoreload\n", "%autoreload 2" ] }, { "cell_type": "code", "execution_count": 2, "metadata": { "ExecuteTime": { "end_time": "2022-10-21T02:32:54.197994Z", "start_time": "2022-10-21T02:32:53.217806Z" } }, "outputs": [], "source": [ "# First, load the modules used for analyzing and plotting the data\n", "import sys, os, collections\n", "import pandas as pd\n", "import numpy as np\n", "import re\n", "\n", "\n", "from tf.fabric import Fabric\n", "from tf.app import use\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The following cell loads the Text-Fabric files from a local disk.\n", "Change accordingly.
" ] }, { "cell_type": "code", "execution_count": 3, "metadata": { "ExecuteTime": { "end_time": "2022-10-21T02:32:55.906200Z", "start_time": "2022-10-21T02:32:55.012231Z" } }, "outputs": [ { "data": { "text/markdown": [ "**Locating corpus resources ...**" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "app: ~/text-fabric-data/github/tjurg/NA1904/app" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "data: ~/text-fabric-data/github/tjurg/NA1904/tf/1904" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "name": "stdout", "output_type": "stream", "text": [ " | 0.24s T otype from ~/text-fabric-data/github/tjurg/NA1904/tf/1904\n", " | 1.94s T oslots from ~/text-fabric-data/github/tjurg/NA1904/tf/1904\n", " | 0.52s T verse from ~/text-fabric-data/github/tjurg/NA1904/tf/1904\n", " | 0.48s T chapter from ~/text-fabric-data/github/tjurg/NA1904/tf/1904\n", " | 0.00s T book from ~/text-fabric-data/github/tjurg/NA1904/tf/1904\n", " | 0.67s T word from ~/text-fabric-data/github/tjurg/NA1904/tf/1904\n", " | | 0.05s C __levels__ from otype, oslots, otext\n", " | | 1.48s C __order__ from otype, oslots, __levels__\n", " | | 0.07s C __rank__ from otype, __order__\n", " | | 2.40s C __levUp__ from otype, oslots, __rank__\n", " | | 1.53s C __levDown__ from otype, __levUp__, __rank__\n", " | | 0.05s C __characters__ from otext\n", " | | 0.99s C __boundary__ from otype, oslots, __rank__\n", " | | 0.04s C __sections__ from otype, oslots, otext, __levUp__, __levels__, book, chapter, verse\n", " | | 0.24s C __structure__ from otype, oslots, otext, __rank__, __levUp__, book, chapter, verse\n", " | 0.56s T book_long from ~/text-fabric-data/github/tjurg/NA1904/tf/1904\n", " | 0.56s T book_short from ~/text-fabric-data/github/tjurg/NA1904/tf/1904\n", " | 0.47s T booknum from ~/text-fabric-data/github/tjurg/NA1904/tf/1904\n", " | 0.53s T case from ~/text-fabric-data/github/tjurg/NA1904/tf/1904\n", " | 0.54s T clause from ~/text-fabric-data/github/tjurg/NA1904/tf/1904\n", " | 0.07s T clausetype from ~/text-fabric-data/github/tjurg/NA1904/tf/1904\n", " | 0.47s T degree from ~/text-fabric-data/github/tjurg/NA1904/tf/1904\n", " | 0.58s T formaltag from ~/text-fabric-data/github/tjurg/NA1904/tf/1904\n", " | 0.57s T functionaltag from ~/text-fabric-data/github/tjurg/NA1904/tf/1904\n", " | 0.62s T gloss_EN from ~/text-fabric-data/github/tjurg/NA1904/tf/1904\n", " | 0.53s T gn from ~/text-fabric-data/github/tjurg/NA1904/tf/1904\n", " | 0.61s T lemma from ~/text-fabric-data/github/tjurg/NA1904/tf/1904\n", " | 0.59s T lex_dom from ~/text-fabric-data/github/tjurg/NA1904/tf/1904\n", " | 0.60s T ln from ~/text-fabric-data/github/tjurg/NA1904/tf/1904\n", " | 0.50s T monad from ~/text-fabric-data/github/tjurg/NA1904/tf/1904\n", " | 0.51s T mood from ~/text-fabric-data/github/tjurg/NA1904/tf/1904\n", " | 0.72s T nodeID from ~/text-fabric-data/github/tjurg/NA1904/tf/1904\n", " | 0.66s T normalized from ~/text-fabric-data/github/tjurg/NA1904/tf/1904\n", " | 0.54s T nu from ~/text-fabric-data/github/tjurg/NA1904/tf/1904\n", " | 0.55s T number from ~/text-fabric-data/github/tjurg/NA1904/tf/1904\n", " | 0.50s T orig_order from ~/text-fabric-data/github/tjurg/NA1904/tf/1904\n", " | 0.50s T person from ~/text-fabric-data/github/tjurg/NA1904/tf/1904\n", " | 0.79s T phrase from ~/text-fabric-data/github/tjurg/NA1904/tf/1904\n", " | 0.31s T phrasefunction from ~/text-fabric-data/github/tjurg/NA1904/tf/1904\n", 
" | 0.32s T phrasefunction_long from ~/text-fabric-data/github/tjurg/NA1904/tf/1904\n", " | 0.32s T phrasetype from ~/text-fabric-data/github/tjurg/NA1904/tf/1904\n", " | 0.51s T sentence from ~/text-fabric-data/github/tjurg/NA1904/tf/1904\n", " | 0.57s T sp from ~/text-fabric-data/github/tjurg/NA1904/tf/1904\n", " | 0.57s T sp_full from ~/text-fabric-data/github/tjurg/NA1904/tf/1904\n", " | 0.59s T strongs from ~/text-fabric-data/github/tjurg/NA1904/tf/1904\n", " | 0.50s T subj_ref from ~/text-fabric-data/github/tjurg/NA1904/tf/1904\n", " | 0.50s T tense from ~/text-fabric-data/github/tjurg/NA1904/tf/1904\n", " | 0.52s T type from ~/text-fabric-data/github/tjurg/NA1904/tf/1904\n", " | 0.50s T voice from ~/text-fabric-data/github/tjurg/NA1904/tf/1904\n" ] }, { "data": { "text/html": [ "\n", " Text-Fabric: Text-Fabric API 11.2.3, tjurg/NA1904/app v3, Search Reference
\n", " Data: tjurg - NA1904 1904, Character table, Feature docs
\n", "
Node types\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", "\n", "\n", " \n", " \n", " \n", " \n", "\n", "\n", "\n", " \n", " \n", " \n", " \n", "\n", "\n", "\n", " \n", " \n", " \n", " \n", "\n", "\n", "\n", " \n", " \n", " \n", " \n", "\n", "\n", "\n", " \n", " \n", " \n", " \n", "\n", "\n", "\n", " \n", " \n", " \n", " \n", "\n", "\n", "\n", " \n", " \n", " \n", " \n", "\n", "
Name# of nodes# slots/node% coverage
book275102.93100
chapter260529.92100
sentence572024.09100
verse794417.34100
clause161248.54100
phrase764151.80100
word1377791.00100
\n", " Sets: no custom sets
\n", " Features:
\n", "
NA 1904\n", "
\n", "\n", "
\n", "
\n", "book\n", "
\n", "
str
\n", "\n", " Book\n", "\n", "
\n", "\n", "
\n", "
\n", "book_long\n", "
\n", "
str
\n", "\n", " Book name (fully spelled out)\n", "\n", "
\n", "\n", "
\n", "
\n", "book_short\n", "
\n", "
str
\n", "\n", " Book name (abbreviated)\n", "\n", "
\n", "\n", "
\n", "
\n", "booknum\n", "
\n", "
int
\n", "\n", " NT book number (Matthew=1, Mark=2, ..., Revelation=27)\n", "\n", "
\n", "\n", "
\n", "
\n", "case\n", "
\n", "
str
\n", "\n", " Gramatical case (Nominative, Genitive, Dative, Accusative, Vocative)\n", "\n", "
\n", "\n", "
\n", "
\n", "chapter\n", "
\n", "
int
\n", "\n", " Chapter number inside book\n", "\n", "
\n", "\n", "
\n", "
\n", "clause\n", "
\n", "
int
\n", "\n", " Clause number (counted per chapter)\n", "\n", "
\n", "\n", "
\n", "
\n", "clauserule\n", "
\n", "
str
\n", "\n", " Clause rule information\n", "\n", "
\n", "\n", "
\n", "
\n", "clausetype\n", "
\n", "
str
\n", "\n", " Clause type information (verb, verbless, elided, minor, etc.)\n", "\n", "
\n", "\n", "
\n", "
\n", "degree\n", "
\n", "
str
\n", "\n", " Degree (e.g. Comparitative, Superlative)\n", "\n", "
\n", "\n", "
\n", "
\n", "formaltag\n", "
\n", "
str
\n", "\n", " Formal tag (Sandborg-Petersen morphology)\n", "\n", "
\n", "\n", "
\n", "
\n", "functionaltag\n", "
\n", "
str
\n", "\n", " Functional tag (Sandborg-Petersen morphology)\n", "\n", "
\n", "\n", "
\n", "
\n", "gloss_EN\n", "
\n", "
str
\n", "\n", " English gloss\n", "\n", "
\n", "\n", "
\n", "
\n", "gn\n", "
\n", "
str
\n", "\n", " Gramatical gender (Masculine, Feminine, Neuter)\n", "\n", "
\n", "\n", "
\n", "
\n", "lemma\n", "
\n", "
str
\n", "\n", " Lexeme (lemma)\n", "\n", "
\n", "\n", "
\n", "
\n", "lex_dom\n", "
\n", "
str
\n", "\n", " Lexical domain according to Semantic Dictionary of Biblical Greek, SDBG (not present everywhere?)\n", "\n", "
\n", "\n", "
\n", "
\n", "ln\n", "
\n", "
str
\n", "\n", " Lauw-Nida lexical classification (not present everywhere?)\n", "\n", "
\n", "\n", "
\n", "
\n", "monad\n", "
\n", "
int
\n", "\n", " Monad\n", "\n", "
\n", "\n", "
\n", "
\n", "mood\n", "
\n", "
str
\n", "\n", " Gramatical mood of the verb (passive, etc)\n", "\n", "
\n", "\n", "
\n", "
\n", "nodeID\n", "
\n", "
str
\n", "\n", " Node ID (as in the XML source data, not yet post-processes)\n", "\n", "
\n", "\n", "
\n", "
\n", "normalized\n", "
\n", "
str
\n", "\n", " Surface word stripped of punctations\n", "\n", "
\n", "\n", "
\n", "
\n", "nu\n", "
\n", "
str
\n", "\n", " Gramatical number (Singular, Plural)\n", "\n", "
\n", "\n", "
\n", "
\n", "number\n", "
\n", "
str
\n", "\n", " Gramatical number of the verb\n", "\n", "
\n", "\n", "
\n", "
\n", "orig_order\n", "
\n", "
int
\n", "\n", " Word order within corpus\n", "\n", "
\n", "\n", "
\n", "
\n", "otype\n", "
\n", "
str
\n", "\n", " \n", "\n", "
\n", "\n", "
\n", "
\n", "person\n", "
\n", "
str
\n", "\n", " Gramatical person of the verb (first, second, third)\n", "\n", "
\n", "\n", "
\n", "
\n", "phrase\n", "
\n", "
int
\n", "\n", " Phrase number (counted per chapter)\n", "\n", "
\n", "\n", "
\n", "
\n", "phrasefunction\n", "
\n", "
str
\n", "\n", " Phrase function (abbreviated)\n", "\n", "
\n", "\n", "
\n", " \n", "
str
\n", "\n", " Phrase function (long description)\n", "\n", "
\n", "\n", "
\n", "
\n", "phrasetype\n", "
\n", "
str
\n", "\n", " Phrase type information\n", "\n", "
\n", "\n", "
\n", "
\n", "sentence\n", "
\n", "
int
\n", "\n", " Sentence number (counted per chapter)\n", "\n", "
\n", "\n", "
\n", "
\n", "sentencetype\n", "
\n", "
str
\n", "\n", " sentence type information\n", "\n", "
\n", "\n", "
\n", "
\n", "sp\n", "
\n", "
str
\n", "\n", " Part of Speech (abbreviated)\n", "\n", "
\n", "\n", "
\n", "
\n", "sp_full\n", "
\n", "
str
\n", "\n", " Part of Speech (long description)\n", "\n", "
\n", "\n", "
\n", "
\n", "strongs\n", "
\n", "
str
\n", "\n", " Strongs number\n", "\n", "
\n", "\n", "
\n", "
\n", "subj_ref\n", "
\n", "
str
\n", "\n", " Subject reference (to nodeID in XML source data, not yet post-processes)\n", "\n", "
\n", "\n", "
\n", "
\n", "tense\n", "
\n", "
str
\n", "\n", " Gramatical tense of the verb (e.g. Present, Aorist)\n", "\n", "
\n", "\n", "
\n", "
\n", "type\n", "
\n", "
str
\n", "\n", " Gramatical type of noun or pronoun (e.g. Common, Personal)\n", "\n", "
\n", "\n", "
\n", "
\n", "verse\n", "
\n", "
int
\n", "\n", " Verse number inside chapter\n", "\n", "
\n", "\n", "
\n", "
\n", "voice\n", "
\n", "
str
\n", "\n", " Gramatical voice of the verb\n", "\n", "
\n", "\n", "
\n", "
\n", "word\n", "
\n", "
str
\n", "\n", " Word as it appears in the text\n", "\n", "
\n", "\n", "
\n", "
\n", "oslots\n", "
\n", "
none
\n", "\n", " \n", "\n", "
\n", "\n", "
\n", "
\n", "\n" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "\n", "\n" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "
Text-Fabric API: names N F E L T S C TF directly usable

" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "# Loading-the-New-Testament-Text-Fabric (from local disk)\n", "NA = use (\"tjurg/NA1904\", checkData=\"clone\", hoist=globals())" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Step 2 Perform some basic display \n", "\n", "note: the implementation with regards how phrases need to be displayed (esp. with regards to conjunctions) is still to be done." ] }, { "cell_type": "code", "execution_count": 12, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ " 0.01s 25 results\n" ] }, { "data": { "text/html": [ "

verse 1" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "

[NA.show rendered display of the first 8 of the 25 matched verses (Matthew 1:1-8): each verse is shown with its clause nodes (clausetype), phrase nodes (phrasefunction_long, phrasetype) and word nodes, every word annotated with chapter, gloss_EN, gn, number, sp_full and, for verbs, mood, person, tense and voice.]
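Step 3 below dumps the structure configuration with `T.structureInfo()`. As a rough sketch (not part of the original notebook) of how that Structure API can then be used, assuming the dataset is loaded with `hoist=globals()` and the book/chapter/verse structure reported by `T.structureInfo()`:

```python
# Sketch (assumptions: dataset loaded with hoist=globals(); structure is book/chapter/verse,
# and the book feature uses names such as 'Matthew', as in the search template of Step 2).
books = T.top()                                  # all top-level structure nodes (the books)
print(len(books))                                # 27 books expected
matthew = T.nodeFromHeading((('book', 'Matthew'),))
print(T.headingFromNode(matthew))                # (('book', 'Matthew'),)
for chapter_node in T.down(matthew)[:3]:         # first three chapters of Matthew
    print(T.headingFromNode(chapter_node))
```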
" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "Search0 = '''\n", "book book=Matthew\n", " chapter chapter=1\n", " \n", " verse\n", "'''\n", "Search0 = NA.search(Search0)\n", "NA.show(Search0, start=1, end=8, condensed=True, extraFeatures={'clausetype','sp_full','phrasetype', 'gloss_EN','person','tense','voice','number','gn','mood', 'phrasefunction_long'}, withNodes=False)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Step 3 dump some structure information" ] }, { "cell_type": "code", "execution_count": 33, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "A heading is a tuple of pairs (node type, feature value)\n", "\tof node types and features that have been configured as structural elements\n", "These 3 structural elements have been configured\n", "\tnode type book with heading feature book\n", "\tnode type chapter with heading feature chapter\n", "\tnode type verse with heading feature verse\n", "You can get them as a tuple with T.headings.\n", "\n", "Structure API:\n", "\tT.structure(node=None) gives the structure below node, or everything if node is None\n", "\tT.structurePretty(node=None) prints the structure below node, or everything if node is None\n", "\tT.top() gives all top-level nodes\n", "\tT.up(node) gives the (immediate) parent node\n", "\tT.down(node) gives the (immediate) children nodes\n", "\tT.headingFromNode(node) gives the heading of a node\n", "\tT.nodeFromHeading(heading) gives the node of a heading\n", "\tT.ndFromHd complete mapping from headings to nodes\n", "\tT.hdFromNd complete mapping from nodes to headings\n", "\tT.hdMult are all headings with their nodes that occur multiple times\n", "\n", "There are 8231 structural elements in the dataset.\n", "\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "WARNING: 1 structure headings with hdMult occurrences (total 2)\n", "\tbook:I_Peter-chapter:4-verse:1 has 2 occurrences\n", "\t\t232892, 232894\n" ] } ], "source": [ "T.structureInfo()" ] }, { "cell_type": "code", "execution_count": 13, "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "232892 is an sentence which is not configured as a structure type\n" ] } ], "source": [ "T.up(232892)" ] }, { "cell_type": "code", "execution_count": 34, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{'Availability': 'Creative Commons Attribution 4.0 International (CC BY 4.0)',\n", " 'Converter_author': 'Tony Jurg, Vrije Universiteit Amsterdam, Netherlands',\n", " 'Converter_execution': 'Tony Jurg, Vrije Universiteit Amsterdam, Netherlands',\n", " 'Converter_version': '0.1 (Initial)',\n", " 'Convertor_source': 'https://github.com/tonyjurg/NA1904/tree/main/resources/converter',\n", " 'Data source': 'MACULA Greek Linguistic Datasets, available at https://github.com/Clear-Bible/macula-greek/tree/main/Nestle1904/nodes',\n", " 'Editors': 'Nestle & Aland',\n", " 'Name': 'Greek New Testament (NA1904)',\n", " 'TextFabric version': '11.2.3',\n", " 'Version': '1904',\n", " 'fmt:text-orig-full': '{word}',\n", " 'sectionFeatures': 'book,chapter,verse',\n", " 'sectionTypes': 'book,chapter,verse',\n", " 'structureFeatures': 'book,chapter,verse',\n", " 'structureTypes': 'book,chapter,verse',\n", " 'writtenBy': 'Text-Fabric',\n", " 'dateWritten': '2023-03-21T20:46:23Z'}" ] }, "execution_count": 34, "metadata": {}, "output_type": "execute_result" } ], "source": [ "TF.features['otext'].metaData" ] }, { "cell_type": "markdown", "metadata": {}, 
"source": [ "## Running text fabric browser \n", "##### [back to TOC](#TOC)" ] }, { "cell_type": "code", "execution_count": 16, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "This is Text-Fabric 11.2.3\n", "Connecting to running kernel via 19685\n", "Connecting to running webserver via 29685\n", "Opening app in browser\n", "Press to stop the TF browser\n" ] } ], "source": [ "!text-fabric app " ] }, { "cell_type": "code", "execution_count": 20, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "This is Text-Fabric 11.2.3\n", "Killing processes:\n", "kernel % 18412: 19685 app: terminated\n", "web % 2700: 29685 app: terminated\n", "text-fabric % 3280 app: terminated\n", "3 processes done.\n" ] } ], "source": [ "!text-fabric app -k" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.12" }, "toc": { "base_numbering": 1, "nav_menu": {}, "number_sections": true, "sideBar": true, "skip_h1_title": false, "title_cell": "Table of Contents", "title_sidebar": "Contents", "toc_cell": true, "toc_position": { "height": "calc(100% - 180px)", "left": "10px", "top": "150px", "width": "321.391px" }, "toc_section_display": true, "toc_window_display": true } }, "nbformat": 4, "nbformat_minor": 4 }