{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "Named Entity Recognition in French biomedical text\n", "===========================================\n", "by Andrés Soto Villaverde\n", "-------------------------------------\n", "[LinkedIn profile](https://www.linkedin.com/in/andres-soto-villaverde-36198a5/)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Named-Entity Recognition (NER) is a subtask of information extraction that seeks to locate and classify named entity mentions in unstructured text into pre-defined categories such as the person names, organizations, locations, medical codes, time expressions, quantities, monetary values, percentages, etc." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In this notebook, we will face the problem of identifying entities in French biomedical texts taken from [The QUAERO French Medical Corpus](https://quaerofrenchmed.limsi.fr/), developed as a resource for Named Entity Recognition and a gold standard set of normalized entities for French biomedical text. A selection of MEDLINE titles and EMEA documents were manually annotated, following the concepts in the [Unified Medical Language System (UMLS)](https://www.nlm.nih.gov/research/umls/). " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In this corpus, ten types of clinical entities were annotated: Anatomy, Chemical and Drugs, Devices, Disorders, Geographic Areas, Living Beings, Objects, Phenomena, Physiology, Procedures with the labels: ANAT, CHEM, DEVI, DISO, GEOG, LIVB, OBJC, PHEN, PHYS, PROC. " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For this notebook, we will only use the MEDLINE texts. [MEDLINE](https://www.nlm.nih.gov/bsd/medline.html) is the U.S. National Library of Medicine® (NLM) premier bibliographic database that contains more than 25 million references to journal articles in life sciences with a concentration on biomedicine." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's show a sample annotation for a MEDLINE text:" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Sample MEDLINE title 1**" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ " *Chirurgie de la communication interauriculaire du type \" sinus venosus \" .*" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Sample MEDLINE title 1 annotations**" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "T1 PROC 0 9 Chirurgie\n", "\n", "\n", "T2 DISO 16 46 communication interauriculaire" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This means that the text between characters 0 and 9 is assigned a label **PROC** (= procedure). The token which corresponds to this text is “**Chirurgie**”. \n", "Second annotation is for the text between characters 16 and 46 (which covers tokens “**communication interauriculaire**”) and is assigned label **DISO** (= disorder).\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Therefore, we are interested to train a classifier able to extract those text segments and identify them with the correct label. We will use a class of statistical modeling method used for structured prediction known as Conditional Random Fields (CRFs), which falls into the sequence modeling family. Whereas a discrete classifier predicts a label for a single sample without considering \"neighboring\" samples, a CRF can take context into account. 
CRFs are used to encode known relationships between observations and to construct consistent interpretations, and they are often used for labeling or parsing sequential data." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The corpus contains three subdirectories: train, test and dev. For this notebook, we will use only the first one. It contains 1670 files, including 4 files about configuration and statistics. The rest of the files are divided into two types: .TXT files, which contain the text of the sentences, and annotation files (.ann), with information about the text segments, their types, etc., as explained above." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We will divide this presentation into two sections: \n", "1. the preprocessing section, which I will explain below, and \n", "2. the training and testing section, which will be explained in another notebook, which you can read following [this link](https://nbviewer.jupyter.org/github/andressotov/Named-Entity-Recognition-in-French-biomedical-text/blob/master/Named%20Entity%20Recognition%20in%20French%20biomedical%20text%20Part%202.ipynb). \n", "\n", "Below is a list of the functions used to preprocess the data. \n" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": [ "# path to the training data set \n", "path_train = \"C:/Users/Andres/Jupyter Notebooks/Shedd test/Named Entity Recognition in French biomedical text/data/MEDLINE_train\"" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The following function reads the file named 'file' with extension 'ext' from the 'path' as a list of lines and returns this list." ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [], "source": [ "def read_file(path,file,ext): \n", " f = path+'\\\\'+file+ext\n", " with open(f, 'rt', encoding='utf-8') as myfile:\n", " data = myfile.readlines()\n", " return data" ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[\"L' OMS planifie pour l' Europe l' application du processus des soins infirmiers . 
Compte rendu de la session du groupe technique d' experts en soins infirmiers et obstétricaux du Bureau régional de l' Europe de l' OMS , Nottingham , 14 - 17 décembre 1976\\n\"]" ] }, "execution_count": 3, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# To read a file and obtain its content\n", "data = read_file(path_train,\"14448\",\".txt\")\n", "data" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "['T1\\tGEOG 24 30\\tEurope\\n',\n", " '#1\\tAnnotatorNotes T1\\tC0015176\\n',\n", " 'T2\\tPHEN 49 58\\tprocessus\\n',\n", " '#2\\tAnnotatorNotes T2\\tC1522240\\n',\n", " 'T3\\tPROC 63 79\\tsoins infirmiers\\n',\n", " '#3\\tAnnotatorNotes T3\\tC0028682\\n',\n", " 'T4\\tLIVB 69 79\\tinfirmiers\\n',\n", " '#4\\tAnnotatorNotes T4\\tC0028676\\n',\n", " 'T5\\tPROC 143 159\\tsoins infirmiers\\n',\n", " '#5\\tAnnotatorNotes T5\\tC0028682\\n',\n", " 'T6\\tLIVB 149 159\\tinfirmiers\\n',\n", " '#6\\tAnnotatorNotes T6\\tC0028676\\n',\n", " 'T7\\tGEOG 201 207\\tEurope\\n',\n", " '#7\\tAnnotatorNotes T7\\tC0015176\\n']\n" ] } ], "source": [ "import pprint\n", "\n", "data = read_file(path_train,\"14448\",\".ann\")\n", "pprint.pprint(data)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Observe that the first file read was \"14448.txt\", while the second one was \"14448.ann\"." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To transform the text contained in a .ANN file into a dictionary, we use the following function to process its lines. The function ignores the comments (i.e. lines beginning with '#'). For the other lines, it splits each line into 3 parts separated by the TAB character:\n", "1. the line id, which becomes the dictionary key for this segment\n", "2. the label part (i.e. a label and two integers)\n", "3. the text segment indicated in the label part\n", " \n", "Furthermore, it checks if the label part contains ';'. In that case, it inserts ' ' before and after the ';'. Then, the label part is split into its elements separated by ' ', and the '\n' character at the end of the text is removed. The function returns a dictionary with this information." ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [], "source": [ "def ann_text2dict(lines):\n", " d = {}\n", " for l in lines:\n", " if not l.startswith('#'):\n", " t = l.split('\\t')\n", " if ';' in t[1]:\n", " t[1] = t[1].replace(';',' ; ')\n", " d[t[0]] = {\n", " 'label':t[1].split(' '),\n", " 'text':t[2].replace('\\n','')\n", " }\n", " return d" ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "{'T1': {'label': ['GEOG', '24', '30'], 'text': 'Europe'},\n", " 'T2': {'label': ['PHEN', '49', '58'], 'text': 'processus'},\n", " 'T3': {'label': ['PROC', '63', '79'], 'text': 'soins infirmiers'},\n", " 'T4': {'label': ['LIVB', '69', '79'], 'text': 'infirmiers'},\n", " 'T5': {'label': ['PROC', '143', '159'], 'text': 'soins infirmiers'},\n", " 'T6': {'label': ['LIVB', '149', '159'], 'text': 'infirmiers'},\n", " 'T7': {'label': ['GEOG', '201', '207'], 'text': 'Europe'}}\n" ] } ], "source": [ "d = ann_text2dict(data)\n", "pprint.pprint(d)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The function *collect_files* collects the names of the files with extensions .TXT and .ANN contained in the indicated path. 
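" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Before moving on, here is a quick sanity check (an extra illustration, using the example file 14448 read above) that the character offsets stored in the .ann annotations really index into the raw title text:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Extra check: slicing the title with the offsets of each annotation\n", "# should give back exactly the annotated text (e.g. title[24:30] == 'Europe').\n", "title = read_file(path_train, '14448', '.txt')[0]\n", "for key, val in ann_text2dict(read_file(path_train, '14448', '.ann')).items():\n", "    start, end = int(val['label'][1]), int(val['label'][2])\n", "    print(key, val['label'][0], repr(title[start:end]), '==', repr(val['text']))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Back to *collect_files*. 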
The file names without the extension are stored in two independent lists, and each list is saved in a pickle file. Each pickle file name combines the set name and the extension name. We also included two functions to save and load pickle files easily. \n", "Pickle is used for serializing and de-serializing Python object structures, also called marshalling or flattening. Serialization refers to the process of converting an object in memory to a byte stream that can be stored on disk or sent over a network. Later on, this byte stream can be retrieved and de-serialized back into a Python object. " ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [], "source": [ "import os\n", "import pickle\n", "\n", "def save_pickle(data,file):\n", " pick_file = open(file+\".pkl\", \"wb\")\n", " pickle.dump(data, pick_file)\n", " pick_file.close()\n", "\n", "def load_pickle(file):\n", " pick_file = open(file+\".pkl\", \"rb\")\n", " data = pickle.load(pick_file)\n", " pick_file.close()\n", " return data\n", "\n", "def collect_files(path,set):\n", " dirs = os.listdir(path)\n", " ltxt = []\n", " lann = []\n", " for f in dirs:\n", " if f.endswith('.txt'):\n", " f = f.replace('.txt','')\n", " ltxt.append(f)\n", " elif f.endswith('.ann'):\n", " f = f.replace('.ann','')\n", " lann.append(f)\n", " else:\n", " pass\n", " save_pickle(ltxt,set+'_txt')\n", " save_pickle(lann,set+'_ann')" ] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [], "source": [ "collect_files(path_train,'train')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The function *ann_files2dict* processes all the .ANN files contained in the path, transforming the text contained in each file into a dictionary. The dictionaries are collected into a list and saved into a pickle file. " ] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [], "source": [ "def ann_files2dict(pic_file,path,set):\n", " lann = load_pickle(pic_file)\n", " lnew = []\n", " c = 0\n", " for ann in lann:\n", " data = read_file(path, ann, \".ann\")\n", " dic = ann_text2dict(data)\n", " c+=1\n", " lnew.append(dic)\n", "\n", " save_pickle(lnew,set+'_ann_dics')\n", " return lnew" ] }, { "cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "# of ann files 833\n" ] } ], "source": [ "lnew = ann_files2dict('train_ann',path_train,'train')\n", "\n", "print(\"# of ann files\",len(lnew))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "There is one situation that we didn’t mention before: it is possible for several labels to be assigned to the same tokens (annotations overlap). In this case, we will only keep one of them and discard the others. 
For example, let’s assume that we have the following text: " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "> Prévalence des marqueurs des *virus des hépatites A* , B , C à La Réunion ( Hôpital sud et prison de Saint Pierre ).\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "With the following annotations:" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "T1 CHEM 15 24 marqueurs\n", "\n", "T2 LIVB 29 34 virus\n", "\n", "T3 DISO 39 50 hépatites A\n", "\n", "T4 DISO 39 48;53 54 hépatites B\n", "\n", "T5 DISO 39 48;57 58 hépatites C\n", "\n", "T6 GEOG 61 71 La Réunion\n", "\n", "T7 LIVB 29 48;57 58 virus des hépatites C\n", "\n", "T8 LIVB 29 48;53 54 virus des hépatites B\n", "\n", "T9 LIVB 29 50 virus des hépatites A" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You can see that:\n", "* annotation T2 identifies the word 'virus' (characters 29-34) as a Living Being (LIVB),\n", "* annotation T9 identifies the segment 'virus des hépatites A' (characters 29-50) as a Living Being (LIVB),\n", "* annotation T8 identifies the segment 'virus des hépatites B' (characters 29-48 and 53-54) as a Living Being (LIVB), and\n", "* annotation T7 identifies the segment 'virus des hépatites C' (characters 29-48 and 57-58) as a Living Being (LIVB).\n", "\n", "In those cases, we will discard annotation T2, which is included in the others, and keep T7, T8 and T9." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "How do we detect those situations? Looking at the indexes of each text segment, we can see that the text segment between characters 29-34 (the word 'virus') is contained in the other three segments. \n", "\n", "Let's define a function to verify if range r1 is contained in range r2." ] }, { "cell_type": "code", "execution_count": 11, "metadata": {}, "outputs": [], "source": [ "def is_subrange(r1,r2):\n", " [a1,b1] = r1\n", " [a2,b2] = r2\n", " if int(a2) <= int(a1) and int(b1) <= int(b2): # [a1,b1] subrange of [a2,b2]\n", " return True\n", " else:\n", " return False" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Using the following function 'contained', we verify whether a certain segment, identified by its key (e.g. T2 in the previous example), is contained in another of the segments included in the annotation, returning True or False accordingly." ] }, { "cell_type": "code", "execution_count": 12, "metadata": {}, "outputs": [], "source": [ "def contained(key, ann_dic):\n", " piv = ann_dic[key]['label']\n", " for k in ann_dic.keys():\n", " if not k == key:\n", " lab_field = ann_dic[k]['label']\n", " if len(lab_field) == 3:\n", " if len(piv) == 3:\n", " if is_subrange([piv[1],piv[2]],\n", " [lab_field[1],lab_field[2]]):\n", " return True\n", " return False" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "NOTE: observe that the function only considers simple segments (i.e. continuous segments), not the complex ones with more than one range. We will say more about that later.\n", "\n", "The following function checks, for each key in an annotation dictionary, whether its range is contained in some other range. 
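" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To make the behaviour of *contained* concrete, here is a small fragment of the 'hépatites' example above, rebuilt by hand for illustration (only annotations T2 and T9):" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Hand-built fragment of the overlapping example above: T2 is nested inside T9.\n", "demo = {\n", "    'T2': {'label': ['LIVB', '29', '34'], 'text': 'virus'},\n", "    'T9': {'label': ['LIVB', '29', '50'], 'text': 'virus des hépatites A'},\n", "}\n", "print(contained('T2', demo))  # True: range 29-34 lies inside 29-50\n", "print(contained('T9', demo))  # False: 29-50 is not inside any other range" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "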
In those cases, the contained ranges are eliminated." ] }, { "cell_type": "code", "execution_count": 13, "metadata": {}, "outputs": [], "source": [ "def remove_contained(ann_dic):\n", " lrem = []\n", " for k in ann_dic.keys():\n", " if contained(k,ann_dic):\n", " lrem.append(k)\n", " for i in lrem:\n", " del ann_dic[i]\n", " return ann_dic\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's try it with the annotation dictionary d obtained previously." ] }, { "cell_type": "code", "execution_count": 14, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "{'T1': {'label': ['GEOG', '24', '30'], 'text': 'Europe'},\n", " 'T2': {'label': ['PHEN', '49', '58'], 'text': 'processus'},\n", " 'T3': {'label': ['PROC', '63', '79'], 'text': 'soins infirmiers'},\n", " 'T4': {'label': ['LIVB', '69', '79'], 'text': 'infirmiers'},\n", " 'T5': {'label': ['PROC', '143', '159'], 'text': 'soins infirmiers'},\n", " 'T6': {'label': ['LIVB', '149', '159'], 'text': 'infirmiers'},\n", " 'T7': {'label': ['GEOG', '201', '207'], 'text': 'Europe'}}\n" ] } ], "source": [ "pprint.pprint(d)" ] }, { "cell_type": "code", "execution_count": 15, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "{'T1': {'label': ['GEOG', '24', '30'], 'text': 'Europe'},\n", " 'T2': {'label': ['PHEN', '49', '58'], 'text': 'processus'},\n", " 'T3': {'label': ['PROC', '63', '79'], 'text': 'soins infirmiers'},\n", " 'T5': {'label': ['PROC', '143', '159'], 'text': 'soins infirmiers'},\n", " 'T7': {'label': ['GEOG', '201', '207'], 'text': 'Europe'}}\n" ] } ], "source": [ "d1 = remove_contained(d)\n", "pprint.pprint(d1)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can see that segments T4 and T6 were removed because T4 was contained in T3 and T6 was contained in T5." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The following functions count the number of non-continuous segments and the total number of segments, first for a single 'ann_dic' and then for all the dictionaries contained in the train set. " ] }, { "cell_type": "code", "execution_count": 16, "metadata": {}, "outputs": [], "source": [ "def cont_ncont(ann_dic):\n", " nnc = 0 # number of non continuous segments\n", " ntot = 0 # total number of segments\n", " for k in ann_dic.keys():\n", " piv = ann_dic[k]['label']\n", " ntot+=1\n", " if not len(piv) == 3:\n", " nnc+=1\n", " return [nnc,ntot]\n", "\n", "\n", "def count_non_continuous(set):\n", " ldics = load_pickle(set + '_ann_dics')\n", " cnc = 0 # counter of non continuous segments\n", " ctot = 0 # counter of total segments\n", " for i in range(len(ldics)):\n", " [nnc,ntot] = cont_ncont(ldics[i])\n", " cnc+=nnc\n", " ctot+=ntot\n", " print(\"set\",set)\n", " print(\"Number of non continuous segments\",cnc,'%',(cnc/ctot)*100)\n", " print(\"Total number of segments\",ctot)" ] }, { "cell_type": "code", "execution_count": 17, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "set train\n", "Number of non continuous segments 13 % 0.43420173680694724\n", "Total number of segments 2994\n" ] } ], "source": [ "count_non_continuous('train')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "It shows that the number of non-continuous segments is very low (about 0.43%), which is why we decided to ignore them in this version. 
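" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For reference, this is what a non-continuous annotation looks like once parsed: the ';' handling in *ann_text2dict* keeps all the offsets, so the label list ends up with more than 3 elements, which is exactly what *cont_ncont* tests for. The line below is rebuilt from the 'hépatites B' example, not taken from an actual train file:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# A discontinuous annotation: two character ranges separated by ';'.\n", "line = 'T4\\tDISO 39 48;53 54\\thépatites B\\n'\n", "pprint.pprint(ann_text2dict([line]))\n", "# -> {'T4': {'label': ['DISO', '39', '48', ';', '53', '54'], 'text': 'hépatites B'}}\n", "# len(label) != 3, so cont_ncont counts it as non-continuous." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "(The cell above is only an illustration; discontinuous spans like this one are simply skipped by the functions that follow.) 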
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The information that we need to use to train the classifier is contained in two independent structures: the .TXT files and the annotation dictionaries. Let's now combine and simplfy them.\n", "\n", "The function *simple_dic* will be used to simplify the dictionary structure." ] }, { "cell_type": "code", "execution_count": 18, "metadata": {}, "outputs": [], "source": [ "def simple_dic(ann_dic):\n", " lista = []\n", " for t in ann_dic.keys():\n", " pt = ann_dic[t]\n", " dic = {\n", " 'label' : pt['label'][0],\n", " 'range' : pt['label'][1:],\n", " 'text' : pt['text']\n", " }\n", " lista.append(dic)\n", " return lista" ] }, { "cell_type": "code", "execution_count": 19, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[{'label': 'GEOG', 'range': ['24', '30'], 'text': 'Europe'},\n", " {'label': 'PHEN', 'range': ['49', '58'], 'text': 'processus'},\n", " {'label': 'PROC', 'range': ['63', '79'], 'text': 'soins infirmiers'},\n", " {'label': 'PROC', 'range': ['143', '159'], 'text': 'soins infirmiers'},\n", " {'label': 'GEOG', 'range': ['201', '207'], 'text': 'Europe'}]\n" ] } ], "source": [ "sdic = simple_dic(d1)\n", "pprint.pprint(sdic)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The function *mix_txt_ann* combines both the 'ann' structure and the corresponding text of the sentence from the 'txt file in just one dictionary with two keys: 'txt' and 'ann_dic'. The resulting dictionaries are stored in a list and saved into a pickle file called 'train_txt_ann'" ] }, { "cell_type": "code", "execution_count": 20, "metadata": {}, "outputs": [], "source": [ "def mix_txt_ann(pic_file,path,set):\n", " ltxt = load_pickle(pic_file)\n", " lnew = []\n", " for i in range(len(ltxt)):\n", " data = read_file(path, ltxt[i], \".txt\")\n", " ldics = load_pickle(set + '_ann_dics')\n", " ann_dic = remove_contained(ldics[i])\n", " ndic ={\n", " 'txt':data,\n", " 'ann_dic': simple_dic(ann_dic)\n", " }\n", " lnew.append(ndic)\n", " save_pickle(lnew,set+'_txt_ann')" ] }, { "cell_type": "code", "execution_count": 21, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "{'ann_dic': [{'label': 'PROC', 'range': ['0', '10'], 'text': 'Traitement'},\n", " {'label': 'DISO',\n", " 'range': ['15', '36'],\n", " 'text': 'métastases hépatiques'},\n", " {'label': 'DISO',\n", " 'range': ['41', '60'],\n", " 'text': 'cancers colorectaux'}],\n", " 'txt': ['Traitement des métastases hépatiques des cancers colorectaux : '\n", " \"jusqu' où aller ?\\n\"]}\n" ] } ], "source": [ "set = 'train'\n", "mix_txt_ann('train_txt',path_train,set)\n", "lista = load_pickle(set+'_txt_ann')\n", "pprint.pprint(lista[0])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can simplify more this structure converting the list of dictionaries that corresponds to the annotation part in a list of tuples. We need also to tag all segments included in the TXT segments. We already have some of them tagged, but others don't. We will tag them as 'NONE' indicating that this tag is none of the others. \n", "\n", "The funstion *ldic2ltup* converts a list of dictionaries into a list of tuples, while the function *complete_segments* tag with 'NONE' the other non-tagged segments." 
] }, { "cell_type": "code", "execution_count": 22, "metadata": {}, "outputs": [], "source": [ "def ldic2ltup(i,listai):\n", " ann_dic = listai['ann_dic']\n", " txt = listai['txt'][0]\n", " ltup = []\n", " for dic in ann_dic:\n", " etiq = dic['label']\n", " rango = dic['range']\n", " if len(rango) < 3: # simple (continuous) segment\n", " a = int(rango[0])\n", " b = int(rango[1])\n", " ltup.append((a, b, etiq, txt[a:b]))\n", " # discontinuous segments (several ranges) are skipped\n", " ltup.sort()\n", " return ltup" ] }, { "cell_type": "code", "execution_count": 23, "metadata": {}, "outputs": [], "source": [ "def complete_segments(set):\n", " lista = load_pickle(set + '_txt_ann')\n", " newl = []\n", " for i in range(len(lista)):\n", " txt = lista[i]['txt'][0] # a list with one elem\n", " ltup = ldic2ltup(i, lista[i])\n", " ltup.sort()\n", " lt1 = []\n", " if len(ltup) == 0: # no annotations: the whole title is one 'NONE' segment\n", " tup = (0,len(txt),'NONE',txt)\n", " lt1.append(tup)\n", " newl.append({'ann_dic' : lt1, 'txt' : txt})\n", " continue\n", " if ltup[0][0]>0:\n", " a = 0\n", " b = ltup[0][0]-1\n", " tup = (a,b,'NONE',txt[a:b])\n", " lt1.insert(0,tup)\n", " for j in range(len(ltup)-1):\n", " if ltup[j][1]+1 == ltup[j+1][0]: # consecutive\n", " lt1.append(ltup[j])\n", " else: # non-consecutive\n", " a = ltup[j][1]+1\n", " lt1.append(ltup[j]) # previous one\n", " b = ltup[j+1][0]-1\n", " tup = (a,b,'NONE',txt[a:b])\n", " lt1.append(tup) # new one\n", " lt1.append(ltup[-1])\n", " if ltup[-1][1] < len(txt):\n", " a = ltup[-1][1]+1\n", " b = len(txt) -1\n", " tup = (a,b,'NONE',txt[a:b])\n", " lt1.append(tup)\n", " lt1.sort()\n", " newl.append( {\n", " 'ann_dic' : lt1,\n", " 'txt' : txt\n", " } )\n", " save_pickle(newl,set + '_txt_ann2')" ] }, { "cell_type": "code", "execution_count": 24, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "{'ann_dic': [(0, 10, 'PROC', 'Traitement'),\n", " (11, 14, 'NONE', 'des'),\n", " (15, 36, 'DISO', 'métastases hépatiques'),\n", " (37, 40, 'NONE', 'des'),\n", " (41, 60, 'DISO', 'cancers colorectaux'),\n", " (61, 80, 'NONE', \": jusqu' où aller ?\")],\n", " 'txt': \"Traitement des métastases hépatiques des cancers colorectaux : jusqu' \"\n", " 'où aller ?\\n'}\n" ] } ], "source": [ "set = 'train'\n", "complete_segments(set)\n", "new1 = load_pickle(set + '_txt_ann2')\n", "pprint.pprint(new1[0])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "With the help of the function *ldic2ltok_lab*, we will tokenize each text segment and tag each token with the corresponding tag, i.e. the segment tag." ] }, { "cell_type": "code", "execution_count": 25, "metadata": {}, "outputs": [], "source": [ "from nltk import RegexpTokenizer\n", "\n", "def ldic2ltok_lab(lsent):\n", " ls_tok_lab = []\n", " toknizer = RegexpTokenizer(r'''\\w'|\\w+|[^\\w\\s]''')\n", " for sent in lsent:\n", " ltup = sent['ann_dic']\n", " lt_tok_lab = []\n", " for (a, b, lab, txt) in ltup: # one tagged segment\n", " lts = toknizer.tokenize(txt)\n", " ltoks = [(t, lab) for t in lts]\n", " lt_tok_lab.extend(ltoks)\n", " ls_tok_lab.append(lt_tok_lab)\n", " return ls_tok_lab" ] }, { "cell_type": "code", "execution_count": 26, "metadata": {}, "outputs": [], "source": [ "ls_tok_lab = ldic2ltok_lab(new1)" ] }, { "cell_type": "code", "execution_count": 27, "metadata": {}, "outputs": [], "source": [ "save_pickle(ls_tok_lab,set + '_txt_ann3')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This concludes the preprocessing section. 
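" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As a small preview of how this preprocessed data will be consumed (a sketch only; the real feature extraction and CRF training are described in Part 2), the final pickle holds one list of (token, label) pairs per title, which can be split into token sequences and label sequences:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Sketch: reload the preprocessed data and separate tokens from labels.\n", "sents = load_pickle('train_txt_ann3')\n", "X_tokens = [[tok for tok, lab in sent] for sent in sents]\n", "y_labels = [[lab for tok, lab in sent] for sent in sents]\n", "print(len(sents), 'sentences')\n", "print(list(zip(X_tokens[0], y_labels[0]))[:6])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "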
You can access the training and testing section through the [following link](https://nbviewer.jupyter.org/github/andressotov/Named-Entity-Recognition-in-French-biomedical-text/blob/master/Named%20Entity%20Recognition%20in%20French%20biomedical%20text%20Part%202.ipynb). Thank you very much for reading, and I hope you found the explanation interesting." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.1" } }, "nbformat": 4, "nbformat_minor": 2 }