{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Reading and using numerical data from text files"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Reading in data from text files is a common task in data analysis and plotting. Python has many options for dealing with text files that contain numerical data (as well as text-based data, if that is your thing). These examples start from the assumption that you have a text file containing a list of data, possibly spanning multiple columns. We will go through different ways that you can import and manipulate this data with Python and NumPy.\n",
    "\n",
    "The sample data file we are working with is a plain text file containing radar-derived rainfall rates. Each column corresponds to a different region, and each row is the rainfall rate at 60-minute intervals.\n",
    "\n",
    "## NumPy's `np.loadtxt`\n",
    "This is the simplest method. (And probably the most limited.) It uses a NumPy function called `loadtxt` to read in text file data, as long as the data is regularly formatted (an MxN grid of values). It also assumes you have no missing values (see `np.genfromtxt` if this applies to you). Some more advanced methods are covered towards the end of this tutorial if you have more complex data to work with.\n",
    "\n",
    "The function returns a NumPy array whose dimensions match those of your data file."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "myData = np.loadtxt(\"test_rainfile_hourly.txt\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "array([[ 3.2, 0.6, 0.6, ..., 0. , 0. , 0. ],\n",
       " [ 0. , 0. , 0. , ..., 0. , 0. , 0. ],\n",
       " [ 0. , 0. , 0. , ..., 0. , 0. , 0. ],\n",
       " ..., \n",
       " [ 0.1, 1.1, 1. , ..., 12.5, 12.5, 3.6],\n",
       " [ 0. , 0. , 0. , ..., 0. , 0. , 0. ],\n",
       " [ 0. , 0. , 0. , ..., 0. , 0. , 0. ]])"
      ]
     },
     "execution_count": 3,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "myData"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As you can see, the object `myData` is a 2-dimensional array, filled with the values in the sample text data file. The `loadtxt` function automatically creates an array with all of the data from the text file parsed into it."
   ]
  },
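  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A quick, optional sanity check after loading is to look at the array's `shape` and `dtype` attributes. This is just a sketch - the numbers printed will depend on your own data file:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "# shape is (number_of_rows, number_of_columns); dtype should be a float type\n",
    "print myData.shape\n",
    "print myData.dtype"
   ]
  },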
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Extracting subsets of data\n",
    "Now we are going to extract certain subsets of these data. We can do this with something called array slicing - a built-in feature of numpy arrays. Suppose we want the first column of our data:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "array([ 3.2, 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. ,\n",
       " 0. , 0. , 0. , 0. , 1.5, 15.8, 0. , 0.3, 1. ,\n",
       " 0. , 0. , 0. , 0.1, 0. , 0. ])"
      ]
     },
     "execution_count": 4,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "myData[:,0]"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**Explanation:** NumPy stores array data in row-major order (like C), so when you want to access a particular element of a 2D array, you specify the row index first, and then the column index. (Note that Python starts counting array positions from zero!)\n",
    "\n",
    "`someArray[row_index, column_index]`\n",
    "\n",
    "The colon symbol is used to specify a range of values, but without any bounds specified it effectively means \"give me the full range of values in that dimension\". So here we are using the `:` symbol as a wildcard to say we want all elements from the row dimension, and `0` to say we want the \"zeroth\" column. This gives us a *view* of the array's first column.\n",
    "\n",
    "If we wanted the first row, we could just specify the following:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "array([ 3.2, 0.6, 0.6, 0.6, 0.6, 0.6, 0.3, 0.3, 0.3, 0.3, 0.3,\n",
       " 0.1, 0.1, 0.1, 0.1, 0.1, 0. , 4.3, 2. , 2. , 2. , 2. ,\n",
       " 2. , 2.3, 2.3, 2.3, 2.3, 2.3, 0.1, 0.1, 0.1, 0.1, 0.1,\n",
       " 0. , 4.3, 2. , 2. , 2. , 2. , 2. , 2.3, 2.3, 2.3, 2.3,\n",
       " 2.3, 0.1, 0.1, 0.1, 0.1, 0.1, 0. , 4.3, 2. , 2. , 2. ,\n",
       " 2. , 2. , 2.3, 2.3, 2.3, 2.3, 2.3, 0.1, 0.1, 0.1, 0.1,\n",
       " 0.1, 0. , 4.3, 2. , 2. , 2. , 2. , 2. , 2.3, 2.3, 2.3,\n",
       " 2.3, 2.3, 0.1, 0.1, 0.1, 0.1, 0.1, 0. , 4.3, 2. , 2. ,\n",
       " 2. , 2. , 2. , 2.3, 2.3, 2.3, 2.3, 2.3, 0.1, 0.1, 0.1,\n",
       " 0.1, 0.1, 0. , 5.5, 2. , 2. , 2. , 2. , 2. , 0.3, 0.3,\n",
       " 0.3, 0.3, 0.3, 0. , 0. , 0. , 0. , 0. , 0. , 5.5, 2. ,\n",
       " 2. , 2. , 2. , 2. , 0.3, 0.3, 0.3, 0.3, 0.3, 0. , 0. ,\n",
       " 0. , 0. , 0. , 0. , 5.5, 2. , 2. , 2. , 2. , 2. , 0.3,\n",
       " 0.3, 0.3, 0.3, 0.3, 0. , 0. , 0. , 0. , 0. , 0. , 5.5,\n",
       " 2. , 2. , 2. , 2. , 2. , 0.3, 0.3, 0.3, 0.3, 0.3, 0. ,\n",
       " 0. , 0. , 0. , 0. , 0. , 5.5, 2. , 2. , 2. , 2. , 2. ,\n",
       " 0.3, 0.3, 0.3, 0.3, 0.3, 0. , 0. , 0. , 0. , 0. , 0. ,\n",
       " 7.5, 0.6, 0.6, 0.6, 0.6, 0.6, 0. , 0. , 0. , 0. , 0. ,\n",
       " 0. , 0. , 0. , 0. , 0. , 0. , 7.5, 0.6, 0.6, 0.6, 0.6,\n",
       " 0.6, 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. ,\n",
       " 0. , 7.5, 0.6, 0.6, 0.6, 0.6, 0.6, 0. , 0. , 0. , 0. ,\n",
       " 0. , 0. , 0. , 0. , 0. , 0. , 0. , 7.5, 0.6, 0.6, 0.6,\n",
       " 0.6, 0.6, 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. ,\n",
       " 0. , 0. , 7.5, 0.6, 0.6, 0.6, 0.6, 0.6, 0. , 0. , 0. ,\n",
       " 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0.6, 0. , 0. ,\n",
       " 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. ,\n",
       " 0. , 0. , 0. , 0.6, 0. , 0. , 0. , 0. , 0. , 0. , 0. ,\n",
       " 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0.6, 0. ,\n",
       " 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. ,\n",
       " 0. , 0. , 0. , 0. , 0.6, 0. , 0. , 0. , 0. , 0. , 0. ,\n",
       " 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0.6,\n",
       " 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. ,\n",
       " 0. , 0. , 0. , 0. , 0. , 0.4, 0. , 0. , 0. , 0. , 0. ,\n",
       " 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. ,\n",
       " 0.4, 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. ,\n",
       " 0. , 0. , 0. , 0. , 0. , 0. , 0.4, 0. , 0. , 0. , 0. ,\n",
       " 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. ,\n",
       " 0. ])"
      ]
     },
     "execution_count": 5,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "myData[0,:]\n",
    "# (give me the \"zero-th\" row, and then all the columns in that row)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Similarly, if we want the second and third columns, we can do this:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "array([[ 0.6, 0.6],\n",
       " [ 0. , 0. ],\n",
       " [ 0. , 0. ],\n",
       " [ 0. , 0. ],\n",
       " [ 0. , 0. ],\n",
       " [ 0. , 0. ],\n",
       " [ 0. , 0. ],\n",
       " [ 0. , 0. ],\n",
       " [ 0. , 0. ],\n",
       " [ 0. , 0. ],\n",
       " [ 0. , 0. ],\n",
       " [ 0. , 0. ],\n",
       " [ 0. , 0. ],\n",
       " [ 1.5, 0.4],\n",
       " [ 21.1, 21.1],\n",
       " [ 1.7, 1.7],\n",
       " [ 1.1, 1.1],\n",
       " [ 1.3, 1. ],\n",
       " [ 0. , 0. ],\n",
       " [ 0. , 0. ],\n",
       " [ 0. , 0. ],\n",
       " [ 1.1, 1. ],\n",
       " [ 0. , 0. ],\n",
       " [ 0. , 0. ]])"
      ]
     },
     "execution_count": 6,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "myData[:,1:3]\n",
    "# give me all the rows, and the columns from 1 up to (but not including) 3"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The syntax of array slicing is perhaps slightly confusing at first. You may have thought (or forgotten, like I did when writing this) that when you specify a range of elements in an array slice you get the range inclusive of the upper bound, **but actually** the range runs up to, **but not including**, the upper bound. For example, `myData[:,1:2]` would only give you one column of data.\n",
    "\n",
    "#### Summary\n",
    "1. The colon syntax (`:`) specifies a range. Omitting the bounds on the range returns the whole set of data from that dimension.\n",
    "1. Range bounds are not inclusive of the upper bound.\n",
    "1. Array indexing is row-major. (Like C, but not like Fortran.)\n",
    "\n",
    "### More slicing\n",
    "You can extract a rectangular subset of data using the same notation. Here we are going to ask for four rows of data and, from those rows, two columns of data."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 24,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "array([[ 0., 0.],\n",
       " [ 0., 0.],\n",
       " [ 0., 0.],\n",
       " [ 0., 0.]])"
      ]
     },
     "execution_count": 24,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "myData[2:6,1:3]"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Lo and behold, we now have a 4x2 subset of the original `myData` array. (Don't worry that the values are all zeros - that's just part of the dataset and is correct.)"
   ]
  },
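  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "If you're ever unsure what a slice will return, checking its `shape` attribute is a cheap way to confirm the exclusive-upper-bound behaviour described above. A small sketch (the expected shapes assume the same 24-row `myData` array):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "# The stop index is not included, so 1:3 gives two columns and 1:2 gives one\n",
    "print myData[:,1:3].shape   # expect (24, 2) for this file\n",
    "print myData[:,1:2].shape   # expect (24, 1)\n",
    "print myData[2:6,1:3].shape # expect (4, 2)"
   ]
  },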
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": true
   },
   "source": [
    "## Array views vs Array copies\n",
    "By design, creating a new 'view' of a numpy array by slicing **does not** create a copy of that array. It simply creates a \"view\" of the original array (or a \"reference\" to the original array, if you like the terminology of references or (god forbid) pointers and that kind of stuff). If you modify the original array, your view will change as well. This might be what you intended, but sometimes it is not.\n",
    "\n",
    "If you want a separate array to work on, which is not just a view of the original array, use the `copy` method:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 25,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "array([[ 0., 0.],\n",
       " [ 0., 0.],\n",
       " [ 0., 0.],\n",
       " [ 0., 0.]])"
      ]
     },
     "execution_count": 25,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "b = myData[2:6,1:3].copy()\n",
    "b"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "`b` is now a separate array object, with dimensions of 4x2. It takes up its own space in memory (which is why you should think about whether you really need a copy of an entire array when dealing with massive datasets).\n",
    "\n",
    "Now look what happens if you don't use `copy`, but use a simple assignment instead:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 26,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "array([[ 0., 0.],\n",
       " [ 0., 0.],\n",
       " [ 0., 0.],\n",
       " [ 0., 0.]])"
      ]
     },
     "execution_count": 26,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "c = myData[2:6,1:3]\n",
    "c"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "`c` is a view of a subset of `myData`. It is not a separate array object. Let's modify the original array by adding 10 to all the values:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 27,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "myData += 10"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 28,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "array([[ 10., 10.],\n",
       " [ 10., 10.],\n",
       " [ 10., 10.],\n",
       " [ 10., 10.]])"
      ]
     },
     "execution_count": 28,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "c"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "`c` will reflect any changes made to the original array, `myData`.\n",
    "\n",
    "In this case, the variable `c` only refers to a view of the original array `myData`. Therefore, anything we do to `myData` will affect `c`, since `c` is only a reference to the original array.\n",
    "\n",
    "In contrast, when we created `b`, we explicitly requested the data from `myData` to be copied to a new array. Anything we subsequently do to `myData` will not affect the array `b`. Be sure to make copies of arrays when you need to."
   ]
  },
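  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "If you're ever unsure whether two arrays share the same underlying data, NumPy can tell you. Here is a small sketch using `np.shares_memory` (added in newer NumPy releases; on older versions, `np.may_share_memory` performs a similar, slightly looser check):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "# c was made by plain slicing, so it shares memory with myData\n",
    "# b was made with .copy(), so it owns its own data\n",
    "print np.shares_memory(myData, c)  # expect True\n",
    "print np.shares_memory(myData, b)  # expect False"
   ]
  },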
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Some other methods\n",
    "**np.loadtxt** is not the only way of loading data. In fact, for large datasets, it is often very slow compared to other methods. I won't go into full detail here, but there are other methods you can use, such as:\n",
    "\n",
    "### `csv.reader`\n",
    "Python has a native CSV (comma-separated values) file reader. Note that the file does not actually have to be \"comma separated\", as you can specify what the delimiter is. We can use it on our original (space-separated) data file, for example:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 36,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "array([[ 3.2, 0.6, 0.6, ..., 0. , 0. , 0. ],\n",
       " [ 0. , 0. , 0. , ..., 0. , 0. , 0. ],\n",
       " [ 0. , 0. , 0. , ..., 0. , 0. , 0. ],\n",
       " ..., \n",
       " [ 0.1, 1.1, 1. , ..., 12.5, 12.5, 3.6],\n",
       " [ 0. , 0. , 0. , ..., 0. , 0. , 0. ],\n",
       " [ 0. , 0. , 0. , ..., 0. , 0. , 0. ]])"
      ]
     },
     "execution_count": 36,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "import csv\n",
    "\n",
    "data = []  # create an empty list (note: not a numpy array, yet!)\n",
    "\n",
    "with open(\"test_rainfile_hourly.txt\", 'rb') as csvfile:\n",
    "    csvreader = csv.reader(csvfile, delimiter=\" \")\n",
    "    for row in csvreader:\n",
    "        data.append(row)\n",
    "\n",
    "dataArray = np.array(data, dtype=float)  # convert the standard python list into a swish numpy array\n",
    "dataArray"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "There's an important detail in the last couple of lines above. We specified a `dtype` - the type of data that the text file contains. In our case, the data are floating-point numbers (`float`) - by default the csv reader returns strings, which is probably not what you want for scientific data processing. We don't normally specify data types explicitly in Python, but here's an example of when it is necessary."
   ]
  },
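  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To see why the `dtype` matters, you can peek at the data before converting it. A quick sketch (exactly what gets printed depends on your Python and NumPy versions, but the point is that everything starts life as a string):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "# csv.reader gives back each row as a list of strings...\n",
    "print type(data[0][0])\n",
    "# ...and without dtype=float, np.array() would build an array of strings,\n",
    "# which you can't do arithmetic on\n",
    "print np.array(data).dtype"
   ]
  },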
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### `np.genfromtxt`\n",
    "This function is similar to `np.loadtxt` but offers a few more options, such as how to deal with missing values. For example, if our data file contained NaNs, and we wanted to replace them with a number, we could do:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 41,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "dataArray2 = np.genfromtxt(\"test_rainfile_hourly.txt\", missing_values=\"NaN\", filling_values=-1)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Some other options for `np.loadtxt` and `np.genfromtxt`\n",
    "There are some other nifty options for `loadtxt` and `genfromtxt` that you can use to easily pick out specific columns from a text file, or skip header rows, for example.\n",
    "\n",
    "#### usecols\n",
    "Pick out specific columns (they do not have to be adjacent). Note that the 'first' column is column 0."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 45,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "array([[ 3.2, 0.6, 0.6],\n",
       " [ 0. , 0. , 0. ],\n",
       " [ 0. , 0. , 0. ],\n",
       " [ 0. , 0. , 0. ],\n",
       " [ 0. , 0. , 0. ],\n",
       " [ 0. , 0. , 0. ],\n",
       " [ 0. , 0. , 0. ],\n",
       " [ 0. , 0. , 0. ],\n",
       " [ 0. , 0. , 0. ],\n",
       " [ 0. , 0. , 0. ],\n",
       " [ 0. , 0. , 0. ],\n",
       " [ 0. , 0. , 0. ],\n",
       " [ 0. , 0. , 0. ],\n",
       " [ 1.5, 0.4, 0.4],\n",
       " [ 15.8, 21.1, 21.1],\n",
       " [ 0. , 1.7, 1.7],\n",
       " [ 0.3, 1.1, 1.1],\n",
       " [ 1. , 1. , 1. ],\n",
       " [ 0. , 0. , 0. ],\n",
       " [ 0. , 0. , 0. ],\n",
       " [ 0. , 0. , 0. ],\n",
       " [ 0.1, 1. , 1. ],\n",
       " [ 0. , 0. , 0. ],\n",
       " [ 0. , 0. , 0. ]])"
      ]
     },
     "execution_count": 45,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "dataArray3 = np.loadtxt(\"test_rainfile_hourly.txt\", usecols=(0,4,5))\n",
    "dataArray3"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### skiprows\n",
    "Useful if you have header information in the text file, or you just want to ignore a given number of rows at the start of the data."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 46,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "array([[ 0. , 0. , 0. ],\n",
       " [ 0. , 0. , 0. ],\n",
       " [ 0. , 0. , 0. ],\n",
       " [ 0. , 0. , 0. ],\n",
       " [ 0. , 0. , 0. ],\n",
       " [ 0. , 0. , 0. ],\n",
       " [ 0. , 0. , 0. ],\n",
       " [ 0. , 0. , 0. ],\n",
       " [ 1.5, 0.4, 0.4],\n",
       " [ 15.8, 21.1, 21.1],\n",
       " [ 0. , 1.7, 1.7],\n",
       " [ 0.3, 1.1, 1.1],\n",
       " [ 1. , 1. , 1. ],\n",
       " [ 0. , 0. , 0. ],\n",
       " [ 0. , 0. , 0. ],\n",
       " [ 0. , 0. , 0. ],\n",
       " [ 0.1, 1. , 1. ],\n",
       " [ 0. , 0. , 0. ],\n",
       " [ 0. , 0. , 0. ]])"
      ]
     },
     "execution_count": 46,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "dataArray4 = np.loadtxt(\"test_rainfile_hourly.txt\", usecols=(0,4,5), skiprows=5)\n",
    "dataArray4"
   ]
  },
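  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### delimiter\n",
    "Both functions also take a `delimiter` argument, for files that are not whitespace-separated. A sketch only - the filename below is made up, so the line is left commented out (substitute a comma-separated file of your own):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "# Hypothetical example: reading a comma-separated version of the data.\n",
    "# \"my_rainfile.csv\" does not exist here, so the line is commented out.\n",
    "# commaData = np.loadtxt(\"my_rainfile.csv\", delimiter=\",\")"
   ]
  },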
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 'Unpacking' columns into separate variables\n",
    "This is a handy way of extracting columns into separate arrays. Instead of getting one massive array for the whole dataset, we get back an individual array for each column, by specifying **unpack=True**:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 54,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[ 3.2 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.\n",
      " 0. 1.5 15.8 0. 0.3 1. 0. 0. 0. 0.1 0. 0. ]\n",
      "[ 0.6 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.\n",
      " 0. 0.4 21.1 1.7 1.1 1. 0. 0. 0. 1. 0. 0. ]\n",
      "[ 0.6 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.\n",
      " 0. 0.4 21.1 1.7 1.1 1. 0. 0. 0. 1. 0. 0. ]\n"
     ]
    }
   ],
   "source": [
    "x, y, z = np.loadtxt(\"test_rainfile_hourly.txt\", usecols=(0,4,5), unpack=True)\n",
    "\n",
    "print x\n",
    "print y\n",
    "print z"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### pandas.read_csv\n",
    "Pandas is a powerful Python module specifically aimed at data processing. It is separate from numpy, and probably best suited to situations where you want to do lots of loading and modifying of data tables. It has a level of functionality approaching database software. I won't go into it here, but if you are interested in using it, there is a function called **read_csv** which has many, many options for reading text files into pandas data frames. Pandas data frames can be easily converted into standard numpy arrays (a rough sketch of this is shown further down, just before the Resources section).\n",
    "\n",
    "http://pandas.pydata.org/pandas-docs/stable/io.html#io-read-csv-table\n",
    "\n",
    "### Doing it the hard way...\n",
    "\n",
    "This is how I used to do it before I realised numpy had a `loadtxt` function (\\*facepalm\\*). This method could come in handy if you really need some fine-grained control over how the file is processed, or you have irregularly shaped data - for example, rows or columns of varying lengths. Here it is for completeness:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 55,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "f = open(\"test_rainfile_hourly.txt\", 'r')  # open the file\n",
    "lines = f.readlines()[4:]  # read in the data with readlines (the [4:] skips the first 4 lines, e.g. a header)\n",
    "no_lines = len(lines)  # get the number of lines (= number of data points)\n",
    "\n",
    "# data variables\n",
    "rainfall_zone1 = np.zeros(no_lines, dtype=np.float)\n",
    "rainfall_zone2 = np.zeros(no_lines, dtype=np.float)\n",
    "rainfall_zone3 = np.zeros(no_lines, dtype=np.float)\n",
    "rainfall_zone4 = np.zeros(no_lines, dtype=np.float)\n",
    "\n",
    "for i in range(0, no_lines):\n",
    "    line = lines[i].strip().split(\" \")\n",
    "    #print line\n",
    "    rainfall_zone1[i] = float(line[0])\n",
    "    rainfall_zone2[i] = float(line[1])\n",
    "    rainfall_zone3[i] = float(line[2])\n",
    "    rainfall_zone4[i] = float(line[3])\n",
    "\n",
    "f.close()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Yes, it's clunky, but it gives you full control over how each line is parsed - you could, for example, adapt the loop to cope with rows of varying length - and you still get a correctly sized array for each column. Of course, I could just have done:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 57,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "zone1, zone2, zone3, zone4 = np.loadtxt(\"test_rainfile_hourly.txt\", usecols=(0,1,2,3), unpack=True)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "In **one line**. But you live and learn. (This is the result of coming to Python after learning C++, which would probably require twice as many lines of code, and then you'd get a memory leak etc...)"
   ]
  },
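  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a footnote to the `pandas.read_csv` mention above, here is a rough sketch of what that route might look like. The `sep` and `header` arguments below are assumptions about our space-separated, header-less file - check the pandas documentation for the full set of options:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "import pandas as pd\n",
    "\n",
    "# read the space-separated file into a DataFrame (header=None because the\n",
    "# file has no column names), then convert it to a plain numpy array\n",
    "df = pd.read_csv(\"test_rainfile_hourly.txt\", sep=\" \", header=None)\n",
    "pandasArray = df.values"
   ]
  },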
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Resources\n",
    "\n",
    "The np.loadtxt manual is here:\n",
    "http://docs.scipy.org/doc/numpy-1.10.0/reference/generated/numpy.loadtxt.html\n",
    "\n",
    "A nice np.genfromtxt tutorial is here:\n",
    "http://students.mimuw.edu.pl/~pbechler/numpy_doc/user/basics.io.genfromtxt.html\n",
    "and the manual is here:\n",
    "http://docs.scipy.org/doc/numpy-1.10.1/user/basics.io.genfromtxt.html"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Task\n",
    "Work with your own data. Try loading in a text file that contains data you use, with an appropriate method from the ones discussed above."
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 2",
   "language": "python",
   "name": "python2"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 2
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython2",
   "version": "2.7.10"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 0
}