{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Guide to using Python with Jupyter" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In this file you can find some of the most important things about how Python works and different functions that might be helpful with getting started. Also including some examples of how they work.\n", "\n", "About using this document: you should run the cell found in **section 2** first (every time you use this document) and then move to the section/example you wanted to check out. For this reason some functions are introduced multiple times throughout the document, so don't let it confuse you.\n", "\n", "If you can't remember how to do something in notebook, just press **H** while not in edit mode and you can see a list of shortcuts you can use in Jupyter." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "1. [At first](#first)\n", "2. [Modules](#modules)\n", "3. [Data types and modifying data](#data)\n", "4. [Basic calculus and syntax](#basics)\n", "5. [Creating random data](#random)\n", "6. [Plotting diagrams](#plot)\n", "7. [Animations](#anim)\n", "8. [Maps and heatmaps](#maps)\n", "9. [Problems? Check this](#prblm)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "### 1. At first" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In programming you can save different values in to **variables** which you can use or change later. Different kinds of variable are integers (int), floating-point numbers (float) or strings (str) for example. In Python creating variables is easy, since you don't have to initialize them.\n", "\n", "Sometimes bits of memory can be 'left' in the **kernel** running the program, which makes the program not run correctly. It happens regularly, and is nothing to worry about. Just press ***Kernel*** from the top bar menu and choose ***Restart & Clear output***. This resets the kernel memory and clears all output, after which you can start over again. This doesn't affect any changes in the text or code, so it's not for fixing those errors." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "### 2. Modules" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Python is widely used in scientific community for computing, modifying and analyzing data, and for these purposes Python is greatly optimized. Part of Python is to use different kind of *modules*, which are files containing definitions (functions) and statements. These modules are imported using **import**-command, and even if at first it seems some kind of magic as to which modules to import, it gets easier with time.\n", "\n", "If you check the materials used in the Open Data -project, you'll probably notice that each Github-folder contains a text file 'requirements.txt'. These contains the module names used in the notebooks so for example [MyBinder](www.mybinder.org) can build a working platform for Jupyter. 
The most important modules we're going to use are:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Most essential modules:\n", "\n", "import pandas as pd # includes tools used in reading data\n", "import numpy as np # includes tools for numerical computation\n", "import matplotlib.pyplot as plt # includes tools used in plotting data\n", "\n", "# Other useful modules:\n", "\n", "import random as rand # includes functions for generating random data\n", "from scipy import stats # includes tools for statistical analysis\n", "from scipy.stats import norm # tools for normal distribution\n", "import matplotlib.mlab as mlab # more plotting tools for more complicated diagrams\n", "\n", "# Not a module, but an essential command that makes plots appear directly in the notebook\n", "%matplotlib inline" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Remember to run the cell above if you want the examples in this notebook to work. \n", "You could write the above ```import ... as ...``` statements without the **as** part, which only gives a module a shorter name, but those short names make your future much easier. If you want to read more about the modules used, select 'Help' from the top bar and you can find some links to documentation.\n", "\n", "Of course there are a lot of other modules as well, which you can easily google if need be. Thanks to Python being used so widely, you can find thousands of examples online. If you have some problems/questions, [StackExchange](https://stackexchange.com/) and [StackOverflow](https://stackoverflow.com/) are good places to start. Chances are that someone has run into the exact same problem you are facing before." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "### 3. Data types and modifying data" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Summary of data manipulation:**\n", "\n", "Reading .csv $\\rightarrow$ \n", "``` Python \n", "name = pd.read_csv('path', varargin)\n", "``` \n", "Reading tables $\\rightarrow$ \n", "``` Python\n", "pd.read_table('path', varargin)\n", "``` \n", "Checking what's in the file $\\rightarrow$ \n", "``` Python\n", "name.head(n) \n", "``` \n", "Length $\\rightarrow$ \n", "``` Python\n", "len(name) \n", "``` \n", "Shape $\\rightarrow$ \n", "``` Python\n", "name.shape \n", "``` \n", "Columns $\\rightarrow$ \n", "``` Python\n", "name.column \n", "name['column'] \n", "``` \n", "Choosing data within limits $\\rightarrow$ \n", "``` Python\n", "name[(name.column >= lower_limit) & (name.column <= upper_limit)] \n", "``` \n", "Searching for text $\\rightarrow$ \n", "``` Python\n", "name['column'].str.contains('part_of_text') \n", "``` \n", "Add columns $\\rightarrow$ \n", "``` Python\n", "name = name.assign(column = info) \n", "``` \n", "Remove columns $\\rightarrow$ \n", "``` Python\n", "name.drop(['column1','column2'...], axis = 1)\n", "``` \n", "\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Open data from the CMS experiment comes in .csv (comma-separated values) files. For a computer, this kind of data is easy to read using the *pandas* module. Saving the read file into a variable makes that variable a *dataframe*. If you're interested in dataframes and what you can do with them, you can check [here](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.html) for more information.\n", "\n", "The easiest ways to read data are **pandas.read_csv** and **pandas.read_table**. 
If the data is nice (as in separated by commas, the headings are sensible, the character encoding isn't too exotic...), you don't usually need any extra steps." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Let's load a dataset about particles and save it into a variable:\n", "\n", "doublemu = pd.read_csv('http://opendata.cern.ch/record/545/files/Dimuon_DoubleMu.csv')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This kind of form ('...//opendata.cern...') fetches the data directly from the website. It could also be of the form **'Dimuon_doubleMu.csv'**, if the data you want to read is in the same folder as the notebook. Or if the file is in another folder, it could be of the form **'../folder/data.csv'**.\n", "\n", "If the data is not in .csv, you can read it using the more general **pandas.read_table** command, which can read multiple types of files and not just csv. The most common problem is data being separated by something other than a comma, such as ; or -. In this case you can put an extra argument in the command: **pandas.read_table('path', sep='x')**, with x being the separator. Another common problem is that the row numbering starts at something other than zero, or that the column headings are somewhere other than the first row. In this case you might want to add an extra argument **header = n**, where n is the number of the row the headers are on. NOTE! In computing you always start counting at zero, unless otherwise mentioned.\n", "\n", "More information about possible arguments [here](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html). \n", "\n", "Below you can see an example of data which doesn't have a header line. The file contains data about the Sun since 1992. If you want to see what each column holds, you can find the meanings [here](http://sidc.oma.be/silso/infosndhem)." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Load a set of the Sun's data and name it the way we want\n", "\n", "sunDat = pd.read_table('http://sidc.oma.be/silso/INFO/sndhemcsv.php', sep = ';', encoding = \"ISO-8859-1\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For clarity, let's see what the data looks like. For this, the command **name.head(n)** is handy: it shows the first n rows of the chosen data. By default n = 5, in case you don't give it any value." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "doublemu.head()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sunDat.head()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Above you can see that the **sunDat** variable's first real data row has been used as the header, which is nasty because 1) the headings are confusing and 2) we are missing one line of data. Let's solve this by adding a header argument to read_table like this: **read_table('path', header = None)**, which tells the program that a header doesn't exist. The command then automatically uses running numbers as the column headings."
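, "\n\nIf you just want the general pattern in one place, here is a purely hypothetical sketch combining the arguments mentioned above (the file name and column names are made up):\n", "```Python\n", "# hypothetical file: semicolon-separated, no header row, columns named by hand\n", "df = pd.read_table('some_file.csv', sep = ';', header = None, names = ['time', 'value'])\n", "df.head()\n", "```"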
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sunDat = pd.read_table('http://sidc.oma.be/silso/INFO/sndhemcsv.php', sep = ';', encoding = \"ISO-8859-1\", header = -1)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sunDat.head()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Which isn't very informative for us... We can of course rename them to make it easier (for a human) to read by using **names = ['name1', 'name2', 'name3'..]** command." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sunDat = pd.read_table('http://sidc.oma.be/silso/INFO/sndhemcsv.php', sep=';', encoding = \"ISO-8859-1\", header = None, \n", "names = ['Year','Month','Day','Fraction','$P_{tot}$','$P_{nrth}$','$P_{sth}$','$\\sigma_{tot}$','$\\sigma_{nrth}$',\n", " '$\\sigma_{sth}$','$N_{tot}$','$N_{nrth}$','$N_{sth}$','Prov'])" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sunDat.head()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Apart from **name.head()**-command there are couple more commands which are useful when checking out the shape of data. **len(name)** tells you the amount of rows (length of the variable) and **name.shape** tells both amount of rows and columns." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Usually the code sells show only the last line of the code in output. With print()-command you can get more of the values\n", "# visible. You can try what happens if you remove the print().\n", "\n", "print (len(sunDat))\n", "print (sunDat.shape)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "When the data is saved in a variable, we can start to modify it the way we want. More often than not, we are interested in a single variables in the data. In this case you want to be able to take single columns of the data, or choose just the rows where the values are within certain limits.\n", "\n", "You can choose a column by writing **data_name.column** or **data_name['column']**. The latter is useful if the column name starts with a number (in this scase the computer probably thinks the number as an ordinal number). If you want to make your life a bit easier and don't care about other columns or rows that you've chosen, you might want to save them to a new variable (with _very_ large datasets this might cause memory related problems, but you probably don't have to worry about it). You can do this by just writing **new_variable_name = data_name = data_name.column** and use the new variable instead. Using the new variable also helps in case of different errors, as smaller amount data is easier to handle and possible mistakes are easier to notice (for example if the program starts to draw a histogram of multiple variables and looks like it's stuck in an infinite loop)." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Let's save the data of invariant masses (column named M in the data) in to a new variable \n", "\n", "invMass = doublemu.M" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "invMass.head()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "An easy way to choose certain rows is to create a new variable, in which you save the values from the original data that fulfill certain conditions. 
In this case choosing values between limits would look like this:\n", "```Python\n", "new_var = name[(name.column >= lower_limit) & (name.column <= upper_limit)]\n", "```\n", "Of course the condition might be any other logical expression, such as equality with a certain number (value == number) or a piece of text in non-numerical data." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# As an example, let's isolate the rows from the original data where both of the particles' energies are at least 30 GeV\n", "\n", "highEn = doublemu[(doublemu.E1 >= 30) & (doublemu.E2 >= 30)]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "highEn.head()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print ('Number of particles with energy >= 30 GeV: ', len(highEn))\n", "print ('Total number of particles: ',len(doublemu))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If you want to search for text, you can try the **name.loc[ ]** function:\n", "```Python\n", "new_var = old_var.loc[old_var['column'] == 'wanted_thing']\n", "```\n", "\n", "In this case you of course have to know exactly what you're looking for. If you want to choose rows more blindly (as in you only know what the column _might_ contain), you can try the **str.contains** function (str.contains() actually returns a boolean value depending on whether the column contains the text or not, which is why we then choose the rows from the data for which the statement is true):\n", "\n", "```Python\n", "new_var = old_var[old_var['column'].str.contains('contained_text')]\n", "```\n", "This creates a new variable that contains all the rows in which the 'column' contains 'contained_text' somewhere in its value. By default str.contains() is case-sensitive, but this can be turned off:\n", "```Python\n", "new_var = old_var[old_var['column'].str.contains('contained_text', case = False)]\n", "```\n", "\n", "Negation also works, as in the example below where we delete all Ltd companies (Oy or Oyj in Finnish, Ab in Swedish) from a dataset containing all Finnish companies producing alcoholic beverages. (This may also delete companies having 'Oy' or 'Ab' somewhere else in the name, so you should be careful with this method.)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "alcBev = pd.read_csv('http://avoindata.valvira.fi/alkoholi/alkoholilupa_valmistus.csv', \n", " sep = ';', encoding = \"ISO-8859-1\", na_filter = False)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Sorry about the Finnish headings!\n", "\n", "alcBev.head()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "producers = alcBev[alcBev['Nimi'].str.contains('Oy|Ab') == False]\n", "producers.head()\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If you want to add or remove columns from the data, you can use **name = name.assign(column = information)** to add columns and \n", "**name.drop(['column1', 'column2',...], axis = 1)** to drop columns. In drop, the **axis** argument matters: axis = 1 makes the command target columns instead of rows."
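, "\n\nAs a side note, and assuming a reasonably recent pandas version, drop also accepts a columns argument, which does the same thing as axis = 1 but can be easier to read:\n", "```Python\n", "# equivalent to name.drop(['column1', 'column2'], axis = 1); the column names are placeholders\n", "name = name.drop(columns = ['column1', 'column2'])\n", "```"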
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Removing a column with .drop.\n", "# Sometimes .drop doesn't work correctly (we don't know why, gotta look in to it), so let's just save the result to the old \n", "# variable to avoid it\n", "\n", "alcBev = alcBev.drop(['Nimi'], axis = 1)\n", "alcBev.head()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Inserting a column using assign\n", "# Let's insert a column R with some numbers in it. Remember to check that the length of the column is correct\n", "\n", "numb = np.linspace(0, 100, len(alcBev))\n", " \n", "alcBev = alcBev.assign(R = numb)\n", "alcBev.head()\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "### 4. Basic calculus and syntax" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Summary of basic calculus:**\n", "\n", "Absolute values $\\rightarrow$ \n", "```Python\n", "abs(x) \n", "``` \n", "Square root $\\rightarrow$\n", "```Python\n", "sqrt(x) \n", "``` \n", "Addition $\\rightarrow$ \n", "```Python\n", "x + y \n", "``` \n", "Substraction $\\rightarrow$\n", "```Python\n", "x - y \n", "``` \n", "Division $\\rightarrow$\n", "```Python\n", "x/y \n", "``` \n", "Multiplying $\\rightarrow$\n", "```Python\n", "x*y \n", "``` \n", "Powers $\\rightarrow$ \n", "```Python\n", "x**y \n", "``` \n", "Maximum value $\\rightarrow$\n", "```Python\n", "max(x) \n", "``` \n", "Minimum value $\\rightarrow$ \n", "```Python\n", "min(x) \n", "``` \n", "Creating own function $\\rightarrow$ \n", "```Python\n", "def name(input):\n", " do something to input\n", " return \n", " \n", "``` \n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The basic operations are very basic, you write them as you would in any computer-based calculator. If you want the program to print out more than one thing, remember to use **print()**. You can also combine text and numbers. Function **repr(numbers)** might come in handy as it transforms the number to a more printable datatype. In [this](https://docs.python.org/3/library/functions.html) you can find all the functions you can use in Python without importing any modules. In [here](https://docs.python.org/3/library/stdtypes.html) you can find pretty much everything that's built-in in the Python interpreter, in case you're interested." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# You can change what kind of calculation (result) is saved in to the 'num'-variable\n", "\n", "num = 14*2+5/2**2\n", "text = 'The result of the day is: '\n", "print (text + repr(num))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# max() finds the largest number in the set\n", "\n", "bunch_of_numbers = [3,6,12,67,578,2,5,12,-34]\n", "\n", "print('The largest number is: ' + repr(max(bunch_of_numbers)))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The more interesting case is creating your own functions in your own needs. This works by **defining** the function as follows:\n", "\n", "``` Python\n", "def funcName(input): \n", " do stuff\n", " return\n", "```\n", "\n", "Function doesn't actually have to return anything, if for example it's only used to print stuff." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Let's create a function that prints out half of the given number\n", "\n", "def divide_2(a):\n", " print(a/2)\n", " \n", "divide_2(6)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Let's make a addition-function, that asks the user for integers\n", "\n", "def add(x, y):\n", " summ = x + y\n", " text = '{} and {} together are {}.'.format(x, y, summ)\n", " print(text)\n", "\n", "def ownChoice():\n", " a = int(input(\"Give an integer: \"))\n", " b = int(input(\"And another one: \"))\n", " add(a, b)\n", "\n", "ownChoice() " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# How about a function that returns a given list of radians in degrees. While-loop loops through the list from the first (i=0)\n", "# element to the last one (len(list) - 1) and does the operation to each one\n", "\n", "def angling(a):\n", " b = a.copy() # list.copy() is useful so the original list doesn't change\n", " i=0\n", " while i < len(a):\n", " b[i] = b[i]*360/(2*np.pi)\n", " i+=1\n", " return b;\n", "\n", "rads = [5,2,4,2,1,3]\n", "angles = angling(rads)\n", "print('Radians: ', rads)\n", "print('Angles: ', angles)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# The same using for-loop:\n", "\n", "def angling2(a):\n", " b = a.copy()\n", " for i in range(0,len(a)):\n", " b[i] = b[i]*360/(2*np.pi)\n", " return b;\n", " \n", "rad = [1,2,3,5,6]\n", "angle = angling2(rad)\n", "print('Radians: ', rad)\n", "print('Angles: ', angle)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "### 5. Creating random data" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Summary:**\n", "\n", "Random integer between lower and upper $\\rightarrow$ \n", "```Python\n", "rand.randint(lower,upper)\n", "``` \n", "Random float between 0 and 1 $\\rightarrow$ \n", "```Python\n", "rand.random() \n", "``` \n", "Choose a random (non-uniform) sample $\\rightarrow$\n", "```Python\n", "rand.choices(set, probability, k = amount) \n", "``` \n", "Generate a random sample of a given size $\\rightarrow$ \n", "```Python\n", "rand.sample(set, k = amount) \n", "``` \n", "Normal distribution $\\rightarrow$\n", "```Python\n", "rand.normalvariate(mean, standard deviation) \n", "``` \n", "Evenly spaced numbers over interval $\\rightarrow$ \n", "```Python\n", "np.linspace(begin, end, num = number of samples) \n", "``` \n", "Evenly spaced numbers over interval $\\rightarrow$ \n", "```Python\n", "np.arange(begin, end, stepsize)\n", "``` \n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "It is sometimes interesting and useful to generate simulated or random data among real data. Generating more complex simulations (such as [Monte Carlo](https://en.wikipedia.org/wiki/Monte_Carlo_method) for example) are outside of the goals of this guide, we can still look at different ways to generate random numbers. Of course you have to remember that the usual random generation methods are pseudorandom, so you might not want to use these to hide your banking accounts or to generate safety numbers. Leave that to more complex and heavier methods (you probably should just forget it and leave it to professionals)." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Let's generate a random integer between 1 and 100\n", "\n", "lottery = rand.randint(1,100)\n", "text = 'Winning number of the day is: '\n", "print (text + repr(lottery))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Generate a random float number between 0 and 1 and multiply it by 5\n", "\n", "num = rand.random()*5\n", "print(num)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Let's pick random elements from a list, but make certain elements more likely\n", "\n", "kids = ['Pete','Jack','Ida','Nelly','Paula','Bob']\n", "probabilities = [10,30,20,20,5,5]\n", "\n", "# k is how many we want to choose, choices-command might take the same name multiple times\n", "\n", "names = rand.choices(kids, weights = probabilities, k = 3)\n", "print(names)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Let's do the same without multiple choices (this is useful for teachers to pick 'volunteers')\n", "\n", "volunteers = rand.sample(kids, k = 3)\n", "print (volunteers)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Random number from a given normal distribution (mean, standard dev.)\n", "\n", "num = rand.normalvariate(3, 0.1)\n", "print(num)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Let's create an evenly spaced list of numbers between 1 and 10, and randomize it a bit\n", "\n", "numbers = np.linspace(1, 10, 200)\n", "\n", "def randomizer(a):\n", " b = a.copy()\n", " \n", " for i in range(0,len(b)):\n", " b[i] = b[i]*rand.uniform(0,b[i])\n", " return b\n", "\n", "result = randomizer(numbers)\n", "# print(numbers)\n", "# print(result)\n", "\n", "fig = plt.figure(figsize=(15, 10))\n", "plt.plot(result,'g*')\n", "plt.show()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Another method to create a list of evenly spaced numbers [a,b[ is by arange(a,b,c), where c is the stepsize.\n", "# Notice that b is not included in the result. (The result might be inconsistant if c is not an integer)\n", "\n", "numbers = np.arange(1,10,2)\n", "print(numbers)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "### 6. Plotting diagrams" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Summary:**\n", "\n", "Basic plot $\\rightarrow$\n", "```Python\n", "plt.plot(name, 'style and colour', varargin)\n", "``` \n", "\n", "Scatterplot $\\rightarrow$\n", "```Python\n", "plt.scatter(x-data, y-data, marker = 'markerstyle', color = 'colour', varargin)\n", "```\n", "\n", "Histogram $\\rightarrow$\n", "```Python\n", "plt.hist(data, amount of bins, range = (begin,end), varargin)\n", "```\n", "\n", "Legend $\\rightarrow$\n", "```Python\n", "plt.legend()\n", "```\n", "\n", "show plot $\\rightarrow$\n", "```Python\n", "plt.show()\n", "```\n", "\n", "Fitting normal distribution in data $\\rightarrow$\n", "```Python\n", "(mu, sigma) = norm.fit(data)\n", "... 
et cetera\n", "```\n", "\n", "Formatting $\\rightarrow$\n", "```Python\n", "plt.xlabel('x-axis name')\n", "plt.title('title name')\n", "fig = plt.figure(figsize = (horizontal size, vertical size))\n", "```\n", "\n", "Plotting errors$\\rightarrow$\n", "```Python\n", "plt.errorbar(val1, val2, xerr = err1, yerr = err2, fmt = 'none')\n", "```" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Diagrams might very well be the reason to use programming in scientific teaching. Even for bigger datasets it is somewhat quick and effortless to create clarifying visualizations. Next we're going to see how plotting works with Python.\n", "\n", "You can freely (end easily) change the colours and markers of the diagrams. [Here](https://matplotlib.org/api/markers_api.html?highlight=markers#module-matplotlib.markers) you can find the most important things used in plotting data, which of course is different marker styles." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Basic diagram with plot-function. If the parameters contains only one line of data, x-axis is the ordinal numbers\n", "numbers = [1,3,54,45,52,34,4,1,2,3,2,4,132,12,12,21,12,12,21,34,2,8]\n", "plt.plot(numbers, 'b*')\n", "\n", "# plt.show() should always be used if you want to see what the plot looks like. Otherwise the output shows the memory\n", "# location of the picture among other things, which we probably don't want to look at. So use this\n", "\n", "plt.show()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# It's good practice to name different plots, so the readers can understand what's going on\n", "# Here you can see how to name different sets\n", "\n", "# Two random datasets\n", "\n", "result1 = np.linspace(10, 20, 50)*rand.randint(2,5)\n", "result2 = np.linspace(10, 20, 50)*rand.randint(2,5)\n", "\n", "# Draw them both\n", "\n", "plt.plot(result1, 'r^', label = 'Measurement 1')\n", "plt.plot(result2, 'b*', label = 'Measurement 2')\n", "\n", "# Name the axes and title, with fontsize-parameter you can change the size of the font\n", "plt.xlabel('Time (s)', fontsize = 15)\n", "plt.ylabel('Speed (m/s)', fontsize = 15)\n", "plt.title('Measurements of speed \\n', fontsize = 15) # \\n creates a new line to make the picture look prettier\n", "\n", "# Let's add legend. If the loc-parameter is not defined, legend is automatically placed somewhere where it fits, usually\n", "\n", "plt.legend(loc='upper left', fontsize = 15)\n", "\n", "# and show the plot\n", "\n", "plt.show()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Just as easily we can plot trigonometric functions\n", "# Let the x-axis be an evenly spaced number line\n", "\n", "x = np.linspace(0, 10, 100)\n", "\n", "# Define the functions we're going to plot\n", "\n", "y1 = np.sin(x)\n", "y2 = np.cos(x)\n", "\n", "# and draw\n", "\n", "plt.plot(x, y1, color = 'b', label = 'sin(x)')\n", "plt.plot(x, y2, color = 'g', label = 'cos(x)')\n", "\n", "plt.legend()\n", "\n", "plt.show()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# The basic size of the pictures looks somewhat small. 
Figsize-command is going to help us making them the size we want\n", "\n", "x = np.linspace(0, 10, 100)\n", "\n", "y1 = np.sin(x)\n", "y2 = np.cos(x)\n", "\n", "# Here we define the size, you can try what different sizes look like\n", "\n", "fig = plt.figure(figsize=(15, 10))\n", "\n", "plt.plot(x, y1, color = 'b', label = 'sin(x)')\n", "plt.plot(x, y2, color = 'g', label = 'cos(x)')\n", "\n", "plt.legend()\n", "\n", "plt.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Another traditional diagram is a [scatterplot](https://matplotlib.org/api/_as_gen/matplotlib.pyplot.scatter.html), where both axes are variables. This is very common in for example physics research." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def randomizer(a):\n", " b = a.copy()\n", " for i in range(0,len(b)):\n", " b[i] = b[i]*rand.uniform(0,1)\n", " return b\n", "\n", "# Let's generate random data, where the other value is between 0 and 5, and the other between 0 and 20\n", "\n", "val1 = randomizer(np.linspace(3,5,100))\n", "val2 = randomizer(np.linspace(10,20,100))\n", "\n", "fig = plt.figure(figsize=(10,5))\n", "plt.scatter(val1, val2, marker ='*', color = 'b')\n", "plt.show()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Another scatter-example. Now both values are scattered by normal distribution, not uniforlmy random\n", "\n", "def randomizer(a):\n", " b = a.copy()\n", " for i in range(0,len(b)):\n", " b[i] = b[i]*rand.normalvariate(1, 0.1)\n", " return b\n", "\n", "val1 = randomizer(np.linspace(3,5,100))\n", "val2 = randomizer(np.linspace(10,20,100))\n", "\n", "fig = plt.figure(figsize=(10,5))\n", "plt.scatter(val1, val2, marker ='*', color = 'b', label = 'Measurements')\n", "\n", "# Just for fun: let's fit a line there using linear regression\n", "\n", "slope, intercept, r_value, p_value, std_err = stats.linregress(val1, val2)\n", "plt.plot(val1, intercept + slope*val1, 'r', label='Linreg. fit')\n", "\n", "plt.legend(fontsize = 15)\n", "plt.show()\n", "\n", "# If you want to know more about the fitted line, you can write print(slope), print(r_value) etc." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Another significant diagram is a histogram, which represents the amount of different results in the data. Histograms are fairly common, for example in (particle) physics, medical science and social sciences." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Let's make a random age distribution and create a histogram out of it\n", "def agegenerator(a):\n", " b = a.copy()\n", " for i in range(0, len(b)):\n", " b[i] = b[i]*rand.randint(1,100)\n", " return b;\n", "\n", "ages = agegenerator(np.ones(1000))\n", "\n", "fig = plt.figure(figsize = (10,5))\n", "plt.hist(ages, bins = 100, range = (0,110))\n", "\n", "plt.xlabel('Ages', fontsize = 15)\n", "plt.ylabel('Amount', fontsize = 15)\n", "plt.title('Age distribution in a sample of %i people \\n' %(len(ages)), fontsize = 15 ) \n", "\n", "plt.show()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Let's see what a histogram for particle collisions look like\n", "doublemu = pd.read_csv('http://opendata.cern.ch/record/545/files/Dimuon_DoubleMu.csv')\n", "\n", "# So this histogram is about the distribution of invariant masses (column M of the data)\n", "\n", "fig = plt.figure(figsize = (10,5))\n", "plt.hist(doublemu.M, bins = 300, range = (0,150))\n", "\n", "plt.xlabel('Invariant mass(GeV/$c^2$)', fontsize = 15)\n", "plt.ylabel('Number of events', fontsize = 15)\n", "plt.title('Distribution of invariant masses from muons \\n', fontsize = 15 ) \n", "\n", "plt.show()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Let's focus on the bump between 80 and 100 GeV. We could just set range = (80,100), but for the sake of example\n", "# we're going to crop the data and choose only the events in the specific range\n", "\n", "part = doublemu[(doublemu.M >= 80) & (doublemu.M <= 100)]\n", "\n", "\n", "fig = plt.figure(figsize = (10,5))\n", "plt.hist(part.M, bins = 200)\n", "\n", "plt.xlabel('Invariant mass (GeV/$c^2$)', fontsize = 15)\n", "plt.ylabel('Number of events', fontsize = 15)\n", "plt.title('Invariant mass distribution from muons \\n', fontsize = 15 ) \n", "\n", "plt.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In general making non-linear fits for the results requires more or less (more) coding, but in case of distributions (normal, as the invariant mass looks like, for example) Python has quite a lot of commands to make your life easier." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Here we set the limits for the fit. 
It is good practice to set these in variables in case you want to change them later, \n", "# makes it much easier\n", "\n", "lower = 87\n", "upper = 95\n", "\n", "piece = doublemu[(doublemu.M > lower) & (doublemu.M < upper)]\n", "\n", "fig = plt.figure(figsize=(15,10))\n", "\n", "# Above is the limits for the normal fit, below are the limits of how wide are we going to draw the histogram.\n", "# Note that the fit isn't for everything that's seen on the histogram\n", "\n", "shw_lower = 80\n", "shw_upper = 100\n", "\n", "area = doublemu[(doublemu.M > shw_lower) & (doublemu.M < shw_upper)]\n", "\n", "# Because the shown histogram's area is equal to 1, we have to calculate a multiplier for the fitted curve\n", "\n", "multip = len(piece)/len(area)\n", "\n", "# standard deviation and variance for the fit\n", "\n", "(mu, sigma) = norm.fit(piece.M)\n", "\n", "# Let's draw the histogram\n", "\n", "n, bins, patches = plt.hist(area.M, 300, density = 1, facecolor = 'g', alpha=0.75, histtype = 'stepfilled')\n", "\n", "# And make the fit as well\n", "\n", "y_fit = multip*norm.pdf(bins, mu, sigma)\n", "line = plt.plot(bins, y_fit, 'r--', linewidth = 2)\n", "\n", "# This heading looks bad in the code, but beautiful in the final picture. \n", "\n", "plt.title(r'$\\mathrm{Histogram\\ of\\ invariant\\ masses\\ normed\\ to\\ one:}\\ \\mu=%.3f,\\ \\sigma=%.3f$'\n", " %(mu,sigma),fontsize=15)\n", "\n", "# While we're at it, let's give the plot a grid!\n", "\n", "plt.grid(True)\n", "\n", "plt.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can also draw a histogram out of data which has no numbers. Let's take a look at [collision data from London](http://roads.data.tfl.gov.uk)." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Here's all the collisions from 2016, a bit over 40 000 different vehicles. Same events have the same AREFNO.\n", "\n", "traffic = pd.read_table('http://roads.data.tfl.gov.uk/AccidentStats/Prod/2016-gla-data-extract-vehicle.csv', sep = \",\")\n", "casualties = pd.read_table('http://roads.data.tfl.gov.uk/AccidentStats/Prod/2016-gla-data-extract-casualty.csv', sep = \",\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "traffic.head()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "casualties.head()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Let's check the collisions for ages between certain limits\n", "\n", "lower = 18\n", "upper = 25\n", "\n", "age_collisions = traffic.loc[(traffic['Driver Age'] <= upper) & (traffic['Driver Age'] >= lower)]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# What does the vehicle distribution with this age group look like?\n", "\n", "fig = plt.figure(figsize=(10,5))\n", "plt.hist(age_collisions['Vehicle Type'])\n", "\n", "# We have to rotate the xticks to see what kind of vehicles are actually used\n", "plt.xticks(rotation = 40, ha='right')\n", "\n", "plt.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Cars seems to dominate this statistic, which isn't too surprising. But ridden horse? 
We could dig deeper into this:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Let's take out all the horses from the data:\n", "\n", "horses = traffic.loc[traffic['Vehicle Type'] == '16 Ridden Horse']\n", "horses.head()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Hmm, same AREFNO, so the horses seems to have collided with each other (Veh. Impact: Front hit first, back hit first)\n", "# How severe was this collision?\n", "\n", "horseCasualties = casualties.loc[casualties['AREFNO'] == '0116TW60237']\n", "horseCasualties.head()\n", "\n", "# Protip: manually entering the ref# is not a good practice, particularly when working with larger datasets. In that \n", "# case you should make a reference to another table and compare the ref# to make this work automatically." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Luckily the collision wasn't too severe, and only one of the riders got hurt slightly." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "A word on errors when plotting data: in reality there's always some variance regarding how accurate a measurement is, or even how accurately you can measure something. These precision limits can be found out using statistical methods on the fits made for the data, they can be known for each data point separately (which often is the case in measurements made in schools). Let's make a example of this." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# As you may have noticed by now, we have defined randomizer multiple times in this document. That's not how it should \n", "# be done, as it takes away the idea of functions. However it's done this way if someone wants to check out only\n", "# this example and not the first one where this function was introduced.\n", "\n", "def randomizer(a):\n", " b = a.copy()\n", " for i in range(0,len(b)):\n", " b[i] = b[i]*rand.normalvariate(1, 0.1)\n", " return b\n", "\n", "# Let's generate the random data\n", "\n", "val1 = randomizer(np.linspace(3,5,100))\n", "val2 = randomizer(np.linspace(10,20,100))\n", "\n", "# And let's give each datapoint a random error\n", "\n", "err1 = (1/5)*randomizer(np.ones(len(val1)))\n", "err2 = randomizer(np.ones(len(val2)))\n", "\n", "fig = plt.figure(figsize=(10,5))\n", "\n", "plt.scatter(val1, val2, marker ='*', color = 'b', label = 'Measurements')\n", "plt.errorbar(val1, val2, xerr = err1, yerr = err2, fmt = 'none')\n", "\n", "# Let's throw in a fit based on linear regression as well\n", "\n", "slope, intercept, r_value, p_value, std_err = stats.linregress(val1, val2)\n", "plt.plot(val1, intercept + slope*val1, 'r', label='Fit')\n", "\n", "plt.legend(fontsize = 15)\n", "plt.show()\n", "\n", "# If you want to know more of the mathematical values of the fit, you can write print(slope), print(std_err), etc.." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "### 7. Animations" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You can pretty easily also create animations using Python. This can be done with multiple different modules, but we recommend **NOT** to use plotly with Notebooks, as it slows down everything to the point nothing can be done. In this example we're going to create an animation of a histogram which nicely shows why more data = better results." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "data = pd.read_csv('http://opendata.cern.ch/record/545/files/Dimuon_DoubleMu.csv')\n", "\n", "iMass = data.M" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Let's define the function that's going to upgrade the histogram\n", "# variable num is basically the frame number\n", "# So the way animations work is that this function calculates a new histogram for each frame \n", "\n", "def updt_hist(num, iMass):\n", " plt.cla()\n", " axes = plt.gca()\n", " axes.set_ylim(0,8000)\n", " axes.set_xlim(0,200)\n", " plt.hist(iMass[:num*480], bins = 120)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "NOTE: cells including animations are $\\Large \\textbf{ slow }$ to run. The more frames the more time it takes to run." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Required for animations\n", "import matplotlib.animation" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%capture\n", "fig = plt.figure()\n", " \n", "# fargs tells which variables the function (updt_hist) is going to take in, the empty variable is required\n", "# so the program knows that there's two variables used in the function. The other one is automatically\n", "# the current frame\n", "anim = matplotlib.animation.FuncAnimation(fig, updt_hist, frames = 200, fargs=(iMass, ) )\n", "\n", "# anim.to_jshtml() changes the animation to (javascript)html, so it can be shown on Notebook\n", "from IPython.display import HTML\n", "HTML(anim.to_jshtml())" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The above cell doesn't give output because of the ```%%capture``` -magic command. This is done because otherwise we'd get two different pictures of the animation. It looks prettier this way." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "HTML(anim.to_jshtml())" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "### 8. Maps and heatmaps" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Using interactive maps in Jupyter Notebook so you can plot data on them? Yes please! Using them is much simpler than it sounds. In this example you'll see how. The data you're going to plot just needs to have latitude and longitude columns so you can plot it (or some other coordinate system from which you can calculate latitude and longitude)." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Folium has maps:\n", "import folium\n", "\n", "# We're also going to need a way to plot a heatmap:\n", "from folium.plugins import HeatMap" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# The data includes all earthquake data from the last month, chances are that the newest data of the set are from\n", "# last night or this morning\n", "quakeData = pd.read_csv('https://earthquake.usgs.gov/earthquakes/feed/v1.0/summary/all_month.csv')\n", "quakeData.head()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# This is required as the data we have now is in dataframe, and HeatMap-function reads lists\n", "\n", "# First let's make long enough list, in this variable we're going to save the data\n", "dat = [0]*len(quakeData)\n", "\n", "# The list is going to consist of tuples containing latitude, longitude and magnitude \n", "# (magnitude is not required, but it's nice to have in case you want to plot only quakes above \n", "# a certain magnitude for example)\n", "for i in range(0, len(quakeData)):\n", " dat[i] = [quakeData['latitude'][i], quakeData['longitude'][i], quakeData['mag'][i]]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# There's some (one) data about earthquakes that don't include magnitude (saved as NaN) so\n", "# we have to remove these values\n", "\n", "dat = [x for x in dat if ~np.isnan(x[2])]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Different map tiles: https://deparkes.co.uk/2016/06/10/folium-map-tiles/\n", "# world_copy_jump = True tells us that the map can be scrolled to the side and the data can be seen there as well\n", "# If you want the map to be a 'single' map you can put an extra argument no_wrap = True\n", "# With control_scale you can see the scale on the bottom left corner\n", "\n", "m = folium.Map([15., -75.], tiles='openstreetmap', zoom_start=3, world_copy_jump = True, control_scale = True)\n", "\n", "HeatMap(dat, radius = 15).add_to(m)\n", "\n", "m" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's check another example where we have to chance the coordinate system. This dataset uses a easting-northing system which isn't too different from easting-northing known from the [UTM](https://en.wikipedia.org/wiki/Universal_Transverse_Mercator_coordinate_system). You can find more about the coordinate system on the page 38 on [this](https://www.ordnancesurvey.co.uk/docs/support/guide-coordinate-systems-great-britain.pdf) and see that the conversion isn't too trivial, if it interests you." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "collData = pd.read_csv('https://files.datapress.com/london/dataset/road-casualties-severity-borough/TFL-road-casualty-data-since-2005.csv')\n", "collData.head()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Luckily someone has encountered this grid system before and we don't have to do the conversion ourselves\n", "from OSGridConverter import grid2latlong" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Ignore collisions where the severity is 'slight'\n", "part = collData[collData['Casualty_Severity'] != '3 Slight']" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# In this example the conversion is done in two steps just to show how it's done, makes \n", "# the code more readable\n", "\n", "# coords is used to temporarily store the lat&lon data as grid2latlong function returns just one row\n", "coords = [0]*len(part)\n", "\n", "# And we have to iterate the whole dataset row by row..\n", "# Plus since the coordinates in the datasets don't have the area (TQ, where London is located in), but\n", "# the area is told in the first 2 numbers in each easting and northing values we have\n", "# to choose everything else in them using syntax (name)[1:], which ignores the first 2 numbers\n", "# ALSO they are saved as integers and grid2latlong takes in strings, so we have to chance \n", "# the datatype by using str(value)\n", "# This cell might run for a while\n", "i = 0\n", "for index, row in part.iterrows():\n", " coords[i] = grid2latlong('TQ' + str(row['Easting'])[1:] + str(row['Northing'])[1:])\n", " i += 1" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Because of the type grid2latlong returns, we have to create a new variable (list) so we can use\n", "# the values with the map\n", "latlong = [[0,0]]*len(coords)\n", "\n", "# for each value in coords we choose it's latitude and longitude values and save them\n", "# in i:th row of latlong\n", "for i in range (0,len(coords)):\n", " latlong[i] = [coords[i].latitude,coords[i].longitude]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "m = folium.Map([51.5,-0.1], zoom_start=9, world_copy_jump = True, control_scale = True)\n", "\n", "HeatMap(latlong, radius = 10).add_to(m)\n", "\n", "m" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Wouldn't it be nice if everyone used the same coordinate system?" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "### 9. Problems? Check here" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Summary:** \n", "\n", "Bohoo, I can't? \n", "Cell seems stuck and doesn't draw the plot or run the code? \n", "I get an error 'name is not defined' or 'name does not exist'? \n", "I tried to save something in to a variable but print(name) tells me None? \n", "My data won't load? \n", "The data I loaded contains some NaN values? \n", "I combined pieces of data but now I can't do things with the new variable? \n", "My code doesn't work, even if it's correctly written? \n", "The dates in the data are confusing the program, how do I fix this? \n", "I copied the data in to a new variable, but the changes to it also changes the original data?" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Bohoo, I can't?" 
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "No problem, nobody starts as a champion. You learn by doing and errors are part of it (some say 90% of coding is fixing errors..). \n", "\n", "Using Python there's this one great thing: there are A LOT of users. No matter the problem, chances are someone has faced it already and posted a solution online. Googling the problem usually gives the right answer within the first few results.\n", "\n", "Here's fixes to some common problematic situations (which we faced when making this document)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Cell seems stuck and doesn't draw the plot or run the code?" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If running the cell takes longer than a few seconds, without it being needlessly complicated or handling **large** datasets, it's probably stuck in an infinite loop. You should stop the kernel (by choosing ***Kernel $\\rightarrow$ Interrupt*** from the top bar or pressing the square right below it) and check your code for possible errors. If you can't find the problem try to simplify the syntax, until you're positibe there's nothing wrong with your code. (Sometimes also just resetting the kernel and running the cells again makes it work.)\n", "\n", "One common problem is that a syntax-error makes the program do something wrong. For example: you're drawing a histogram but forgot to choose a specific column. Now the program tries to create a histogram of the whole data, which it obviously can't do without further specifications. " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### I get an error 'name is not defined' or 'name does not exist'? " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The variable you're referring to doesn't exist. Check that you've run the cell where the variable is defined during this session. Also make sure that the variable name is correct, as they are case-sensitive. " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### I tried to save something in to a variable but print(name) tells me None? " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "There really isn't anything in the variable. Remember to save the changes you make in to a variable, for example\n", "\n", "```Python\n", "var = load(data)\n", "var = var*2\n", "```\n", "\n", "and not \n", "```Python\n", "var = load(data)\n", "var*2\n", "```\n", "\n", "Make sure that the operation you want to make is right so it doesn't delete the data by accident (or do anything unexpected)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### My data won't load? " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You can check what text-based data (such as .csv) looks like using the most basic text editors. Now you can see how the data is separated, what rows contain the information you want or is the dataset even the one you wanted.\n", "\n", "Separators, headers and such can be defined in the arguments of the read_csv and read_table functions, for example\n", "```Python\n", "pd.read_csv('file.csv', sep = ';')\n", "```\n", "would load a csv file named file.csv (in the same folder), with ';' as a separator. More on this you can find in chapter 3 of this document." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### The data I loaded contains some NaN values? " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "NaN stands for Not-A-Number, and it's commonly used in computer sciences. 
Either the data at that point is strange (like sqrt(-1)) or it simply doesn't exist. \n", "\n", "Functions usually don't care about these NaN values, or you can pass an argument so that the function ignores them. " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### I combined pieces of data but now I can't do things with the new variable? " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Did you combine different kinds of data types? Usually this isn't a problem with Python, as it automatically decides the type of each variable, but sometimes it can create problems if you combine integer, float or string type variables. In datasets even numbers are sometimes saved as a 'string', which is unfortunate to notice only after doing something with the data. In Python there are functions which can check the type of a variable, such as type() or isinstance(). \n", "\n", "Did you combine the data correctly? If you wanted the columns next to each other, you probably shouldn't combine them in a way that stacks them on top of each other. You can check what your variable holds with the varname.shape or varname.head() commands." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### My code doesn't work, even if it's correctly written? " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Check the code once more. If there's a comma in the wrong place or a character in the variable's name is in the wrong case, it creates problems.\n", "\n", "If the code _really_ doesn't work, even though it should, the reason might be the kernel. Try ***Restart & Clear Output*** from the Kernel menu in the top bar; this usually fixes it. " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### The dates in the data are confusing the program, how do I fix this? " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As you're probably aware, different kinds of date formats are used all over the world. If the default settings don't make the data behave correctly, you can check the [documentation](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html) of pandas.read_csv() for how to change the date settings. **dayfirst** or **date_parser** might solve the problem. There's also a Python module named **[time](https://docs.python.org/3/library/time.html)**, in which you can surely find solutions for these kinds of situations." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### I copied the data into a new variable, but changes to it also change the original data?" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Instead of saving the actual data into the new variable, Python copies a _pointer_ there. Pointers tell where the data is saved in memory. When creating a new variable like this \n", "```Python\n", "new_var = old_var\n", "```\n", "Python just copies the pointer, and the two variables are practically the same. However, if you only take part of the original data and save it into a new variable, it creates a copy of the actual data instead of a pointer. 
If you want the whole data in two places (if, for example, you want one variable to hold the data multiplied and then compare it to the original data; not sure why someone would want to do this, but you never know when working with humans), you should use the command .copy():\n", "```Python\n", "new_var = old_var.copy()\n", "```\n", "This copies the actual data into a new memory location, and changes to new_var won't affect old_var." ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.4" } }, "nbformat": 4, "nbformat_minor": 2 }