{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Introduction to Numpy and Scipy\n",
"\n",
"
"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {
"nbsphinx": "hidden",
"tags": []
},
"outputs": [],
"source": [
"# Colab setup ------------------\n",
"import os, sys, subprocess\n",
"if \"google.colab\" in sys.modules:\n",
" cmd = \"pip install --upgrade watermark\"\n",
" process = subprocess.Popen(cmd.split(), stdout=subprocess.PIPE, stderr=subprocess.PIPE)\n",
" stdout, stderr = process.communicate()"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"import scipy.special"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"\n",
"In this lesson, you will learn about [Numpy](http://www.numpy.org), arguably *the* most important package for scientific computing, and Scipy, a package containing lots of goodies for scientific computing, like special functions and numerical integrators. "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## A very brief introduction to Numpy arrays\n",
"\n",
"The central object for Numpy and Scipy is the `ndarray`, commonly referred to as a \"Numpy array.\" This is an array object that is convenient for scientific computing. We will go over it in depth in the next lesson, but for now, let's just create some Numpy arrays and see how operators work on them.\n",
"\n",
"Just like with type conversions with lists, tuples, and other data types we've looked at, we can convert a list to a Numpy array using\n",
"\n",
" np.array()\n",
" \n",
"Note that above we imported the Numpy package \"`as np`\". This is for convenience; it allow us to use `np` as a prefix instead of `numpy`. Numpy is in **very** widespread use, and the convention is to use the `np` abbreviation."
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"array([1, 2, 3, 4])"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Create a Numpy array from a list\n",
"my_ar = np.array([1, 2, 3, 4])\n",
"\n",
"# Look at it\n",
"my_ar"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We see that the list has been converted, and it is explicitly shown as an array. It has several attributes and lots of methods. The most important attributes are probably the data type of its elements and the shape of the array."
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"dtype('int64')"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# The data type of stored entries\n",
"my_ar.dtype"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"(4,)"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# The shape of the array\n",
"my_ar.shape"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"There are also lots of methods. The one I use most often is `astype()`, which converts the data type of the array."
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"array([1., 2., 3., 4.])"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"my_ar.astype(float)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"There are many others. For example, we can compute summary statistics about the entries in the array."
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"4\n",
"1\n",
"10\n",
"2.5\n",
"1.118033988749895\n"
]
}
],
"source": [
"print(my_ar.max())\n",
"print(my_ar.min())\n",
"print(my_ar.sum())\n",
"print(my_ar.mean())\n",
"print(my_ar.std())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Importantly, Numpy arrays can be arguments to Numpy functions. In this case, these functions do the same operations as the methods we just looked at."
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"4\n",
"1\n",
"10\n",
"2.5\n",
"1.118033988749895\n"
]
}
],
"source": [
"print(np.max(my_ar))\n",
"print(np.min(my_ar))\n",
"print(np.sum(my_ar))\n",
"print(np.mean(my_ar))\n",
"print(np.std(my_ar))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Other ways to make Numpy arrays\n",
"\n",
"There are many other ways to make Numpy arrays besides just converting lists or tuples. Below are some examples."
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"array([0., 0., 0., 0., 0., 0., 0., 0., 0., 0.])"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# How long our arrays will be\n",
"n = 10\n",
"\n",
"# Make a Numpy array of length n filled with zeros\n",
"np.zeros(n)"
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"array([1., 1., 1., 1., 1., 1., 1., 1., 1., 1.])"
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Make a Numpy array of length n filled with ones\n",
"np.ones(n)"
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"array([1., 1., 1., 1., 1., 1., 1., 1., 1., 1.])"
]
},
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Make an empty Numpy array of length n without initializing entries\n",
"# (while it initially holds whatever values were previously in the memory\n",
"# locations assigned, ones will be displayed)\n",
"np.empty(n)"
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"array([[0, 0],\n",
" [0, 0]])"
]
},
"execution_count": 12,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Make a Numpy array filled with zeros the same shape as another Numpy array\n",
"my_ar = np.array([[1, 2], [3, 4]])\n",
"np.zeros_like(my_ar)"
]
},
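  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Two other constructors you will often encounter (a quick aside; they are not used in the rest of this lesson) are `np.arange()`, which works like Python's `range()` but returns an array, and `np.linspace()`, which gives evenly spaced values between two endpoints."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Integers 0 through 9, like range(), but as a Numpy array\n",
    "print(np.arange(10))\n",
    "\n",
    "# Five evenly spaced values from 0 to 1, inclusive of both endpoints\n",
    "print(np.linspace(0, 1, 5))"
   ]
  },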
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We will talk more about the `np.random` submodule later in the course, but for now, we will show that you can make an array full of random numbers and use this array in our examples going further."
]
},
{
"cell_type": "code",
"execution_count": 13,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"array([0.72363619, 0.80625687, 0.75507222, 0.47529264, 0.21808614,\n",
" 0.10797734, 0.8419304 , 0.26319505, 0.56976174, 0.51605265,\n",
" 0.36873593, 0.06029085, 0.26310492, 0.01178828, 0.89250854,\n",
" 0.84298439, 0.8336817 , 0.1566005 , 0.21741298, 0.39289867,\n",
" 0.8466088 , 0.74271579, 0.06722195, 0.98538097])"
]
},
"execution_count": 13,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Make an array of random numbers between zero and one\n",
"np.random.seed(3252)\n",
"rand_array = np.random.random(24)\n",
"\n",
"rand_array"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Slicing Numpy arrays\n",
"\n",
"We can slice Numpy arrays like lists and tuples. Here are a few examples."
]
},
{
"cell_type": "code",
"execution_count": 14,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"array([0.98538097, 0.06722195, 0.74271579, 0.8466088 , 0.39289867,\n",
" 0.21741298, 0.1566005 , 0.8336817 , 0.84298439, 0.89250854,\n",
" 0.01178828, 0.26310492, 0.06029085, 0.36873593, 0.51605265,\n",
" 0.56976174, 0.26319505, 0.8419304 , 0.10797734, 0.21808614,\n",
" 0.47529264, 0.75507222, 0.80625687, 0.72363619])"
]
},
"execution_count": 14,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Reversed array\n",
"rand_array[::-1]"
]
},
{
"cell_type": "code",
"execution_count": 15,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"array([0.47529264, 0.56976174, 0.01178828, 0.21741298, 0.98538097])"
]
},
"execution_count": 15,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Every 5th element, starting at index 3\n",
"rand_array[3::5]"
]
},
{
"cell_type": "code",
"execution_count": 16,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"array([0.36873593, 0.06029085, 0.26310492, 0.01178828, 0.89250854,\n",
" 0.84298439, 0.8336817 , 0.1566005 , 0.21741298, 0.39289867,\n",
" 0.8466088 ])"
]
},
"execution_count": 16,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Entries 10 to 20\n",
"rand_array[10:21]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Fancy indexing\n",
"\n",
"Numpy arrays also allow **fancy indexing**, where we can slice out specific values. For example, say we wanted indices 1, 19, and 6 (in that order) from `rand_array`. We just index with a list of the indices we want."
]
},
{
"cell_type": "code",
"execution_count": 17,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"array([0.80625687, 0.39289867, 0.8419304 ])"
]
},
"execution_count": 17,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"rand_array[[1, 19, 6]]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Instead of a list, we could also use a Numpy array."
]
},
{
"cell_type": "code",
"execution_count": 18,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"array([0.80625687, 0.39289867, 0.8419304 ])"
]
},
"execution_count": 18,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"rand_array[np.array([1, 19, 6])]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a very nice feature, we can use **Boolean indexing** with Numpy arrays. Say we only want the random numbers greater than 0.7. We an use the `>` operator to find out which entries in the array are in fact greater than 0.7."
]
},
{
"cell_type": "code",
"execution_count": 19,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"array([ True, True, True, False, False, False, True, False, False,\n",
" False, False, False, False, False, True, True, True, False,\n",
" False, False, True, True, False, True])"
]
},
"execution_count": 19,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"rand_array > 0.7"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This gives a Numpy array of `True`s and `False`s. We can then use this array of Booleans to index which entries in the array we want to keep."
]
},
{
"cell_type": "code",
"execution_count": 20,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"array([0.72363619, 0.80625687, 0.75507222, 0.8419304 , 0.89250854,\n",
" 0.84298439, 0.8336817 , 0.8466088 , 0.74271579, 0.98538097])"
]
},
"execution_count": 20,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Just slice out the big ones\n",
"rand_array[rand_array > 0.7]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"If we want to know the *indices* where the values are high, we can use the `np.where()` function."
]
},
{
"cell_type": "code",
"execution_count": 21,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"(array([ 0, 1, 2, 6, 14, 15, 16, 20, 21, 23]),)"
]
},
"execution_count": 21,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"np.where(rand_array > 0.7)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Numpy arrays are mutable\n",
"\n",
"Yes, Numpy arrays are mutable. Let's look at some consequences."
]
},
{
"cell_type": "code",
"execution_count": 22,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"array([1, 2, 6, 4])"
]
},
"execution_count": 22,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Make an array\n",
"my_ar = np.array([1, 2, 3, 4])\n",
"\n",
"# Change an element\n",
"my_ar[2] = 6\n",
"\n",
"# See the result\n",
"my_ar"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now, let's try working attaching another variable to the Numpy array."
]
},
{
"cell_type": "code",
"execution_count": 23,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"array([1, 2, 6, 9])"
]
},
"execution_count": 23,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Attach a new variable\n",
"my_ar2 = my_ar\n",
"\n",
"# Set an entry using the new variable\n",
"my_ar2[3] = 9\n",
"\n",
"# Does the original change? (yes.)\n",
"my_ar"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's see how messing with Numpy in functions affects things."
]
},
{
"cell_type": "code",
"execution_count": 24,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"array([0.1, 0.2, 0.3, 0.4])"
]
},
"execution_count": 24,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Re-instantiate my_ar\n",
"my_ar = np.array([1, 2, 3, 4]).astype(float)\n",
"\n",
"# Function to normalize x (note that /= works with mutable objects)\n",
"def normalize(x):\n",
" x /= np.sum(x)\n",
"\n",
"# Pass it through a function\n",
"normalize(my_ar)\n",
"\n",
"# Is it normalized even though we didn't return anything? (Yes.)\n",
"my_ar"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"So, be careful when writing functions. What you do to your Numpy array inside the function will happen outside of the function as well. Always remember that:\n",
"\n",
"
\n",
" \n",
"**Numpy arrays are mutable.**\n",
"\n",
"
"
]
},
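  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "If you want a function that leaves its input untouched, one option is to make a copy inside the function and return the result. Below is a minimal sketch (the function name `normalize_safe` is just for illustration; it uses `np.copy()`, which we introduce shortly)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def normalize_safe(x):\n",
    "    \"\"\"Return a normalized copy of x without modifying x itself.\"\"\"\n",
    "    # np.copy() makes a new array, so the caller's array is untouched\n",
    "    x = np.copy(x)\n",
    "    x /= np.sum(x)\n",
    "    return x\n",
    "\n",
    "# The original array is unchanged; the normalized values are returned\n",
    "my_ar = np.array([1, 2, 3, 4]).astype(float)\n",
    "normalize_safe(my_ar), my_ar"
   ]
  },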
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Slices of Numpy arrays are **views**, not copies\n",
"\n",
"A very important distinction between Numpy arrays and lists is that slices of Numpy arrays are **views** into the original Numpy array, NOT copies."
]
},
{
"cell_type": "code",
"execution_count": 25,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[1, 2, 3, 4]\n",
"[1 9 3 4]\n"
]
}
],
"source": [
"# Make list and array\n",
"my_list = [1, 2, 3, 4]\n",
"my_ar = np.array(my_list)\n",
"\n",
"# Slice out of each\n",
"my_list_slice = my_list[1:-1]\n",
"my_ar_slice = my_ar[1:-1]\n",
"\n",
"# Mess with the slices\n",
"my_list_slice[0] = 9\n",
"my_ar_slice[0] = 9\n",
"\n",
"# Look at originals\n",
"print(my_list)\n",
"print(my_ar)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Messing with an element of a slice of a Numpy array messes with that element in the original! This is not the case with lists. Remember this!\n",
"\n",
"
\n",
" \n",
"**Slices of Numpy arrays are views, not copies.**\n",
"\n",
"
\n",
"\n",
"Fortunately, you can make a copy of an array using the `np.copy()` function."
]
},
{
"cell_type": "code",
"execution_count": 26,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"False"
]
},
"execution_count": 26,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Make a copy\n",
"rand_array_copy = np.copy(rand_array)\n",
"\n",
"# Mess with an entry\n",
"rand_array_copy[10] = 2000\n",
"\n",
"# Check equality\n",
"np.allclose(rand_array, rand_array_copy)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"So, messing with an entry in the copy did not affect the original."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Mathematical operations with arrays"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Mathematical operations on arrays are done elementwise to all elements."
]
},
{
"cell_type": "code",
"execution_count": 27,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"array([5. , 3. , 2.33333333, 2. ])"
]
},
"execution_count": 27,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Divide one array be another\n",
"np.array([5, 6, 7, 8]) / np.array([1, 2, 3, 4])"
]
},
{
"cell_type": "code",
"execution_count": 28,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"array([-2.89454476, -3.22502748, -3.02028886, -1.90117055, -0.87234457,\n",
" -0.43190937, -3.36772162, -1.05278019, -2.27904696, -2.06421061,\n",
" -1.47494371, -0.2411634 , -1.05241968, -0.0471531 , -3.57003416,\n",
" -3.37193757, -3.3347268 , -0.62640202, -0.86965191, -1.5715947 ,\n",
" -3.38643522, -2.97086315, -0.26888778, -3.94152387])"
]
},
"execution_count": 28,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Multiply by scalar\n",
"-4 * rand_array"
]
},
{
"cell_type": "code",
"execution_count": 29,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"array([5.23649337e-01, 6.50050142e-01, 5.70134051e-01, 2.25903092e-01,\n",
" 4.75615653e-02, 1.16591066e-02, 7.08846805e-01, 6.92716325e-02,\n",
" 3.24628441e-01, 2.66310340e-01, 1.35966185e-01, 3.63498667e-03,\n",
" 6.92241987e-02, 1.38963448e-04, 7.96571494e-01, 7.10622684e-01,\n",
" 6.95025179e-01, 2.45237178e-02, 4.72684032e-02, 1.54369369e-01,\n",
" 7.16746467e-01, 5.51626742e-01, 4.51879002e-03, 9.70975652e-01])"
]
},
"execution_count": 29,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Raise to power\n",
"rand_array**2"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Indexing 2D Numpy arrays\n",
"\n",
"Numpy arrays need not be one-dimensional. We'll create a two-dimensional Numpy array by reshaping our `rand_array` array from having shape `(24,)` to having shape `(6, 4)`. That is, it will become an array with 6 rows and 4 columns."
]
},
{
"cell_type": "code",
"execution_count": 30,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"array([[0.72363619, 0.80625687, 0.75507222, 0.47529264],\n",
" [0.21808614, 0.10797734, 0.8419304 , 0.26319505],\n",
" [0.56976174, 0.51605265, 0.36873593, 0.06029085],\n",
" [0.26310492, 0.01178828, 0.89250854, 0.84298439],\n",
" [0.8336817 , 0.1566005 , 0.21741298, 0.39289867],\n",
" [0.8466088 , 0.74271579, 0.06722195, 0.98538097]])"
]
},
"execution_count": 30,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# New 2D array using the reshape() method\n",
"rand_array_2d = rand_array.reshape((6, 4))\n",
"\n",
"# Look at it\n",
"rand_array_2d"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Notice that it is represented as an array made out of a list of lists. If we had a list of lists, we would index it like this:\n",
"\n",
" list_of_lists[i][j]"
]
},
{
"cell_type": "code",
"execution_count": 31,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"2"
]
},
"execution_count": 31,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Make list of lists\n",
"list_of_lists = [[1, 2], [3, 4]]\n",
"\n",
"# Pull out value in first row, second column\n",
"list_of_lists[0][1]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Though this will work with Numpy arrays, this is *not* how Numpy arrays are indexed. They are indexed much more conveniently."
]
},
{
"cell_type": "code",
"execution_count": 32,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"0.8062568709113288"
]
},
"execution_count": 32,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"rand_array_2d[0, 1]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We essentially have a tuple in the indexing brackets. Now, say we wanted row with index 2 (remember that indexing starting at 0)."
]
},
{
"cell_type": "code",
"execution_count": 33,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"array([0.56976174, 0.51605265, 0.36873593, 0.06029085])"
]
},
"execution_count": 33,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"rand_array_2d[2, :]"
]
},
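  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "In the same way, we can slice out a column by putting the colon in the row position (a small example added for completeness)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# All rows, column with index 1\n",
    "rand_array_2d[:, 1]"
   ]
  },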
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can use Boolean indexing as before."
]
},
{
"cell_type": "code",
"execution_count": 34,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"array([0.72363619, 0.80625687, 0.75507222, 0.8419304 , 0.89250854,\n",
" 0.84298439, 0.8336817 , 0.8466088 , 0.74271579, 0.98538097])"
]
},
"execution_count": 34,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"rand_array_2d[rand_array_2d > 0.7]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Note that this gives a one-dimensional array of the entries greater than 0.7. If we wanted indices where this is the case, we can again use `np.where()`."
]
},
{
"cell_type": "code",
"execution_count": 35,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"(array([0, 0, 0, 1, 3, 3, 4, 5, 5, 5]), array([0, 1, 2, 2, 2, 3, 0, 0, 1, 3]))"
]
},
"execution_count": 35,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"np.where(rand_array_2d > 0.7)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This tuple of Numpy arrays is how we would pull those values out using fancy indexing."
]
},
{
"cell_type": "code",
"execution_count": 36,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"array([0.47529264, 0.8419304 , 0.26319505, 0.06029085, 0.39289867,\n",
" 0.8466088 ])"
]
},
"execution_count": 36,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"rand_array_2d[(np.array([0, 1, 1, 2, 4, 5]), np.array([3, 2, 3, 3, 3, 0]))]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Numpy arrays can be of arbitrary integer dimension, and these principles extrapolate to 3D, 4D, etc., arrays."
]
},
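  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick illustration (a minimal sketch; the 3D array is not used again in this lesson), we can reshape the same 24 random numbers into a 3D array and index it with three indices."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Reshape the 24 random numbers into a 2×3×4 array\n",
    "rand_array_3d = rand_array.reshape((2, 3, 4))\n",
    "\n",
    "# Index with three indices: \"block\" 1, row 2, column 0\n",
    "rand_array_3d[1, 2, 0]"
   ]
  },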
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Concatenating arrays\n",
"\n",
"We can tape two arrays together if we like. The `np.concatenate()` function accomplishes this. We simply have to pass it a tuple containing the Numpy arrays we want to concatenate."
]
},
{
"cell_type": "code",
"execution_count": 37,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"array([0.72363619, 0.80625687, 0.75507222, 0.47529264, 0.21808614,\n",
" 0.10797734, 0.8419304 , 0.26319505, 0.56976174, 0.51605265,\n",
" 0.36873593, 0.06029085, 0.26310492, 0.01178828, 0.89250854,\n",
" 0.84298439, 0.8336817 , 0.1566005 , 0.21741298, 0.39289867,\n",
" 0.8466088 , 0.74271579, 0.06722195, 0.98538097, 0.76464333,\n",
" 0.21468332, 0.46706424, 0.43516852, 0.218179 , 0.85635848,\n",
" 0.11666964, 0.19284475, 0.97663249, 0.51806187])"
]
},
"execution_count": 37,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"another_rand_array = np.random.random(10)\n",
"\n",
"combined = np.concatenate((rand_array, another_rand_array))\n",
"\n",
"# Look at it\n",
"combined"
]
},
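  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "`np.concatenate()` also works on multidimensional arrays; the `axis` keyword argument sets the dimension along which the arrays are taped together. A small sketch with 2D arrays (not used later in the lesson):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "a = np.array([[1, 2], [3, 4]])\n",
    "b = np.array([[5, 6], [7, 8]])\n",
    "\n",
    "# Stack the rows of b below the rows of a (axis=0 is the default)\n",
    "print(np.concatenate((a, b), axis=0))\n",
    "\n",
    "# Put the columns of b to the right of the columns of a\n",
    "print(np.concatenate((a, b), axis=1))"
   ]
  },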
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Numpy has useful mathematical functions\n",
"\n",
"So far, we have not done much mathematics with Python. We have done some adding and division, but nothing like computing a logarithm or cosine. The Numpy functions also work elementwise on the arrays when it is intuitive to do so. That is, they apply the function to each entry in the array. Check it out."
]
},
{
"cell_type": "code",
"execution_count": 38,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"array([2.06191712, 2.23950951, 2.12776518, 1.60848483, 1.2436942 ,\n",
" 1.1140225 , 2.32084282, 1.30108047, 1.7678458 , 1.67540119,\n",
" 1.44590573, 1.06214543, 1.30096321, 1.01185803, 2.44124594,\n",
" 2.32329025, 2.30177762, 1.1695283 , 1.24285727, 1.48126829,\n",
" 2.33172609, 2.10163537, 1.06953283, 2.67883224])"
]
},
"execution_count": 38,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Exponential\n",
"np.exp(rand_array)"
]
},
{
"cell_type": "code",
"execution_count": 39,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"array([-0.16489218, 0.34615756, 0.03186428, -0.98797431, 0.19917961,\n",
" 0.77855165, 0.54602806, -0.08281198, -0.90546344, -0.99491776,\n",
" -0.67873582, 0.9291022 , -0.08224763, 0.99725823, 0.78046395,\n",
" 0.55156407, 0.50189441, 0.55373776, 0.20332268, -0.78199416,\n",
" 0.57041499, -0.04575207, 0.91212083, 0.99578438])"
]
},
"execution_count": 39,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Cosine\n",
"np.cos(2*np.pi*rand_array)"
]
},
{
"cell_type": "code",
"execution_count": 40,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"array([0.85066809, 0.89791808, 0.86894891, 0.68941471, 0.46699694,\n",
" 0.32859906, 0.91756766, 0.51302539, 0.75482564, 0.71836805,\n",
" 0.6072363 , 0.24554195, 0.51293754, 0.10857383, 0.9447267 ,\n",
" 0.91814181, 0.91306172, 0.39572782, 0.46627565, 0.6268163 ,\n",
" 0.92011347, 0.8618096 , 0.25927195, 0.99266357])"
]
},
"execution_count": 40,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Square root\n",
"np.sqrt(rand_array)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can even do some matrix operations (which are obviously not done elementwise), like dot products."
]
},
{
"cell_type": "code",
"execution_count": 41,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"8.279227344296817"
]
},
"execution_count": 41,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"np.dot(rand_array, rand_array)"
]
},
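  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For 2D arrays, `np.dot()` (or the `@` operator) performs matrix multiplication rather than an elementwise product. A tiny sketch (not used elsewhere in this lesson):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "A = np.array([[1, 2], [3, 4]])\n",
    "v = np.array([5, 6])\n",
    "\n",
    "# Matrix-vector product: A @ v is equivalent to np.dot(A, v)\n",
    "A @ v"
   ]
  },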
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Numpy also has useful attributes, like `np.pi` (which we already snuck into a code cell above)."
]
},
{
"cell_type": "code",
"execution_count": 42,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"3.141592653589793"
]
},
"execution_count": 42,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"np.pi"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Scipy has even more useful functions (in modules)\n",
"\n",
"Scipy actually began life as a library of special functions that operate on Numpy arrays. For example, we can compute an error function using the `scipy.special` module, which contains lots of special functions. Note that you often have to individually import the Scipy module you want to use, for example with\n",
" \n",
"```python\n",
"import scipy.special\n",
"```"
]
},
{
"cell_type": "code",
"execution_count": 43,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"array([0.69386995, 0.74580509, 0.71440432, 0.49852153, 0.24223752,\n",
" 0.12136752, 0.7662166 , 0.29026648, 0.57962151, 0.53449285,\n",
" 0.39796154, 0.0679486 , 0.29017159, 0.01330103, 0.79312234,\n",
" 0.76680147, 0.7616034 , 0.17527083, 0.24151311, 0.42154482,\n",
" 0.76880477, 0.70644679, 0.07573775, 0.83654318])"
]
},
"execution_count": 43,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"scipy.special.erf(rand_array)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"There are many Scipy submodules which give plenty or rich functionality for scientific computing. You can check out the [Scipy docs](https://docs.scipy.org/doc/scipy-1.3.0/reference/) to learn about all of the functionality. In my own work, I use the following extensively.\n",
"\n",
"- `scipy.special`: Special functions.\n",
"- `scipy.stats`: Functions for statistical analysis.\n",
"- `scipy.optimize`: Numerical optimization.\n",
"- `scipy.integrate`: Numerical solutions to differential equations.\n",
"- `scipy.interpolate`: Smooth interpolation of functions.\n",
"\n",
"We will use `scipy.stats` and `scipy.optimize` in the second half of the course."
]
},
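  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To give a flavor of these submodules (a minimal sketch; nothing here is needed for the rest of the lesson), `scipy.integrate` also provides numerical quadrature. Its `quad()` function evaluates a definite integral, here the integral of sin(x) from 0 to π, which is 2."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import scipy.integrate\n",
    "\n",
    "# quad() returns the value of the integral and an estimate of the\n",
    "# absolute error in that value\n",
    "result, abs_err = scipy.integrate.quad(np.sin, 0, np.pi)\n",
    "\n",
    "result, abs_err"
   ]
  },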
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Numpy and Scipy are highly optimized\n",
"\n",
"Importantly, Numpy and Scipy routines are often *fast*. To understand why, we need to think a bit about how your computer actually runs code you write."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Interpreted and compiled languages\n",
"\n",
"We have touched on the fact that Python is an **interpreted language**. This means that the Python interpreter reads through your code, line by line, translates the commands into instructions that your computer's processor can execute, and then these are executed. It also does [garbage collection](https://en.wikipedia.org/wiki/Garbage_collection_(computer_science)), which manages memory usage in your programs for you. As an interpreted language, code is often much easier to write, and development time is much shorter. It is often easier to debug. By contrast, with **compiled languages** (the dominant ones being Fortran, C, and C++), your entire source code is translated into machine code before you ever run it. When you execute your program, it is already in machine code. As a result, compiled code is often much faster than interpreted code. The speed difference depends largely on the task at hand, but there is often over a 100-fold difference.\n",
"\n",
"First, we'll demonstrate the difference between compiled and interpreted languages by looking at a function to sum the elements of an array. Note that Python is [dynamically typed](http://stackoverflow.com/a/34004445/2320823), so the function below works for multiple data types, but the C function works only for [double precision floating point](https://en.wikipedia.org/wiki/Double-precision_floating-point_format) numbers."
]
},
{
"cell_type": "code",
"execution_count": 44,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"17\n"
]
}
],
"source": [
"# Python code to sum an array and print the result to the screen\n",
"print(sum(my_ar))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"```C\n",
"/* C code to sum an array and print the result to the screen */\n",
"\n",
"#include \n",
"\n",
"void sum_array(double a[], int n);\n",
"\n",
"void sum_array(double a[], int n) {\n",
" int i; \n",
" double sum=0;\n",
" for (i = 0; i < n; i++){\n",
" sum += a[i];\n",
" }\n",
" printf(\"%g\\n\", sum);\n",
"}\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The C code won't even execute without another function called `main` to call it. You should notice the difference in complexity of the code. Interpreted code is very often much easier to write!"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Numpy and Scipy use compiled code!\n",
"\n",
"Under the hood, when you call a Numpy or Scipy function, or use one of the methods, the Python interpreter passes the arrays into pre-compiled functions. (They are usually C or Fortran functions.) That means that you get to use an interpreted language with near-compiled speed! We can demonstrate the speed by comparing an explicit sum of elements of an array using a Python `for` loop versus Numpy. We will use the `np.random` module to generate a large array of random numbers. We then use the `%timeit` magic function to time the execution of the sum of the elements of the array."
]
},
{
"cell_type": "code",
"execution_count": 45,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"1.01 ms ± 40.5 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)\n"
]
}
],
"source": [
"# Make array of 10,000 random numbers\n",
"x = np.random.random(10000)\n",
"\n",
"# Sum with Python for loop\n",
"def python_sum(x):\n",
" x_sum = 0.0\n",
" for y in x:\n",
" x_sum += y\n",
" return x_sum\n",
"\n",
"# Test speed\n",
"%timeit python_sum(x)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now we'll do the same test with the Numpy implementation."
]
},
{
"cell_type": "code",
"execution_count": 46,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"7.13 μs ± 150 ns per loop (mean ± std. dev. of 7 runs, 100,000 loops each)\n"
]
}
],
"source": [
"%timeit np.sum(x)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Wow! We went from a millisecond to *micro*seconds!"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Word of advice: use Numpy and Scipy\n",
"\n",
"If you are writing code and you think to yourself, \"This seems like a pretty common things to do,\" there is a good chance the someone really smart has written code to do it. If it's something numerical, there is a good chance it is in Numpy or Scipy. **Use these packages.** Do not reinvent the wheel. It is very rare you can beat them for performance, error checking, etc.\n",
"\n",
"Furthermore, Numpy and Scipy are very well tested (and we learned the importance of that in the test-driven development lessons). In general, you do not need to write unit tests for well-established packages. Obviously, if you use Numpy or Scipy within your own functions, you still need to test what you wrote."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Computing environment"
]
},
{
"cell_type": "code",
"execution_count": 47,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Python implementation: CPython\n",
"Python version : 3.12.4\n",
"IPython version : 8.25.0\n",
"\n",
"numpy : 1.26.4\n",
"scipy : 1.13.1\n",
"jupyterlab: 4.0.13\n",
"\n"
]
}
],
"source": [
"%load_ext watermark\n",
"%watermark -v -p numpy,scipy,jupyterlab"
]
}
],
"metadata": {
"anaconda-cloud": {},
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.4"
}
},
"nbformat": 4,
"nbformat_minor": 4
}