{ "cells": [ { "cell_type": "code", "execution_count": 1, "metadata": { "collapsed": false, "deletable": true, "editable": true }, "outputs": [], "source": [ "import numpy as np\n", "%matplotlib inline\n", "import matplotlib.pyplot as plt" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "## Q1: integrating a sampled vs. analytic function\n", "\n", "Numerical integration methods work differently depending on whether you have the analytic function available (in which case you can evaluate it freely at any point you please) or if it is sampled for you.\n", "\n", "Create a function to integrate, and use NumPy to sample it at $N$ points. Compare the answer you get from integrating the function directly (using `integrate.quad` to the integral of the sampled function (using `integrate.simps`).\n", "\n", "To get a better sense of the accuracy, vary $N$, and look at how the error changes (if you plot the error vs. $N$, you can measure the _convergence_)." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true, "deletable": true, "editable": true }, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Q2: Condition number\n", "\n", "For a linear system, ${\\bf A x} = {\\bf b}$, we can only solve for $x$ if the determinant of the matrix ${\\bf A}$ is non-zero. If the determinant is zero, then we call the matrix _singular_. The _condition number_ of a matrix is a measure of how close we are to being singular. The formal definition is:\n", "\\begin{equation}\n", "\\mathrm{cond}({\\bf A}) = \\| {\\bf A}\\| \\| {\\bf A}^{-1} \\|\n", "\\end{equation}\n", "But we can think of it as a measure of how much ${\\bf x}$ would change due to a small change in ${\\bf b}$. 
A large condition number means that our solution for ${\\bf x}$ could be inaccurate.\n", "\n", "A _Hilbert matrix_ has $H_{ij} = (i + j - 1)^{-1}$ (indexing from 1), and is known to have a large condition number. Here's a routine to generate a Hilbert matrix:" ] }, { "cell_type": "code", "execution_count": 1, "metadata": { "collapsed": true }, "outputs": [], "source": [ "def hilbert(n):\n", "    \"\"\" return a Hilbert matrix, H_ij = (i + j - 1)^{-1} \"\"\"\n", "\n", "    H = np.zeros((n, n), dtype=np.float64)\n", "\n", "    for i in range(1, n+1):\n", "        for j in range(1, n+1):\n", "            H[i-1, j-1] = 1.0/(i + j - 1.0)\n", "    return H" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's solve ${\\bf Hx} = {\\bf b}$. Create a linear system by picking an ${\\bf x}$ and generating a ${\\bf b}$ by multiplying by the matrix ${\\bf H}$. Then use the `scipy.linalg.solve()` function to recover ${\\bf x}$. Compute the error in ${\\bf x}$ as a function of the size of the matrix.\n", "\n", "You won't need a large matrix; $n \\sim 13$ or so will start showing big errors.\n", "\n", "You can compute the condition number with `numpy.linalg.cond()`.\n", "\n", "There are methods that can do a better job with nearly-singular matrices. Take a look at `scipy.linalg.lstsq()`, for example." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "## Q3: damped driven pendulum and chaos" ] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "There is a large class of ODE integration methods available through the `scipy.integrate.ode()` interface. Not all of them provide _dense output_ -- most will just give you the value at the end of the integration.\n", "\n", "The explicit `dopri5` integrator will store the solution at intermediate points and allow you to access them. We'll use that here. 
You'll need to use the `set_solout()` method to define a function that takes the current integration solution and stores it.\n", "\n", "The damped driven pendulum obeys the following equations:\n", "$$\\dot{\\theta} = \\omega$$\n", "$$\\dot{\\omega} = -q \\omega - \\sin \\theta + b \\cos \\omega_d t$$\n", "Here, $\\theta$ is the angle of the pendulum from vertical and $\\omega$ is the angular velocity. $q$ is a damping coefficient, $b$ is a forcing amplitude, and $\\omega_d$ is a driving frequency.\n", "\n", "Choose $q = 0.5$ and $\\omega_d = 2/3$.\n", "\n", "Integrate the system for different values of $b$ (start with $b = 0.9$ and increase in steps of $0.05$), and plot the results ($\\theta$ vs. $t$). Here's an RHS function to get you started:" ] }, { "cell_type": "code", "execution_count": 4, "metadata": { "collapsed": true, "deletable": true, "editable": true }, "outputs": [], "source": [ "def rhs(t, Y, q, omega_d, b):\n", "    \"\"\" damped driven pendulum system derivatives.  Here, Y = (theta, omega) are\n", "        the solution variables. \"\"\"\n", "    f = np.zeros_like(Y)\n", "\n", "    f[0] = Y[1]\n", "    f[1] = -q*Y[1] - np.sin(Y[0]) + b*np.cos(omega_d*t)\n", "\n", "    return f" ] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "Note that the pendulum can flip over, giving values of $\\theta$ outside of $[-\\pi, \\pi]$. The following function can be used to restrict it back to $[-\\pi, \\pi]$ for plotting."
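, "\n",
"The restriction function itself appears in the next code cell. The integration driver might be sketched like this, as one possible shape rather than the required solution (the name `integrate_pendulum`, the zero initial angular velocity, and the `tmax` horizon are assumptions; the `rhs()` from above is repeated so the example stands alone):\n",
"\n",
"```python\n",
"import numpy as np\n",
"from scipy.integrate import ode\n",
"\n",
"def rhs(t, Y, q, omega_d, b):\n",
"    # damped driven pendulum derivatives; Y = (theta, omega)\n",
"    f = np.zeros_like(Y)\n",
"    f[0] = Y[1]\n",
"    f[1] = -q*Y[1] - np.sin(Y[0]) + b*np.cos(omega_d*t)\n",
"    return f\n",
"\n",
"def integrate_pendulum(theta0, b, q=0.5, omega_d=2.0/3.0, tmax=100.0):\n",
"    # collect the solution at every accepted step via set_solout()\n",
"    t_out, y_out = [], []\n",
"\n",
"    def solout(t, y):\n",
"        t_out.append(t)\n",
"        y_out.append(list(y))\n",
"\n",
"    r = ode(rhs).set_integrator('dopri5')\n",
"    r.set_solout(solout)   # set before set_initial_value()\n",
"    r.set_initial_value([theta0, 0.0], 0.0)\n",
"    r.set_f_params(q, omega_d, b)\n",
"    r.integrate(tmax)\n",
"    return np.array(t_out), np.array(y_out)\n",
"```"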
] }, { "cell_type": "code", "execution_count": 5, "metadata": { "collapsed": true, "deletable": true, "editable": true }, "outputs": [], "source": [ "def restrict_theta(theta):\n", " \"\"\" convert theta to be restricted to lie between -pi and pi\"\"\"\n", " tnew = theta + np.pi\n", " tnew += -2.0*np.pi*np.floor(tnew/(2.0*np.pi))\n", " tnew -= np.pi\n", " return tnew" ] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "Write a function that takes an initial angle, $\\theta_0$, and integrates the system and returns the solution. For the parameters that are part of the `rhs()` function, you'll need to use the `set_f_params()` method. \n", "\n", "Some values of $b$ will show very non-periodic behavior. To see chaos, integrate two different pendula that are the same except for $\\theta_0$, with only a small difference between then (like 60 degrees and 60.0001 degrees. You'll see the solutions track for a while, but then diverge." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true, "deletable": true, "editable": true }, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Q4: Let's find the errors on our fit\n", "\n", "We looked at fits, but not what the errors are on the fit. Look at `scipy.optimize.curve_fit()`. This is a simplified wrapper on the least squares fitting. It can return the convariance matrix, the diagonals of which can give the error of the fit for the parameters. \n", "\n", "Make up some data that models a non-linear function (by introducing some random noise) and perform a fit and find the errors on the parameters." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.5.3" } }, "nbformat": 4, "nbformat_minor": 2 }