{ "cells": [ { "cell_type": "markdown", "id": "e9ba95ee-0a54-4b65-b8a7-c75efadd3b8e", "metadata": { "tags": [] }, "source": [ "# Introduction to Thermodynamics and Statistical Physics" ] }, { "cell_type": "markdown", "id": "e2f0dd81-5bf0-4288-9ee0-ca17e6838d57", "metadata": {}, "source": [ "In this lecture, we are going to discuss entropy. In particular, we'll cover:\n", "\n", "* Entropy of mixing.\n", "* Entropy and probability.\n", "* Alternative Thermodynamics Potentials (internal energy and enthalpy).\n" ] }, { "cell_type": "markdown", "id": "0ff5e6fe-6d9a-4189-8300-71981631b0c5", "metadata": { "tags": [] }, "source": [ "## Entropy of mixing" ] }, { "cell_type": "markdown", "id": "fade2b75-8954-41ec-8168-a06c2c11ed81", "metadata": {}, "source": [ "Let us know consider 2 different ideal gases. They are initially in separate vessels with volume $x V$ an $(1-x) V$ respectively, where $0" ] }, "metadata": { "needs_background": "light" }, "output_type": "display_data" } ], "source": [ "import numpy as np\n", "import matplotlib.pyplot as plt\n", "\n", "x = np.arange(0.001,1.001,0.002)\n", "S = -(x*np.log(x)+(1-x)*np.log(1-x))\n", "\n", "plt.figure(figsize=[4,3],dpi=200)\n", "plt.plot(x,S)\n", "plt.axhline(np.log(2),linestyle='--',color='grey',alpha=0.5)\n", "plt.text(0,np.log(2)+0.03,\"ln(2)\")\n", "plt.ylim(0.,0.8)\n", "plt.xlabel(\"x\")\n", "plt.ylabel(r\"$\\Delta$S / ($k_{\\rm B} T)$\")\n", "plt.tight_layout()\n", "plt.savefig(\"Figures/Entropy_Mixing_Plot.png\")\n", "plt.show()" ] }, { "cell_type": "markdown", "id": "1a506a9c-6495-46cf-a819-9075b36946aa", "metadata": {}, "source": [ "From this, it is clear that the maximum change in entropy occurs when $x=0.5$. We can also arrive at this same result by considering the microstates for the gases. Before mixing, we know that gas 1 is only in the first vessel, while gas 2 is only in the second vessel. After mixing, the number of microstates available to each particle increases by 2 - they can either be in the first vessel, or the second vessel. As such, the statistical weight, $\\Omega$, must be multipled by $2^N$ to account for these new microstates. This means that \n", "$$\n", " \\Delta S = k_{\\rm B} \\ln 2^N = N k_{\\rm B} \\ln 2\n", "$$\n" ] }, { "cell_type": "markdown", "id": "1642744e-54e4-45b3-b310-fd1edabedb62", "metadata": {}, "source": [ "## Combining Entropy & Probability" ] }, { "cell_type": "markdown", "id": "67383572-5541-4aa0-82e3-47c6298ffa5d", "metadata": {}, "source": [ "Consider again $S=k_{\\rm B} \\ln \\Omega$. How does the entropy account for microstates? Let's do an example to check.\n", "\n", "Imagine we have a system which has 5 macrostates (that is, these states are easily distinguishable by measuring some macrovariable such as energy), each of which is equally likely to occur. This means that\n", "$$\n", " S=k_{\\rm B} \\ln 5\n", "$$\n", "Now assume that each macrostate has 3 microstates (states which we cannot distinguish, so associated perhaps with particle positions or momenta) associated with it, which are also equally likely to occur. This means that there is an additional entropy associated with each of these microstates, $S_{\\rm micro} = k_{\\rm B} \\ln 3$.\n", "\n", "There are a total of 15 unique states the system can be in (5 macro, 3 micro for each macro). The total entropy of the system is thus given by \n", "$$\n", " S_{\\rm total}=k_{\\rm B} \\ln 15\n", "$$\n", "which can be decomposed into\n", "$$\n", " S_{\\rm total}=k_{\\rm B} \\ln 5+k_{\\rm B} \\ln 3\n", "$$\n", "This means that we can write\n", "$$\n", " S_{\\rm total} = S+S_{\\rm micro}\n", "$$\n", "The reasons for doing this will become clear in a second.\n", "\n", "Now, consider a system which has a total of N equally-likely microstates. These microstates are subdivided into groups corresponding to macrostates, such that a macrostate $i$ has a total of $n_i$ microstates associated with it. We require that\n", "$$\n", " \\sum _i n_i = N\n", "$$\n", "(That is, summing together the number of microstates in each macrostate is the same as the total number of microstates available). The probability that the system is in the $i$th macrostate is then given by\n", "$$\n", " P_i = \\frac{n_i}{N}.\n", "$$\n", "The total entropy of this system is\n", "$$\n", " S_{\\rm total} = k_{\\rm B} \\ln N\n", "$$\n", "The problem we are faced with is that, quite often, $N$ is difficult to measure. For examples like dipoles oriented in magnetic fields, we have a rough idea (it's of order $2^M$ for M particles), but this is only for a very specific example. In general, we have\n", "$$\n", " S_{\\rm total} = S+S_{\\rm micro}\n", "$$\n", "where $S$ is the entropy due to the number of macrostates while $S_{\\rm micro}$ is an unknown entropy contribution to the number of available microstates. This is given by\n", "$$\n", " S_{\\rm micro} = = \\sum_i P_i S_i\n", "$$\n", "Given this, we can do the following\n", "\\begin{align}\n", " S &= S_{\\rm total} - S_{\\rm micro}\\\\\n", " &= k_{\\rm B} (\\ln N - \\sum_i P_i \\ln n_i)\\\\\n", " &= k_{\\rm B} \\sum_i P_i( \\ln N - \\ln n_i)\\\\\n", "\\end{align}\n", "Now, $\\ln (N) - \\ln (n_i) = \\ln(N/n_i) = \\ln(1/P_i)= -\\ln(P_i)$. This gives us Gibbs' definition of Entropy\n", "$$\n", " S = - k_{\\rm B} \\sum_i P_i \\ln(P_i)\n", "$$" ] }, { "cell_type": "code", "execution_count": null, "id": "520029af-ac90-4f78-b325-e25f180f00f6", "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.10" } }, "nbformat": 4, "nbformat_minor": 5 }