{ "cells": [ { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [], "source": [ "from datetime import datetime" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Sonification\n", "\n", "In this notebook, we use the miditime package to represent time-series data in sound. We use timidity to play the resulting midi file." ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "60 0 3 200\n", "61 10 4 200\n" ] } ], "source": [ "# A demo script that creates two notes\n", "\n", "from miditime.miditime import MIDITime\n", "# NOTE: this import works at least as of v1.1.3; for older versions or forks of miditime, you may need to use\n", "# from miditime.MIDITime import MIDITime\n", "\n", "# Instantiate the class with a tempo (120bpm is the default) and an output file destination.\n", "mymidi = MIDITime(120, 'myfile.mid')\n", "\n", "# Create a list of notes. Each note is a list: [time, pitch, attack, duration]\n", "midinotes = [\n", " [0, 60, 200, 3], #At 0 beats (the start), Middle C with attack 200, for 3 beats\n", " [10, 61, 200, 4] #At 10 beats (12 seconds from start), C#5 with attack 200, for 4 beats\n", "]\n", "\n", "# Add a track with those notes\n", "mymidi.add_track(midinotes)\n", "\n", "# Output the .mid file\n", "mymidi.save_midi()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Your music file - with just two notes! - is now available for download. From the 'home' page for this notebook, select and then download the file. Double-click it on your machine. Whatever program you use for music on your own computer should be able to play it. A mid file is not, in itself, music (in the sense that an mp3 is a compressed representation of the music. It is more akin to the digital representation of a score that the computer plays with a default instrument (often, a piano)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Writing music\n", "\n", "Play with that script now, and add more notes. Try something simple - the notes for ‘Baa Baa Black Sheep’ are:\n", "\n", "```\n", "D, D, A, A, B, B, B, B, A\n", "Baa, Baa, black, sheep, have, you, any, wool?\n", "```\n", "\n", "Middle C is '60' - this [chart](https://web.archive.org/web/20171211192102/http://www.electronics.dit.ie/staff/tscarff/Music_technology/midi/midi_note_numbers_for_octaves.htm) show the numerical representation of each note on the 88 key keyboard." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Let's represent some data\n", "The miditime package is written with time series data in mind:\n", "\n", "```\n", "my_data = [\n", " {'event_date': , 'magnitude': 3.4},\n", " {'event_date': , 'magnitude': 3.2},\n", " {'event_date': , 'magnitude': 3.6},\n", " {'event_date': , 'magnitude': 3.0},\n", " {'event_date': , 'magnitude': 5.6},\n", " {'event_date': , 'magnitude': 4.0}\n", "]\n", "```\n", "Here we're dealing with earthquake data, and it's in json notation. The `datetime object` is a particular way of formatting the date: `datetime(1753,6,8)` would be June 8 1753. Following the example that [miditime provides](https://github.com/cirlabs/miditime), we can build up a sonification of this earthquake data like this:\n" ] }, { "cell_type": "code", "execution_count": 46, "metadata": {}, "outputs": [], "source": [ "# tempo (120bpm is the default), an output file destination, \n", "# the number of seconds you want to represent a year in the final song (default is 5 sec/year),\n", "# the base octave (C5 is middle C, so the default is 5, \n", "# and how many octaves you want your output to range over (default is 1)\n", "from miditime.miditime import MIDITime\n", "mymidi = MIDITime(120, 'second-example.mid', 0.1, 5, 1)" ] }, { "cell_type": "code", "execution_count": 47, "metadata": {}, "outputs": [], "source": [ "my_data = [\n", " {'event_date': datetime(1792,6,8), 'magnitude': 3.4},\n", " {'event_date': datetime(1800,3,4), 'magnitude': 3.2},\n", " {'event_date': datetime(1810,1,16), 'magnitude': 3.6},\n", " {'event_date': datetime(1812,8,23), 'magnitude': 3.0},\n", " {'event_date': datetime(1813,10,10), 'magnitude': 5.6},\n", " {'event_date': datetime(1824,1,5), 'magnitude': 4.0}\n", "]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we convert those dates into an integer. Oddly enough, this is done by defining time since 'epoch'. This epoch date is Jan 1 1970. The reasons why have to do with the evolution of [unix](https://stackoverflow.com/questions/1090869/why-is-1-1-1970-the-epoch-time). For dates before 1970, we end up with a negative number, but this is not a problem.\n", "\n", "First we convert the date so that it is expressed with reference to the epoch:" ] }, { "cell_type": "code", "execution_count": 48, "metadata": {}, "outputs": [], "source": [ "my_data_epoched = [{'days_since_epoch': mymidi.days_since_epoch(d['event_date']), 'magnitude': d['magnitude']} for d in my_data]" ] }, { "cell_type": "code", "execution_count": 49, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[{'days_since_epoch': -64854.0, 'magnitude': 3.4},\n", " {'days_since_epoch': -62029.0, 'magnitude': 3.2},\n", " {'days_since_epoch': -58424.0, 'magnitude': 3.6},\n", " {'days_since_epoch': -57474.0, 'magnitude': 3.0},\n", " {'days_since_epoch': -57061.0, 'magnitude': 5.6},\n", " {'days_since_epoch': -53322.0, 'magnitude': 4.0}]" ] }, "execution_count": 49, "metadata": {}, "output_type": "execute_result" } ], "source": [ "my_data_epoched\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "And then we convert that integer into something reasonable for music. Per the miditime package:\n", "\n", "> Convert your integer date/time to something reasonable for a song. For example, at 120 beats per minute, you'll need to scale the data down a lot to avoid a very long song if your data spans years. This uses the seconds_per_year attribute you set at the top, so if your date is converted to something other than days you may need to do your own conversion. But if your dataset spans years and your dates are in days (with fractions is fine), use the beat() helper method." ] }, { "cell_type": "code", "execution_count": 50, "metadata": {}, "outputs": [], "source": [ "my_data_timed = [{'beat': mymidi.beat(d['days_since_epoch']), 'magnitude': d['magnitude']} for d in my_data_epoched]" ] }, { "cell_type": "code", "execution_count": 51, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[{'beat': -35.51, 'magnitude': 3.4},\n", " {'beat': -33.97, 'magnitude': 3.2},\n", " {'beat': -31.99, 'magnitude': 3.6},\n", " {'beat': -31.47, 'magnitude': 3.0},\n", " {'beat': -31.24, 'magnitude': 5.6},\n", " {'beat': -29.2, 'magnitude': 4.0}]" ] }, "execution_count": 51, "metadata": {}, "output_type": "execute_result" } ], "source": [ "my_data_timed\n", "\n" ] }, { "cell_type": "code", "execution_count": 52, "metadata": {}, "outputs": [], "source": [ "# Get the earliest date in your series so you can set that to 0 in the MIDI:\n", "start_time = my_data_timed[0]['beat']" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Finally, we define how the data get mapped against the pitch. If the data were percentages ranging from 0.01 (ie 1%) to 0.99 (99%), we would scale_pct below between 0 and 1. If you weren’t dealing with percentages, you’d use your lowest value and your highest value (say if for instance your data were counts of pottery over time)" ] }, { "cell_type": "code", "execution_count": 53, "metadata": {}, "outputs": [], "source": [ "# Set up some functions to scale your other variable (magnitude in our case) to match your desired mode/key and octave range. \n", "#There are helper methods to assist this scaling, very similar to a charting library like D3.\n", "# You can choose a linear or logarithmic scale.\n", "\n", "def mag_to_pitch_tuned(magnitude):\n", " # Where does this data point sit in the domain of your data? (I.E. the min magnitude is 3, the max in 5.6). In this case the optional 'True' means the scale is reversed, so the highest value will return the lowest percentage.\n", " scale_pct = mymidi.linear_scale_pct(3, 5.7, magnitude)\n", "\n", " # Another option: Linear scale, reverse order\n", " # scale_pct = mymidi.linear_scale_pct(3, 5.7, magnitude, True)\n", "\n", " # Another option: Logarithmic scale, reverse order\n", " # scale_pct = mymidi.log_scale_pct(3, 5.7, magnitude, True)\n", "\n", " # Pick a range of notes. This allows you to play in a key.\n", " c_major = ['C', 'D', 'E', 'F', 'G', 'A', 'B']\n", "\n", " #Find the note that matches your data point\n", " note = mymidi.scale_to_note(scale_pct, c_major)\n", "\n", " #Translate that note to a MIDI pitch\n", " midi_pitch = mymidi.note_to_midi_pitch(note)\n", "\n", " return midi_pitch" ] }, { "cell_type": "code", "execution_count": 54, "metadata": {}, "outputs": [], "source": [ "# Now build the note list\n", "note_list = []\n", "\n", "for d in my_data_timed:\n", " note_list.append([\n", " d['beat'] - start_time,\n", " mag_to_pitch_tuned(d['magnitude']),\n", " 100, # velocity\n", " 1 # duration, in beats\n", " ])" ] }, { "cell_type": "code", "execution_count": 55, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[[0.0, 62, 100, 1],\n", " [1.5399999999999991, 60, 100, 1],\n", " [3.5199999999999996, 62, 100, 1],\n", " [4.039999999999999, 60, 100, 1],\n", " [4.27, 71, 100, 1],\n", " [6.309999999999999, 64, 100, 1]]" ] }, "execution_count": 55, "metadata": {}, "output_type": "execute_result" } ], "source": [ "note_list" ] }, { "cell_type": "code", "execution_count": 56, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "62 0.0 1 100\n", "60 1.5399999999999991 1 100\n", "62 3.5199999999999996 1 100\n", "60 4.039999999999999 1 100\n", "71 4.27 1 100\n", "64 6.309999999999999 1 100\n" ] } ], "source": [ "# Add a track with those notes\n", "mymidi.add_track(note_list)\n", "\n", "# Output the .mid file\n", "mymidi.save_midi()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Download the file, and open in your computer's music program. How does it sound? It might not be 'music' - but that's not the point.\n", "\n", "Try feeding actual archaeological data that you've retrieved from other exercises into this program. Save each result under a different name; you can then begin to mix the data as unique voices using something like GarageBand. " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For other approaches to sonification, please see [this tutorial](https://programminghistorian.org/en/lessons/sonification). For other creative uses of sonification by an undergraduate, see [Daniel Ruten's Sonic Word Clouds](https://programminghistorian.org/posts/sonic-word-clouds)." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.5" } }, "nbformat": 4, "nbformat_minor": 2 }