{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"hideCode": false,
"hidePrompt": false
},
"source": [
"# Data Loading and Cleaning"
]
},
{
"cell_type": "markdown",
"metadata": {
"hideCode": false,
"hidePrompt": false
},
"source": [
"## Status\n",
"\n",
"It's now 2020.\n",
"Last year we didn't do a countdown.\n",
"This year, it's not A to Z, but the top 2020 songs of all time.\n",
"It just started this morning, \n",
"but I bet we can fit that into the same format."
]
},
{
"cell_type": "markdown",
"metadata": {
"hideCode": false,
"hidePrompt": false
},
"source": [
"## Overview\n",
"\n",
"Everyone of these countdowns is just a bit different,\n",
"and they are a bit of a surprise as they evolve.\n",
"So the data collection and clean up is usually evolving under time pressure.\n",
"But as we're up to year three, patterns emerge\n",
"and I've managed to clean some of it up.\n",
"\n",
"What follows is the latest start of the art,\n",
"with a bit of clutter from the accumulated countdown data."
]
},
{
"cell_type": "markdown",
"metadata": {
"hideCode": false,
"hidePrompt": false
},
"source": [
"## Setup\n",
"\n",
"Under the covers it mostly a combo of [requests](http://docs.python-requests.org/en/master/) and [lxml](http://lxml.de/) for web-scraping and [pandas](https://pandas.pydata.org/) for data munging. Before we get started, set up the imports."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {
"collapsed": true,
"hideCode": false,
"hidePrompt": false
},
"outputs": [],
"source": [
"%matplotlib inline\n",
"from IPython.display import display, HTML\n",
"import requests \n",
"from lxml import html\n",
"import pandas as pd\n",
"import numpy as np\n",
"from datetime import date, datetime, time\n",
"from os import path, mkdir\n",
"import re\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"hideCode": false,
"hidePrompt": false
},
"source": [
"### Set up cache/data directories\n",
"\n",
"Whenever possible, we'll cache the data.\n",
"Partially for speed when rerunning the notebooks during the countdown,\n",
"but also to make the notebooks reproducible later,\n",
"if the data ends up moving."
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {
"collapsed": false,
"hideCode": false,
"hidePrompt": false
},
"outputs": [],
"source": [
"cache_dir = './cache'\n",
"playlist_cache_dir = path.join(cache_dir, 'playlists')\n",
"a2z_cache_dir = path.join(cache_dir, 'a2z')\n",
"a2z70s_cache_dir = path.join(cache_dir, 'a2z70s')\n",
"a2z80s_cache_dir = path.join(cache_dir, 'a2z80s')\n",
"xpn8080_cache_idir = path.join(cache_dir, 'xpn2020')\n",
"musicbrainz_cache_dir = path.join(cache_dir, 'musicbrainz')\n",
"data_dir = './data'\n",
"\n",
"for d in (cache_dir, playlist_cache_dir, a2z_cache_dir, a2z70s_cache_dir,\n",
" a2z80s_cache_dir, data_dir, musicbrainz_cache_dir):\n",
" if not path.exists(d): mkdir(d)"
]
},
{
"cell_type": "markdown",
"metadata": {
"hideCode": false,
"hidePrompt": false
},
"source": [
"## Generic XPN Playlist scraping\n",
"\n",
"Originally I tended to rely on the one-off countdown pages for playlists.\n",
"But eventually I ended up using the generic playlist at [http://xpn.org/playlists/xpn-playlist](http://xpn.org/playlists/xpn-playlist).\n",
"I've done this enough, it's past time to turn it into something reusable."
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {
"collapsed": false,
"hideCode": false,
"hidePrompt": false
},
"outputs": [],
"source": [
"def fetch_daily_playlist(day, cache_dir=None, verbose = False):\n",
" \"\"\"\n",
" Fetches the XPN playlist for a given date\n",
" \n",
" Args:\n",
" day (datetime.date) : The day to fetch the playlist for\n",
" cache_dir (string) : Path to the cache directory, or None to avoid caching\n",
" \n",
" Returns:\n",
" DataFrame containing Artist and Title as Strings and Airtime as Timestamp\n",
" \"\"\"\n",
" songs = pd.DataFrame(None, columns=['Artist', 'Title', 'Air Time'])\n",
" if cache_dir is not None:\n",
" cache_file = path.join(cache_dir, \"%04d-%02d-%02d.csv\" % \\\n",
" (day.year, day.month, day.day))\n",
" if cache_file is not None and path.exists(cache_file):\n",
" songs = pd.read_csv(cache_file, encoding='utf-8')\n",
" songs['Air Time'] = pd.to_datetime(songs['Air Time'], errors='coerce')\n",
" if verbose: print \"Got %d rows from %s\" % (len(songs), cache_file)\n",
" else:\n",
" day_s = '%02d-%02d-%04d' % (day.month, day.day, day.year)\n",
" page = requests.post('https://xpn.org/playlists/xpn-playlist',\n",
" data = {'playlistdate': day_s})\n",
" if verbose: print \"fetching %s returned status %s\" % (day_s, page.status_code)\n",
" \n",
" # play list pages claim to be utf-8, but the rare non-ascii character\n",
" # is always latin-1\n",
" #tree = html.fromstring(page.content.decode('latin-1'))\n",
" tree = html.fromstring(page.content)\n",
" tracks = tree.xpath('//h3/a/text()')\n",
" # not all rows are tracks, some are membership callouts\n",
" # but real tracks start with times and are formatted\n",
" # HH:MM [am|pm] Artist - Title\n",
" # Note that I've seen titles with embedded dashes,\n",
" # but so far no artist names with them. This may be luck.\n",
" # Special programs like World Cafe, Echos, ...\n",
" # also start with an air time, but don't have useful track info\n",
" # but those list the program inside bars\n",
" # eg |World Cafe| - \"Wednesday 11-2-2016 Hour 2, Part 7\"\n",
" date_regex = re.compile(\"^\\d{2}:\\d{2}\\s\")\n",
" line_count= 0\n",
" track_count = 0\n",
" for track in tracks:\n",
" line_count += 1\n",
" if date_regex.match(track) and track[9:10] != '|':\n",
" (artist, title) = track[9:].split(' - ', 1)\n",
" dt = datetime.strptime(track[:8], '%I:%M %p')\n",
" air_time = datetime.combine(day, dt.time())\n",
" if verbose: print \"adding %s %s %s\" % (artist, title, air_time)\n",
" songs = songs.append({'Artist': artist,\n",
" 'Title': title,\n",
" 'Air Time': air_time},\n",
" ignore_index = True)\n",
" if verbose: print \"size = %d\" % len(songs)\n",
" track_count += 1\n",
" \n",
" if verbose: print 'read %d line and added %d tracks' % (line_count, track_count)\n",
" # Drop any duplicates, which are not uncommon\n",
" songs = songs.drop_duplicates()\n",
" if cache_file is not None:\n",
" songs.to_csv(cache_file, index=False, encoding='utf-8')\n",
" if verbose: print 'write %d rows to %s' % (len(songs), cache_file)\n",
" \n",
" return songs\n"
]
},
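{
"cell_type": "markdown",
"metadata": {
"hideCode": false,
"hidePrompt": false
},
"source": [
"As a quick illustration of the parsing above, here's the track-line format the scraper expects. This is a minimal sketch using a made-up sample line, not real playlist data:\n",
"\n",
"```python\n",
"import re\n",
"from datetime import datetime\n",
"\n",
"# hypothetical playlist line in the 'HH:MM am Artist - Title' format\n",
"sample = \"08:02 am Booker T. & The MG's - Time Is Tight\"\n",
"\n",
"# real tracks start with a time, and program names appear inside bars\n",
"is_track = bool(re.match(r\"^\\d{2}:\\d{2}\\s\", sample)) and sample[9:10] != '|'\n",
"\n",
"# split off artist and title, keeping any dashes inside the title intact\n",
"artist, title = sample[9:].split(' - ', 1)\n",
"air = datetime.strptime(sample[:8], '%I:%M %p').time()\n",
"```"
]
},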
{
"cell_type": "code",
"execution_count": 4,
"metadata": {
"collapsed": true,
"hideCode": false,
"hidePrompt": false
},
"outputs": [],
"source": [
"def fetch_playlist(start, end, cache_dir=None):\n",
" \"\"\"\n",
" Fetch all the playlist entries for a range of time.\n",
" \n",
" Args:\n",
" start (datetime.datetime) : The inclusive start time to fetch entries for\n",
" end (datetime.datetime) : The exclusive end time to fetch entries for\n",
" cache_dir (string) : path to the cache directory, or None to avoid caching\n",
" \n",
" Returns:\n",
" Dataframe containing Artist and Title as strings, and Airtime as timestamp\n",
" \"\"\"\n",
" songs = pd.DataFrame(None, columns=['Artist', 'Title', 'Air Time'])\n",
" for day in pd.date_range(start.date(), end.date()):\n",
" songs = songs.append(fetch_daily_playlist(day, cache_dir), ignore_index=True)\n",
" songs = songs[songs['Air Time'] >= start]\n",
" songs = songs[songs['Air Time'] < end]\n",
" # sometimes the playlist entries are duplicated\n",
" song = songs.drop_duplicates()\n",
" songs = songs.sort_values(by = 'Air Time')\n",
" return songs"
]
},
{
"cell_type": "markdown",
"metadata": {
"hideCode": false,
"hidePrompt": false
},
"source": [
"### Load the playlists\n",
"\n",
"Since this is the third time, and I've pulled the data prep into one notebook,\n",
"to save redundancy, there are a few play lists to load."
]
},
{
"cell_type": "markdown",
"metadata": {
"hideCode": false,
"hidePrompt": false
},
"source": [
"#### XPN 2020 \n",
"\n",
"For 2020, XPN is doing a listener curated \"top 2020 songs\" countdown.\n",
"It just started this morning, Thursday December 10 at 8:00 am.\n",
"\n",
"Two twists this year.\n",
"First, like 2018, there is a pause.\n",
"The countdown stopped at numbrer 101, just after midnight on December 17,\n",
"and picked back up for the last 100 at 8am.\n",
"Also, from 6am to 8am, there is a mini-list of One Vote Wonders,\n",
"songs that only got one vote in the XPN 2020 polling.\n"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {
"collapsed": false,
"hideCode": false,
"hidePrompt": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"got 1920 rows\n"
]
}
],
"source": [
"xpn2020 = fetch_playlist(datetime(2020, 12, 10, 8, 0), datetime(2020, 12, 17, 0, 41),\n",
" playlist_cache_dir)\n",
"print \"got %d rows\" % len(xpn2020)"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"got 24 rows\n"
]
}
],
"source": [
"xpn2020_onsies = fetch_playlist(datetime(2020, 12, 17, 6, 0), datetime(2020, 12, 17, 8, 0),\n",
" playlist_cache_dir)\n",
"print \"got %d rows\" % len(xpn2020_onsies)"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"got 100 rows\n"
]
}
],
"source": [
"xpn2020pt2 = fetch_playlist(datetime(2020, 12, 17, 8, 0), datetime(2020, 12, 17, 18, 46),\n",
" playlist_cache_dir)\n",
"print \"got %d rows\" % len(xpn2020pt2)"
]
},
{
"cell_type": "markdown",
"metadata": {
"hideCode": false,
"hidePrompt": false
},
"source": [
"Before going further, let's take a quick look at the data."
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {
"collapsed": false,
"hideCode": false,
"hidePrompt": false
},
"outputs": [
{
"data": {
"text/html": [
"
\n",
" \n",
" \n",
" | \n",
" Artist | \n",
" Title | \n",
" Air Time | \n",
"
\n",
" \n",
" \n",
" \n",
" 194 | \n",
" Booker T. & The MG's | \n",
" Time Is Tight | \n",
" 2020-12-10 08:02:00 | \n",
"
\n",
" \n",
" 193 | \n",
" AC/DC | \n",
" T.N.T. | \n",
" 2020-12-10 08:05:00 | \n",
"
\n",
" \n",
" 192 | \n",
" Peter Frampton | \n",
" Show Me the Way | \n",
" 2020-12-10 08:11:00 | \n",
"
\n",
" \n",
" 191 | \n",
" The Drifters | \n",
" Under The Boardwalk | \n",
" 2020-12-10 08:16:00 | \n",
"
\n",
" \n",
" 190 | \n",
" Adele | \n",
" Rumor Has It | \n",
" 2020-12-10 08:19:00 | \n",
"
\n",
" \n",
"
"
],
"text/plain": [
""
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"HTML(xpn2020.head(5).to_html())"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {
"collapsed": false,
"hideCode": false,
"hidePrompt": false
},
"outputs": [
{
"data": {
"text/html": [
"\n",
" \n",
" \n",
" | \n",
" Artist | \n",
" Title | \n",
" Air Time | \n",
"
\n",
" \n",
" \n",
" \n",
" 2123 | \n",
" Foo Fighters | \n",
" Everlong | \n",
" 2020-12-17 00:08:00 | \n",
"
\n",
" \n",
" 2122 | \n",
" Bob Marley & The Wailers | \n",
" Three Little Birds | \n",
" 2020-12-17 00:12:00 | \n",
"
\n",
" \n",
" 2121 | \n",
" Pearl Jam | \n",
" Alive | \n",
" 2020-12-17 00:17:00 | \n",
"
\n",
" \n",
" 2120 | \n",
" Joni Mitchell | \n",
" Both Sides Now | \n",
" 2020-12-17 00:23:00 | \n",
"
\n",
" \n",
" 2119 | \n",
" Elton John | \n",
" Funeral For A Friend/Love Lies Bleeding | \n",
" 2020-12-17 00:30:00 | \n",
"
\n",
" \n",
"
"
],
"text/plain": [
""
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"HTML(xpn2020.tail(5).to_html())"
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {
"collapsed": false
},
"outputs": [
{
"data": {
"text/html": [
"\n",
" \n",
" \n",
" | \n",
" Artist | \n",
" Title | \n",
" Air Time | \n",
"
\n",
" \n",
" \n",
" \n",
" 123 | \n",
" J. J. Cale | \n",
" After Midnight | \n",
" 2020-12-17 06:03:00 | \n",
"
\n",
" \n",
" 122 | \n",
" Shocking Blue | \n",
" Venus | \n",
" 2020-12-17 06:05:00 | \n",
"
\n",
" \n",
" 121 | \n",
" Ben Folds Five | \n",
" Brick | \n",
" 2020-12-17 06:10:00 | \n",
"
\n",
" \n",
" 120 | \n",
" Sarah McLachlan | \n",
" Sweet Surrender | \n",
" 2020-12-17 06:14:00 | \n",
"
\n",
" \n",
" 119 | \n",
" World Party | \n",
" When The Rainbow Comes | \n",
" 2020-12-17 06:22:00 | \n",
"
\n",
" \n",
"
"
],
"text/plain": [
""
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"HTML(xpn2020_onsies.head(5).to_html())"
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {
"collapsed": false
},
"outputs": [
{
"data": {
"text/html": [
"\n",
" \n",
" \n",
" | \n",
" Artist | \n",
" Title | \n",
" Air Time | \n",
"
\n",
" \n",
" \n",
" \n",
" 104 | \n",
" Missy Elliott | \n",
" Work It | \n",
" 2020-12-17 07:32:00 | \n",
"
\n",
" \n",
" 103 | \n",
" Wu-Tang Clan | \n",
" C.R.E.A.M. (Cash Rules Everything Around Me) | \n",
" 2020-12-17 07:36:00 | \n",
"
\n",
" \n",
" 102 | \n",
" Robert Hazard | \n",
" Escalator Of Life | \n",
" 2020-12-17 07:42:00 | \n",
"
\n",
" \n",
" 101 | \n",
" Rickie Lee Jones | \n",
" Chuck E's In Love | \n",
" 2020-12-17 07:45:00 | \n",
"
\n",
" \n",
" 100 | \n",
" Taj Mahal | \n",
" Ain't Gwine Whistle Dixie | \n",
" 2020-12-17 07:52:00 | \n",
"
\n",
" \n",
"
"
],
"text/plain": [
""
]
},
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"HTML(xpn2020_onsies.tail(5).to_html())"
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {
"collapsed": false
},
"outputs": [
{
"data": {
"text/html": [
"\n",
" \n",
" \n",
" | \n",
" Artist | \n",
" Title | \n",
" Air Time | \n",
"
\n",
" \n",
" \n",
" \n",
" 99 | \n",
" The Doors | \n",
" Light My Fire | \n",
" 2020-12-17 08:02:00 | \n",
"
\n",
" \n",
" 98 | \n",
" Miles Davis | \n",
" All Blues | \n",
" 2020-12-17 08:10:00 | \n",
"
\n",
" \n",
" 97 | \n",
" John Prine | \n",
" Hello In There | \n",
" 2020-12-17 08:24:00 | \n",
"
\n",
" \n",
" 96 | \n",
" Joni Mitchell | \n",
" River | \n",
" 2020-12-17 08:28:00 | \n",
"
\n",
" \n",
" 95 | \n",
" Jason Isbell & The 400 Unit | \n",
" If We Were Vampires | \n",
" 2020-12-17 08:32:00 | \n",
"
\n",
" \n",
"
"
],
"text/plain": [
""
]
},
"execution_count": 12,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"HTML(xpn2020pt2.head(5).to_html())"
]
},
{
"cell_type": "code",
"execution_count": 13,
"metadata": {
"collapsed": false
},
"outputs": [
{
"data": {
"text/html": [
"\n",
" \n",
" \n",
" | \n",
" Artist | \n",
" Title | \n",
" Air Time | \n",
"
\n",
" \n",
" \n",
" \n",
" 4 | \n",
" Bruce Springsteen | \n",
" Born To Run | \n",
" 2020-12-17 18:15:00 | \n",
"
\n",
" \n",
" 3 | \n",
" The Rolling Stones | \n",
" Gimme Shelter | \n",
" 2020-12-17 18:20:00 | \n",
"
\n",
" \n",
" 2 | \n",
" Bob Dylan | \n",
" Like A Rolling Stone | \n",
" 2020-12-17 18:26:00 | \n",
"
\n",
" \n",
" 1 | \n",
" John Lennon | \n",
" Imagine | \n",
" 2020-12-17 18:33:00 | \n",
"
\n",
" \n",
" 0 | \n",
" Bruce Springsteen | \n",
" Thunder Road | \n",
" 2020-12-17 18:39:00 | \n",
"
\n",
" \n",
"
"
],
"text/plain": [
""
]
},
"execution_count": 13,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"HTML(xpn2020pt2.tail(5).to_html())"
]
},
{
"cell_type": "markdown",
"metadata": {
"hideCode": false,
"hidePrompt": false
},
"source": [
"#### XPN 80's A to Z playlist\n",
"\n",
"The 80's playlist started on Wednesday November 28 2018 at 8:00 am.\n",
"As of this writing it just ended yesterday.\n",
"However, something unusual happened this time:\n",
"we took a break from 1am to 6am on 12-09.\n",
"So it's easier to treat it as two playlists,\n",
"and merge them after we calculate durations.\n",
"The alternative is to allow passing in lists of breaks \n",
"to duration calculation, if there were more breaks, we might."
]
},
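{
"cell_type": "markdown",
"metadata": {
"hideCode": false,
"hidePrompt": false
},
"source": [
"The merge described above can be sketched roughly like this. A minimal, hypothetical example (the frames and values are made up for illustration; the real merge happens later, after durations are calculated):\n",
"\n",
"```python\n",
"import pandas as pd\n",
"\n",
"# hypothetical two halves of a playlist split by an overnight break\n",
"part1 = pd.DataFrame({'Title': ['A'], 'Air Time': [pd.Timestamp('2018-11-28 08:00')]})\n",
"part2 = pd.DataFrame({'Title': ['B'], 'Air Time': [pd.Timestamp('2018-12-09 06:00')]})\n",
"\n",
"# concatenate the halves and restore air-time order\n",
"merged = pd.concat([part1, part2], ignore_index=True).sort_values('Air Time')\n",
"```"
]
},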
{
"cell_type": "code",
"execution_count": 14,
"metadata": {
"collapsed": false,
"hideCode": false,
"hidePrompt": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"got 3360 rows\n"
]
}
],
"source": [
"eighties = fetch_playlist(datetime(2018, 11, 28, 8, 0), datetime(2018,12,9, 1, 0),\n",
" playlist_cache_dir)\n",
"print \"got %d rows\" % len(eighties)"
]
},
{
"cell_type": "code",
"execution_count": 15,
"metadata": {
"collapsed": false,
"hideCode": false,
"hidePrompt": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"got 71 rows\n"
]
}
],
"source": [
"eighties2 = fetch_playlist(datetime(2018, 12, 9, 6, 0), datetime(2018, 12, 9, 11, 49), playlist_cache_dir)\n",
"print \"got %d rows\" % len(eighties2)"
]
},
{
"cell_type": "markdown",
"metadata": {
"hideCode": false,
"hidePrompt": false
},
"source": [
"Before going an further, let's take a quick look at what we loaded:"
]
},
{
"cell_type": "code",
"execution_count": 16,
"metadata": {
"collapsed": false,
"hideCode": false,
"hidePrompt": false
},
"outputs": [
{
"data": {
"text/html": [
"\n",
" \n",
" \n",
" | \n",
" Artist | \n",
" Title | \n",
" Air Time | \n",
"
\n",
" \n",
" \n",
" \n",
" 209 | \n",
" Warren Zevon | \n",
" A Certain Girl | \n",
" 2018-11-28 08:01:00 | \n",
"
\n",
" \n",
" 208 | \n",
" U2 | \n",
" A Day Without Me | \n",
" 2018-11-28 08:04:00 | \n",
"
\n",
" \n",
" 207 | \n",
" The Cure | \n",
" A Forest | \n",
" 2018-11-28 08:07:00 | \n",
"
\n",
" \n",
" 206 | \n",
" The Waterboys | \n",
" A Girl Called Johnny | \n",
" 2018-11-28 08:13:00 | \n",
"
\n",
" \n",
" 205 | \n",
" Romeo Void | \n",
" A Girl in Trouble (Is a Temporary Thing) | \n",
" 2018-11-28 08:18:00 | \n",
"
\n",
" \n",
"
"
],
"text/plain": [
""
]
},
"execution_count": 16,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"HTML(eighties.head(5).to_html())"
]
},
{
"cell_type": "code",
"execution_count": 17,
"metadata": {
"collapsed": false,
"hideCode": false,
"hidePrompt": false
},
"outputs": [
{
"data": {
"text/html": [
"\n",
" \n",
" \n",
" | \n",
" Artist | \n",
" Title | \n",
" Air Time | \n",
"
\n",
" \n",
" \n",
" \n",
" 3613 | \n",
" Crowded House | \n",
" World Where You Live | \n",
" 2018-12-09 00:38:00 | \n",
"
\n",
" \n",
" 3612 | \n",
" Captain Sensible | \n",
" Wot | \n",
" 2018-12-09 00:41:00 | \n",
"
\n",
" \n",
" 3611 | \n",
" Eurythmics | \n",
" Would I Lie To You? | \n",
" 2018-12-09 00:44:00 | \n",
"
\n",
" \n",
" 3610 | \n",
" Nik Kershaw | \n",
" Wouldn't It Be Good 12\" | \n",
" 2018-12-09 00:49:00 | \n",
"
\n",
" \n",
" 3609 | \n",
" Black Flag | \n",
" Wound Up | \n",
" 2018-12-09 00:56:00 | \n",
"
\n",
" \n",
"
"
],
"text/plain": [
""
]
},
"execution_count": 17,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"HTML(eighties.tail(5).to_html())"
]
},
{
"cell_type": "code",
"execution_count": 18,
"metadata": {
"collapsed": false,
"hideCode": false,
"hidePrompt": false
},
"outputs": [
{
"data": {
"text/html": [
"\n",
" \n",
" \n",
" | \n",
" Artist | \n",
" Title | \n",
" Air Time | \n",
"
\n",
" \n",
" \n",
" \n",
" 160 | \n",
" Ringo Starr | \n",
" Wrack My Brain | \n",
" 2018-12-09 06:01:00 | \n",
"
\n",
" \n",
" 159 | \n",
" The Fabulous Thunderbirds | \n",
" Wrap It Up | \n",
" 2018-12-09 06:03:00 | \n",
"
\n",
" \n",
" 158 | \n",
" The Police | \n",
" Wrapped Around Your Finger | \n",
" 2018-12-09 06:06:00 | \n",
"
\n",
" \n",
" 157 | \n",
" Bruce Springsteen | \n",
" Wreck On The Highway | \n",
" 2018-12-09 06:12:00 | \n",
"
\n",
" \n",
" 156 | \n",
" Neil Young | \n",
" Wrecking Ball | \n",
" 2018-12-09 06:16:00 | \n",
"
\n",
" \n",
"
"
],
"text/plain": [
""
]
},
"execution_count": 18,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"HTML(eighties2.head(5).to_html())"
]
},
{
"cell_type": "code",
"execution_count": 19,
"metadata": {
"collapsed": false,
"hideCode": false,
"hidePrompt": false
},
"outputs": [
{
"data": {
"text/html": [
"\n",
" \n",
" \n",
" | \n",
" Artist | \n",
" Title | \n",
" Air Time | \n",
"
\n",
" \n",
" \n",
" \n",
" count | \n",
" 3360 | \n",
" 3360 | \n",
" 3360 | \n",
"
\n",
" \n",
" unique | \n",
" 1088 | \n",
" 3288 | \n",
" 3360 | \n",
"
\n",
" \n",
" top | \n",
" Bruce Springsteen | \n",
" Heartbeat | \n",
" 2018-12-08 04:08:00 | \n",
"
\n",
" \n",
" freq | \n",
" 50 | \n",
" 3 | \n",
" 1 | \n",
"
\n",
" \n",
" first | \n",
" | \n",
" | \n",
" 2018-11-28 08:01:00 | \n",
"
\n",
" \n",
" last | \n",
" | \n",
" | \n",
" 2018-12-09 00:56:00 | \n",
"
\n",
" \n",
"
"
],
"text/plain": [
""
]
},
"execution_count": 19,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"HTML(eighties.describe(include='all', percentiles=[]).to_html(na_rep=''))"
]
},
{
"cell_type": "markdown",
"metadata": {
"hideCode": false,
"hidePrompt": false
},
"source": [
"#### 80s Leftovers\n",
"\n",
"In 2016, there was a follow on \"leftovers\" list for parentheticals,\n",
"numbers and other random non-alphabeticals.\n",
"In 2018, the playlist transitioned right into the leftovers.\n",
"But since it doesn't align with any of the comparisons,\n",
"I'm going to treat it separately for now."
]
},
{
"cell_type": "code",
"execution_count": 20,
"metadata": {
"collapsed": false,
"hideCode": false,
"hidePrompt": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"got 42 rows\n"
]
}
],
"source": [
"eighties_leftovers = fetch_playlist(datetime(2018, 12, 9, 11, 50), datetime(2018, 12, 9, 15, 0),\n",
" playlist_cache_dir)\n",
"print \"got %d rows\" % len(eighties_leftovers)"
]
},
{
"cell_type": "code",
"execution_count": 21,
"metadata": {
"collapsed": false,
"hideCode": false,
"hidePrompt": false
},
"outputs": [
{
"data": {
"text/html": [
"\n",
" \n",
" \n",
" | \n",
" Artist | \n",
" Title | \n",
" Air Time | \n",
"
\n",
" \n",
" \n",
" \n",
" 89 | \n",
" Minutemen | \n",
" #1 Hit Song | \n",
" 2018-12-09 11:50:00 | \n",
"
\n",
" \n",
" 88 | \n",
" The Blow Monkeys with Curtis Mayfield | \n",
" (Celebrate) The Day After You | \n",
" 2018-12-09 11:52:00 | \n",
"
\n",
" \n",
" 87 | \n",
" Ministry | \n",
" (Every Day Is) Halloween | \n",
" 2018-12-09 12:00:00 | \n",
"
\n",
" \n",
" 86 | \n",
" Cutting Crew | \n",
" (I Just) Died in Your Arms | \n",
" 2018-12-09 12:07:00 | \n",
"
\n",
" \n",
" 85 | \n",
" Joan Armatrading | \n",
" (I Love It When You) Call Me Names | \n",
" 2018-12-09 12:12:00 | \n",
"
\n",
" \n",
"
"
],
"text/plain": [
""
]
},
"execution_count": 21,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"HTML(eighties_leftovers.head(5).to_html())"
]
},
{
"cell_type": "code",
"execution_count": 22,
"metadata": {
"collapsed": false,
"hideCode": false,
"hidePrompt": false
},
"outputs": [
{
"data": {
"text/html": [
"\n",
" \n",
" \n",
" | \n",
" Artist | \n",
" Title | \n",
" Air Time | \n",
"
\n",
" \n",
" \n",
" \n",
" count | \n",
" 42 | \n",
" 42 | \n",
" 42 | \n",
"
\n",
" \n",
" unique | \n",
" 41 | \n",
" 42 | \n",
" 42 | \n",
"
\n",
" \n",
" top | \n",
" U2 | \n",
" (It's Not Me) Talking | \n",
" 2018-12-09 13:51:00 | \n",
"
\n",
" \n",
" freq | \n",
" 2 | \n",
" 1 | \n",
" 1 | \n",
"
\n",
" \n",
" first | \n",
" | \n",
" | \n",
" 2018-12-09 11:50:00 | \n",
"
\n",
" \n",
" last | \n",
" | \n",
" | \n",
" 2018-12-09 14:57:00 | \n",
"
\n",
" \n",
"
"
],
"text/plain": [
""
]
},
"execution_count": 22,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"HTML(eighties_leftovers.describe(include='all', percentiles=[]).to_html(na_rep=''))"
]
},
{
"cell_type": "markdown",
"metadata": {
"hideCode": false,
"hidePrompt": false
},
"source": [
"#### XPN 70's A to Z playlist\n",
"\n",
"The 70s AtoZ started at 6:00 am on Nov 29 2107,\n",
"and ended at 7:00 pm on Dec 12 2017.\n"
]
},
{
"cell_type": "code",
"execution_count": 23,
"metadata": {
"collapsed": false,
"hideCode": false,
"hidePrompt": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"got 4157 rows\n"
]
}
],
"source": [
"seventies = fetch_playlist(datetime(2017, 11, 29, 6, 0), datetime(2017, 12, 12, 19, 0), playlist_cache_dir)\n",
"\n",
"# Cover what looks like a Free at Noon slid into the play list\n",
"seventies = seventies[seventies['Title'] != 'The Runner']\n",
"\n",
"print \"got %d rows\" % len(seventies)"
]
},
{
"cell_type": "code",
"execution_count": 24,
"metadata": {
"collapsed": false,
"hideCode": false,
"hidePrompt": false
},
"outputs": [
{
"data": {
"text/html": [
"\n",
" \n",
" \n",
" | \n",
" Artist | \n",
" Title | \n",
" Air Time | \n",
"
\n",
" \n",
" \n",
" \n",
" 219 | \n",
" Steeleye Span | \n",
" A Calling-On Song | \n",
" 2017-11-29 06:02:00 | \n",
"
\n",
" \n",
" 218 | \n",
" Joni Mitchell | \n",
" A Case Of You | \n",
" 2017-11-29 06:03:00 | \n",
"
\n",
" \n",
" 217 | \n",
" Boz Scaggs | \n",
" A Clue | \n",
" 2017-11-29 06:07:00 | \n",
"
\n",
" \n",
" 216 | \n",
" Todd Rundgren | \n",
" A Dream Goes On Forever | \n",
" 2017-11-29 06:13:00 | \n",
"
\n",
" \n",
" 215 | \n",
" Lou Reed | \n",
" A Gift | \n",
" 2017-11-29 06:16:00 | \n",
"
\n",
" \n",
"
"
],
"text/plain": [
""
]
},
"execution_count": 24,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"HTML(seventies.head(5).to_html())"
]
},
{
"cell_type": "code",
"execution_count": 25,
"metadata": {
"collapsed": false,
"hideCode": false,
"hidePrompt": false
},
"outputs": [
{
"data": {
"text/html": [
"\n",
" \n",
" \n",
" | \n",
" Artist | \n",
" Title | \n",
" Air Time | \n",
"
\n",
" \n",
" \n",
" \n",
" count | \n",
" 4157 | \n",
" 4157 | \n",
" 4157 | \n",
"
\n",
" \n",
" unique | \n",
" 1028 | \n",
" 4000 | \n",
" 4154 | \n",
"
\n",
" \n",
" top | \n",
" David Bowie | \n",
" She's Gone | \n",
" 2017-12-10 23:17:00 | \n",
"
\n",
" \n",
" freq | \n",
" 63 | \n",
" 3 | \n",
" 2 | \n",
"
\n",
" \n",
" first | \n",
" | \n",
" | \n",
" 2017-11-29 06:02:00 | \n",
"
\n",
" \n",
" last | \n",
" | \n",
" | \n",
" 2017-12-12 18:54:00 | \n",
"
\n",
" \n",
"
"
],
"text/plain": [
""
]
},
"execution_count": 25,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"HTML(seventies.describe(include='all', percentiles=[]).to_html(na_rep=''))"
]
},
{
"cell_type": "markdown",
"metadata": {
"hideCode": false,
"hidePrompt": false
},
"source": [
"### The Original A-Z Playlist\n",
"\n",
"The original A-Z playlist ran in 2016 from November 30 at 6:00 am\n",
"until December 17 at 1:30 pm."
]
},
{
"cell_type": "code",
"execution_count": 26,
"metadata": {
"collapsed": false,
"hideCode": false,
"hidePrompt": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"got 5691 rows\n"
]
}
],
"source": [
"originals = fetch_playlist(datetime(2016, 11, 30, 6, 0), datetime(2016, 12, 17, 13, 30), playlist_cache_dir)\n",
"\n",
"print \"got %d rows\" % len(originals)"
]
},
{
"cell_type": "code",
"execution_count": 27,
"metadata": {
"collapsed": false,
"hideCode": false,
"hidePrompt": false
},
"outputs": [
{
"data": {
"text/html": [
"\n",
" \n",
" \n",
" | \n",
" Artist | \n",
" Title | \n",
" Air Time | \n",
"
\n",
" \n",
" \n",
" \n",
" 245 | \n",
" Jackson 5 | \n",
" ABC | \n",
" 2016-11-30 06:01:00 | \n",
"
\n",
" \n",
" 244 | \n",
" Elvis Presley | \n",
" A Big Hunk O' Love | \n",
" 2016-11-30 06:04:00 | \n",
"
\n",
" \n",
" 243 | \n",
" Johnny Cash | \n",
" A Boy Named Sue (live) | \n",
" 2016-11-30 06:06:00 | \n",
"
\n",
" \n",
" 242 | \n",
" Joni Mitchell | \n",
" A Case Of You | \n",
" 2016-11-30 06:10:00 | \n",
"
\n",
" \n",
" 241 | \n",
" Ernie K-Doe | \n",
" A Certain Girl | \n",
" 2016-11-30 06:16:00 | \n",
"
\n",
" \n",
"
"
],
"text/plain": [
""
]
},
"execution_count": 27,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"HTML(originals.head(5).to_html())"
]
},
{
"cell_type": "code",
"execution_count": 28,
"metadata": {
"collapsed": false,
"hideCode": false,
"hidePrompt": false
},
"outputs": [
{
"data": {
"text/html": [
"\n",
" \n",
" \n",
" | \n",
" Artist | \n",
" Title | \n",
" Air Time | \n",
"
\n",
" \n",
" \n",
" \n",
" count | \n",
" 5691 | \n",
" 5691 | \n",
" 5691 | \n",
"
\n",
" \n",
" unique | \n",
" 1658 | \n",
" 5294 | \n",
" 5687 | \n",
"
\n",
" \n",
" top | \n",
" The Beatles | \n",
" Hold On | \n",
" 2016-12-10 17:46:00 | \n",
"
\n",
" \n",
" freq | \n",
" 141 | \n",
" 5 | \n",
" 2 | \n",
"
\n",
" \n",
" first | \n",
" | \n",
" | \n",
" 2016-11-30 06:01:00 | \n",
"
\n",
" \n",
" last | \n",
" | \n",
" | \n",
" 2016-12-17 13:25:00 | \n",
"
\n",
" \n",
"
"
],
"text/plain": [
""
]
},
"execution_count": 28,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"HTML(originals.describe(include='all', percentiles=[]).to_html(na_rep=''))"
]
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": true,
"hideCode": false,
"hidePrompt": false
},
"source": [
"## Augmenting the Data"
]
},
{
"cell_type": "markdown",
"metadata": {
"hideCode": false,
"hidePrompt": false
},
"source": [
"### Scraping the Playlist Specific Pages\n",
"\n",
"For the original, 70s, and 80s A-Z, but not the A-Z leftovers,\n",
"the station put up countdown specific pages with play lists\n",
"in a slightly different format.\n",
"One advantage of using them is that they only include tracks from the countdown,\n",
"avoiding any need for time checking the date range.\n",
"Another is that for the 70s A-Z,\n",
"they added lists by year.\n",
"Given the pain it was to search MusicBrainz for songs and\n",
"figure out the year, that's worth having.\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"hideCode": false,
"hidePrompt": false
},
"source": [
"#### 70s A-Z Page\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"hideCode": false,
"hidePrompt": false
},
"source": [
"##### Alphabetical Lists\n",
"Now that I've moved to the main playlist,\n",
"I don't know that the alphabetical lists buy much.\n",
"Getting the first letter ourselves is pretty easy.\n",
"But since older versions of the code used it,\n",
"we'll at least archive them"
]
},
{
"cell_type": "code",
"execution_count": 29,
"metadata": {
"collapsed": false,
"hideCode": false,
"hidePrompt": false
},
"outputs": [],
"source": [
"#alphabet = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ'\n",
"#\n",
"#seventies_by_letter = pd.DataFrame(None, columns = ['Title', 'Artist', 'Letter'])\n",
"#for letter in alphabet:\n",
"# cache_file = path.join(a2z70s_cache_dir, '%s.csv' % letter)\n",
"# if path.exists(cache_file):\n",
"# df = pd.read_csv(cache_file)\n",
"# else:\n",
"# rows = []\n",
"# page = requests.get('http://xpn.org/static/az2017.php?q=%s' % letter)\n",
"# tree = html.fromstring(page.content)\n",
"# songs = tree.xpath('//li/text()')\n",
"# for song in songs:\n",
"# rows.append(song.rsplit(' - ', 1) + [letter])\n",
"# df = pd.DataFrame(rows, columns=['Title', 'Artist', 'Letter'])\n",
"# df.to_csv(cache_file, index=False)\n",
"# seventies_by_letter = seventies_by_letter.append(df, ignore_index=True)\n",
"#\n",
"#print \"got %d songs by letter\" % len(seventies_by_letter)\n",
"# was 4202 before commenting out"
]
},
{
"cell_type": "markdown",
"metadata": {
"hideCode": false,
"hidePrompt": false
},
"source": [
"##### Lists by Year"
]
},
{
"cell_type": "code",
"execution_count": 30,
"metadata": {
"collapsed": false,
"hideCode": false,
"hidePrompt": false
},
"outputs": [],
"source": [
"#years = map(str, range(1970,1980))\n",
"#seventies_by_year = pd.DataFrame(None, columns = ['Title', 'Artist', 'Year'])\n",
"#for year in years:\n",
"# cache_file = path.join(a2z70s_cache_dir, '%s.csv' % year)\n",
"# if path.exists(cache_file):\n",
"# df = pd.read_csv(cache_file)\n",
"# else:\n",
"# rows = []\n",
"# page = requests.get('http://xpn.org/static/az2017v2.php?q=%s' % year)\n",
"# tree = html.fromstring(page.content)\n",
"# songs = tree.xpath('//li/text()')\n",
"# for song in songs:\n",
"# rows.append(song.rsplit(' - ', 1) + [year])\n",
"# df = pd.DataFrame(rows, columns=['Title', 'Artist', 'Year'])\n",
"# df.to_csv(cache_file, index=False)\n",
"# seventies_by_year = seventies_by_year.append(df, ignore_index=True)\n",
"#\n",
"#seventies_by_year.to_csv(path.join(data_dir, 'seventies_by_year.csv'))\n",
"#print 'got %d songs by year' % len(seventies_by_year)\n",
"# was 3699 before comment out"
]
},
{
"cell_type": "markdown",
"metadata": {
"hideCode": false,
"hidePrompt": false
},
"source": [
"### Best and Worst\n",
"\n",
"Before the A-Z countdowns, \n",
"there used to be the \"885\" countdowns.\n",
"Each year had a theme. 2014's theme was \"All Time Greatest and Wort Songs\",\n",
"where there was the traditional 885 countdown for best\n",
"and a side 88 Worst list.\n",
"As people comment on what got included in the A-Z countdowns,\n",
"which are curated by the station,\n",
"it's fun to compare against the best and worst which were based\n",
"on listener voting."
]
},
{
"cell_type": "code",
"execution_count": 31,
"metadata": {
"collapsed": true,
"hideCode": false,
"hidePrompt": false
},
"outputs": [],
"source": [
"def fetch_best(playlist_url, pagecount):\n",
" \"\"\"\n",
" Fetch data from the 885 best or 88 worst playlists.\n",
" Both use the same format, just different urls and \n",
" more or fewer pages.\n",
" \n",
" Args:\n",
" playlist_url (string) : base url for the playlist\n",
" pagecount (int) : number of pages to ge\n",
" Returns:\n",
" DataFrame containing the track data\n",
" \"\"\"\n",
" \n",
" rows = []\n",
" \n",
" for page_no in range(1, pagecount + 1):\n",
" args = {'page': page_no}\n",
" page = requests.get(playlist_url, params = args)\n",
" tree = html.fromstring(page.content)\n",
" tracks = tree.xpath(\"//*/tr[@class='countdown']\")\n",
" for track in tracks:\n",
" artist = track.xpath('./td[2]/text()')[0]\n",
" title = track.xpath('./td[@class=\"song\"]/text()')[0]\n",
" rows.append([title, artist])\n",
" df = pd.DataFrame(rows, columns = ['Title', 'Artist'])\n",
" return df\n"
]
},
{
"cell_type": "code",
"execution_count": 32,
"metadata": {
"collapsed": false,
"hideCode": false,
"hidePrompt": false
},
"outputs": [
{
"data": {
"text/html": [
"\n",
" \n",
" \n",
" | \n",
" Title | \n",
" Artist | \n",
"
\n",
" \n",
" \n",
" \n",
" 0 | \n",
" Thunder Road | \n",
" Bruce Springsteen | \n",
"
\n",
" \n",
" 1 | \n",
" Like A Rolling Stone | \n",
" Bob Dylan | \n",
"
\n",
" \n",
" 2 | \n",
" Imagine | \n",
" John Lennon | \n",
"
\n",
" \n",
" 3 | \n",
" A Day In The Life | \n",
" The Beatles | \n",
"
\n",
" \n",
" 4 | \n",
" Born To Run | \n",
" Bruce Springsteen | \n",
"
\n",
" \n",
"
"
],
"text/plain": [
""
]
},
"execution_count": 32,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"best885_file = path.join(data_dir, '885best.csv')\n",
"if not path.exists(best885_file):\n",
" best885 = fetch_best('http://www.xpn.org/music-artist/885-countdown/2014/885-countdown-2014',18)\n",
" best885.to_csv(best885_file, index=False)\n",
"else:\n",
" best885 = pd.read_csv(best885_file)\n",
" \n",
"HTML(best885.head(5).to_html())\n",
" "
]
},
{
"cell_type": "code",
"execution_count": 33,
"metadata": {
"collapsed": false,
"hideCode": false,
"hidePrompt": false
},
"outputs": [
{
"data": {
"text/html": [
"\n",
" \n",
" \n",
" | \n",
" Title | \n",
" Artist | \n",
"
\n",
" \n",
" \n",
" \n",
" 0 | \n",
" We Built This City | \n",
" Starship | \n",
"
\n",
" \n",
" 1 | \n",
" Who Let The Dogs Out | \n",
" Baha Men | \n",
"
\n",
" \n",
" 2 | \n",
" Achy Breaky Heart | \n",
" Billy Ray Cyrus | \n",
"
\n",
" \n",
" 3 | \n",
" (You're) Having My Baby | \n",
" Paul Anka | \n",
"
\n",
" \n",
" 4 | \n",
" Macarena | \n",
" Los Del Rio | \n",
"
\n",
" \n",
"
"
],
"text/plain": [
""
]
},
"execution_count": 33,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"worst88_file = path.join(data_dir, '88worst.csv')\n",
"if not path.exists(worst88_file):\n",
" worst88 = fetch_best('http://www.xpn.org/music-artist/885-countdown/2014/885-countdown-2014-88-worst',2)\n",
" worst88.to_csv(worst88_file, index=False)\n",
"else:\n",
" worst88 = pd.read_csv(worst88_file)\n",
" \n",
"HTML(worst88.head(5).to_html())"
]
},
{
"cell_type": "markdown",
"metadata": {
"hideCode": false,
"hidePrompt": false
},
"source": [
"### Putting it together\n",
"\n",
"One might think that we can just join up the data.\n",
"However there is a catch.\n",
"There are some cases where one or more of the URLs will return legitimate duplicates.\n",
"For example two entries for the same song / artist at the same time in the main playlist page.\n",
"However there are also valid entries for the same song / artist,\n",
"at different times, released in different years.\n",
"The catch is that there is no common key between our three sources to join on.\n",
"If we dedupe on title and artist we drop real tracks.\n",
"But doing a Cartesian product would generate 4 apparent tracks for two tracks.\n",
"So we need to build an artificial key."
]
},
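{
"cell_type": "markdown",
"metadata": {
"hideCode": false,
"hidePrompt": false
},
"source": [
"The commented-out cell below built that key with an explicit loop over rows.\n",
"The same play-counter idea can be sketched more compactly\n",
"(on made-up data, not the real playlists) using `groupby().cumcount()`:\n",
"\n",
"```python\n",
"import pandas as pd\n",
"\n",
"# hypothetical sample: one song aired twice, plus a distinct track\n",
"plays = pd.DataFrame({\n",
"    'Title': ['Heroes', 'Heroes', 'Fame'],\n",
"    'Artist': ['David Bowie', 'David Bowie', 'David Bowie']})\n",
"\n",
"# 0 for the first play of each Title/Artist pair, 1 for the second, ...\n",
"plays['Play'] = plays.groupby(['Title', 'Artist']).cumcount()\n",
"```\n",
"\n",
"Merging on `['Title', 'Artist', 'Play']` then lines up legitimate duplicates\n",
"one-for-one instead of producing a Cartesian blow-up."
]
},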
{
"cell_type": "code",
"execution_count": 34,
"metadata": {
"collapsed": false,
"hideCode": false,
"hidePrompt": false
},
"outputs": [],
"source": [
"#seventies = seventies.sort_values(by='Air Time')\n",
"#seventies['Play'] = pd.Series([0 for x in range(len(seventies.index))], index=seventies.index)\n",
"#last = None\n",
"#count = 0\n",
"#for idx, row in seventies.iterrows():\n",
"# if last is None or last != (row['Title'], row['Artist']):\n",
"# last = (row['Title'], row['Artist'])\n",
"# count = 0\n",
"# else:\n",
"# count += 1\n",
"# seventies.loc[idx, 'Play'] = count\n",
"#\n",
"#seventies_by_letter = seventies_by_letter.drop_duplicates()\n",
"#\n",
"#seventies_by_year = seventies_by_year.sort_values(by=['Title', 'Artist'])\n",
"#seventies_by_year['Play'] = pd.Series([0 for x in range(len(seventies_by_year.index))], index=seventies_by_year.index)\n",
"#last = None\n",
"#count = 0\n",
"#for idx, row in seventies_by_year.iterrows():\n",
"# if last is None or last != (row['Title'], row['Artist']):\n",
"# last = (row['Title'], row['Artist'])\n",
"# count = 0\n",
"# else:\n",
"# count += 1\n",
"# seventies_by_year.loc[idx, 'Play'] = count\n",
"#\n",
"#seventies = seventies.merge(seventies_by_year, how='left', on=['Artist', 'Title', 'Play'])\n",
"#seventies = seventies.merge(seventies_by_letter, how='left', on=['Artist', 'Title'])\n",
"#seventies['Year'] = seventies['Year'].fillna(0.0).astype(int)\n",
"#seventies['Air Time'] = pd.to_datetime(seventies['Air Time'], errors='coerce')\n",
"\n",
"\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"hideCode": false,
"hidePrompt": false
},
"source": [
"### Extracting Initial Letters\n",
"\n",
"For the moment, let's ignore the countdown specific pages.\n",
"We likely need to resort to MusicBrainz for year of publication data\n",
"for the 80s countdown.\n",
"\n",
"And first letter is pretty easy.\n",
"Well nothing is ever 100% easy.\n",
"I've seen leading spaces (could be stripped during initial load)\n",
"and words that start with leading apostrophes such as *'Til*.\n",
"So we need to scan past any non-alphabetics."
]
},
{
"cell_type": "code",
"execution_count": 35,
"metadata": {
"collapsed": false,
"hideCode": false,
"hidePrompt": false
},
"outputs": [],
"source": [
"def first_char(s):\n",
" for c in s:\n",
" if type(c) is str and c.isalpha():\n",
" return c.upper()\n",
" return s[0]\n",
" \n",
"originals = originals.join(originals.apply(lambda x: first_char(x[1]), axis=1).to_frame('Letter'))\n",
"seventies = seventies.join(seventies.apply(lambda x: first_char(x[1]), axis=1).to_frame('Letter'))\n",
"eighties = eighties.join(eighties.apply(lambda x: first_char(x[1]), axis=1).to_frame('Letter'))\n",
"eighties2 = eighties2.join(eighties2.apply(lambda x: first_char(x[1]), axis=1).to_frame('Letter'))\n",
"xpn2020 = xpn2020.join(xpn2020.apply(lambda x: first_char(x[1]), axis=1).to_frame('Letter'))\n",
"xpn2020_onsies = xpn2020_onsies.join(xpn2020_onsies.apply(lambda x: first_char(x[1]), axis=1).to_frame('Letter'))\n",
"xpn2020pt2 = xpn2020pt2.join(xpn2020pt2.apply(lambda x: first_char(x[1]), axis=1).to_frame('Letter'))"
]
},
{
"cell_type": "markdown",
"metadata": {
"hideCode": false,
"hidePrompt": false
},
"source": [
"For the non-alphabetic leftovers, we'll do first character instead,\n",
"so no skipping past non-alphabetics."
]
},
{
"cell_type": "code",
"execution_count": 36,
"metadata": {
"collapsed": false,
"hideCode": false,
"hidePrompt": false
},
"outputs": [],
"source": [
"eighties_leftovers = eighties_leftovers.join(eighties_leftovers.apply(lambda x: x[1][0].upper(), axis=1).to_frame('First Character'))"
]
},
{
"cell_type": "markdown",
"metadata": {
"hideCode": false,
"hidePrompt": false
},
"source": [
"### Extracting First Words"
]
},
{
"cell_type": "code",
"execution_count": 37,
"metadata": {
"collapsed": false,
"hideCode": false,
"hidePrompt": false
},
"outputs": [],
"source": [
"from nltk.tokenize import RegexpTokenizer\n",
"custom_tokenize = RegexpTokenizer(\"[\\w'\\-]+|[^\\w'\\s\\-]\").tokenize\n",
"originals = originals.join(originals.apply(lambda x: custom_tokenize(x[1])[0], axis=1).to_frame('First Word'))\n",
"seventies = seventies.join(seventies.apply(lambda x: custom_tokenize(x[1])[0], axis=1).to_frame('First Word'))\n",
"eighties = eighties.join(eighties.apply(lambda x: custom_tokenize(x[1])[0], axis=1).to_frame('First Word'))\n",
"eighties2 = eighties2.join(eighties2.apply(lambda x: custom_tokenize(x[1])[0], axis=1).to_frame('First Word'))\n",
"xpn2020 = xpn2020.join(xpn2020.apply(lambda x: custom_tokenize(x[1])[0], axis=1).to_frame('First Word'))\n",
"xpn2020_onsies = xpn2020_onsies.join(xpn2020_onsies.apply(lambda x: custom_tokenize(x[1])[0], axis=1).to_frame('First Word'))\n",
"xpn2020pt2 = xpn2020pt2.join(xpn2020pt2.apply(lambda x: custom_tokenize(x[1])[0], axis=1).to_frame('First Word'))"
]
},
{
"cell_type": "markdown",
"metadata": {
"hideCode": false,
"hidePrompt": false
},
"source": [
"### Estimating Durations\n",
"\n",
"Since we have air times, we can approximate durations by subtracting the air time from the next track's air times. There are a couple catches with this\n",
"- we need to pass in an explicit end time for the last track, but that's minor\n",
"- we need to add some logic to 'skip over' the free at noons that happen on Fridays form 12 noon till \"like 12:40 or so\" and don't appear in the playlist at all\n",
"- the granularity is a bit course, as it is on a one minute basis. We could be off by almost two minutes per song, but it ought to even out.\n",
"- there's no clear way to account for \"non-song time\" like station promos, hosts introducing songs, station ids, and so forth. Fortunately, the percentage of time that is really music is pretty high thanks to XPN being listener supported.\n"
]
},
{
"cell_type": "code",
"execution_count": 38,
"metadata": {
"collapsed": false,
"hideCode": false,
"hidePrompt": false
},
"outputs": [],
"source": [
"def estimate_durations(playlist, end_time=None):\n",
" \"\"\"\n",
" Estimate the song durations\n",
" Args: \n",
" playlist (DataFrame): playlist with minimally an 'Air Time' attribute\n",
" end_time (datetime): end time of the play list, or None if still going\n",
" Return:\n",
" modified DataFrame with 'Duration' attribute added.\n",
" \"\"\"\n",
" \n",
" playlist['Duration'] = pd.Series([0 for x in range(len(playlist.index))], index=playlist.index)\n",
" previous = None\n",
" last_idx = None\n",
" for idx, row in playlist.iterrows():\n",
" if not previous is None:\n",
" if row['Air Time'].date().weekday() == 4 and previous.hour == 11 and row['Air Time'].hour == 12:\n",
" # We just fell into a free at noon\n",
" playlist.loc[last_idx, 'Duration'] = 60 - previous.minute\n",
" else:\n",
" # just subtract this start from the previous\n",
" delta = row['Air Time'] - previous\n",
" playlist.loc[last_idx, 'Duration'] = delta.seconds / 60\n",
" previous = row['Air Time']\n",
" last_idx = idx\n",
"\n",
" # fixup the last row\n",
" if end_time is not None: \n",
" delta = end_time - playlist.loc[last_idx,'Air Time']\n",
" playlist.loc[last_idx, 'Duration'] = delta.seconds / 60\n",
" \n",
" return playlist\n",
" "
]
},
{
"cell_type": "code",
"execution_count": 39,
"metadata": {
"collapsed": false,
"hideCode": false,
"hidePrompt": false
},
"outputs": [],
"source": [
"orginals = estimate_durations(originals, datetime(2016, 12, 17, 13, 30))\n",
"seventies = estimate_durations(seventies, datetime(2017, 12, 12, 19, 0))\n",
"eighties = estimate_durations(eighties, datetime(2018, 12, 9, 1, 0))\n",
"eighties2 = estimate_durations(eighties2, datetime(2018, 12, 9, 11, 49))\n",
"eighties_leftovers = estimate_durations(eighties_leftovers, datetime(2018, 12, 9, 15, 0))\n",
"xpn2020 = estimate_durations(xpn2020,datetime(2020, 12, 17, 0, 41))\n",
"xpn2020_onsies = estimate_durations(xpn2020_onsies, datetime(2020, 12, 17, 8, 0))\n",
"xpn2020pt2 = estimate_durations(xpn2020pt2, datetime(2020, 12, 17, 18, 46))"
]
},
{
"cell_type": "markdown",
"metadata": {
"hideCode": false,
"hidePrompt": false
},
"source": [
"And now we can concatenate the 80s back into one data frame.\n",
"And the same for 2020."
]
},
{
"cell_type": "code",
"execution_count": 40,
"metadata": {
"collapsed": false,
"hideCode": false,
"hidePrompt": false
},
"outputs": [],
"source": [
"eighties = pd.concat([eighties, eighties2])\n",
"xpn2020 = pd.concat([xpn2020, xpn2020pt2])"
]
},
{
"cell_type": "markdown",
"metadata": {
"hideCode": false,
"hidePrompt": false
},
"source": [
"And fix up the one remaining implausible duration.\n",
"I'm going to assume that no 24 minute cut of \n",
"Third World's You're Playing Us Too Close exists.\n",
"The longest I can find it 7 minutes.\n",
"Odds are we're still missing a couple tracks from where\n",
"the playlist feed died about that time."
]
},
{
"cell_type": "code",
"execution_count": 41,
"metadata": {
"collapsed": false,
"hideCode": false,
"hidePrompt": false
},
"outputs": [],
"source": [
"eighties.loc[eighties['Title'] == \"You're Playing Us Too Close\", 'Duration' ] = 7"
]
},
{
"cell_type": "markdown",
"metadata": {
"hideCode": false,
"hidePrompt": false
},
"source": [
"### MusicBrainz Data\n",
"\n",
"[MusicBrainz](https://musicbrainz.org/) is an free online music database,\n",
"with an [external XML Web-service](https://wiki.musicbrainz.org/Development/XML_Web_Service/Version_2)\n",
"that is supported in [Python](https://www.python.org/)\n",
"via the [musicbrainzngs](https://pypi.org/project/musicbrainzngs/) library.\n",
"I'd originally used it to get publication year for the 2016 countdown,\n",
"but abandoned it in 2017 since the [2017 playlist page](http://xpn.org/music-artist/885-countdown/2017/xpn-a-z) had lists by year.\n",
"Since there's no year data on the [2018 playlist](http://www.xpn.org/music-artist/xpn-a-z),\n",
"I'm bringing it back.\n",
"\n",
"There are a couple of potential issues with querying MusicBrainz\n",
" \n",
" - MusicBrainz has its own rules about how to enter data,\n",
" that don't always match those at WXPN,\n",
" so sometimes searches fail for data mismatches.\n",
" - As a free volunteer based service, there's no guarantee that\n",
" the data is there, though their data-set is very complete.\n",
" - Finding the *right* recording is an art at best.\n",
" My general approach has been to look for the oldest official \n",
" release for any recording matching the title and artist.\n",
" That *mostly* works.\n",
"\n",
"So we'll get what we can programmatically via Musicbrainz.\n",
"Then we'll look up the outliers manually,\n",
"using some combination of Discos and random stuff we find on Google,\n",
"and prefill the cache file manually for those.\n",
"For some really deep cuts, I've resorted to reading the date\n",
"off of pictures of 45s for sale on EBay.\n",
"No one answer works, it's ugly, but sometimes so is the recording industry.\n",
"\n",
"One consequence is that we'll always lag on publication year data\n",
"during the running of the playlists.\n"
]
},
{
"cell_type": "code",
"execution_count": 42,
"metadata": {
"collapsed": false,
"hideCode": false,
"hidePrompt": false
},
"outputs": [],
"source": [
"def add_musicbrainz_data(playlist, min_year = 1900, cache_file = None):\n",
" \"\"\"\n",
" Add data from the musicbrainz database. Currently just first year of publication.\n",
" The input data frame should contain at least Title and Artist fields\n",
" and the resulting dataframe will have a new Year field.\n",
" The cache file if used, should have been generated by a previous run of\n",
" this function.\n",
" Using a cache is strongly encouraged,\n",
" as the MusicBrainz search interface is rate limited to one search per second\n",
" so this can be very slow for large playlists.\n",
" \n",
" Args:\n",
" playlist (Dataframe) : playlist to update\n",
" min_year (int) : miminum year to consider\n",
" cache_file (string) : path to cache file\n",
" \n",
" Returns:\n",
" Dataframe containing the augmented playlist\n",
" \"\"\"\n",
" import musicbrainzngs as mb\n",
" mb.set_useragent('xpn-a2z', '0.1','https://github.com/asudell/a2z')\n",
" \n",
" # keep a list of artists named differently\n",
" # at MusicBrainz than XPN, so we can 'fix' them\n",
" artist_names = {\n",
" \"R. E. M.\": \"REM\",\n",
" \"Run-DMC\": \"Run-D.M.C.\",\n",
" \"The Ramones\": \"Ramones\"\n",
" }\n",
" \n",
" # load the cache if we have one\n",
" if cache_file is not None and path.exists(cache_file):\n",
" years = pd.read_csv(cache_file, encoding='utf-8')\n",
" years = years.drop_duplicates()\n",
" else:\n",
" years = pd.DataFrame(None, columns=('Title','Artist', 'Year'))\n",
" \n",
" augmented = playlist.merge(years, how = 'left')\n",
" \n",
" # Lookup any unaugmented rows\n",
" new_mb_rows = []\n",
" for index, row in augmented[augmented['Year'].isnull()].iterrows():\n",
" if row['Artist'] in artist_names:\n",
" artist = artist_names[row['Artist']]\n",
" else:\n",
" artist = row['Artist']\n",
" result = mb.search_recordings(row['Title'],\n",
" artist = artist,\n",
" status = 'official',\n",
" strict = True,\n",
" limit = 25)\n",
" rel_year = None\n",
" for recording in result['recording-list']:\n",
" if recording['release-list']:\n",
" for release in recording['release-list']:\n",
" if 'date' in release and len(release['date']) > 0:\n",
" y = int(release['date'].split('-')[0])\n",
" if rel_year is None or rel_year > y:\n",
" if y >= min_year:\n",
" # assume years before 1900 are typos\n",
" rel_year = y\n",
" if rel_year is not None:\n",
" new_mb_rows.append([row['Title'], row['Artist'], rel_year])\n",
" \n",
" new_years = pd.DataFrame(new_mb_rows, columns=('Title','Artist', 'Year'))\n",
" # if we found new data, resave the cache and rebuild the augmented data\n",
" if len(new_years) > 0:\n",
" years = years.append(new_years, ignore_index=True)\n",
" years = years.drop_duplicates()\n",
" if cache_file is not None:\n",
" years.to_csv(cache_file, index=False, encoding='utf-8')\n",
" augmented = playlist.merge(years, how = 'left')\n",
" \n",
" return augmented\n",
" \n",
" \n",
" "
]
},
{
"cell_type": "code",
"execution_count": 43,
"metadata": {
"collapsed": false,
"hideCode": false,
"hidePrompt": false
},
"outputs": [],
"source": [
"xpn2020 = add_musicbrainz_data(xpn2020, 1920, path.join(musicbrainz_cache_dir, 'xpn2020_years.csv'))\n",
"# save a copy of anything without a year for manual review\n",
"xpn2020_missing = xpn2020[xpn2020['Year'].isnull()][['Title', 'Artist']]\n",
"xpn2020_missing.to_csv(path.join(musicbrainz_cache_dir, 'xpn2020_need_years.csv'),\n",
" index=False, encoding='utf-8')\n",
"# need to do this?\n",
"mb_cache = pd.read_csv(path.join(musicbrainz_cache_dir, 'xpn2020_years.csv'))\n",
"mb_cache.to_csv(path.join(musicbrainz_cache_dir, 'xpn2020_years.csv'), index=False, encoding='utf-8')"
]
},
{
"cell_type": "code",
"execution_count": 44,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"xpn2020_onsies = add_musicbrainz_data(xpn2020_onsies, 1920, path.join(musicbrainz_cache_dir, 'xpn2020_onsies_years.csv'))\n",
"# save a copy of anything without a year for manual review\n",
"xpn2020_onsies_missing = xpn2020_onsies[xpn2020_onsies['Year'].isnull()][['Title', 'Artist']]\n",
"xpn2020_onsies_missing.to_csv(path.join(musicbrainz_cache_dir, 'xpn2020_onsies_need_years.csv'),\n",
" index=False, encoding='utf-8')\n",
"# need to do this?\n",
"mb_cache = pd.read_csv(path.join(musicbrainz_cache_dir, 'xpn2020_onsies_years.csv'))\n",
"mb_cache.to_csv(path.join(musicbrainz_cache_dir, 'xpn2020_onsies_years.csv'), index=False, encoding='utf-8')"
]
},
{
"cell_type": "code",
"execution_count": 45,
"metadata": {
"collapsed": false,
"hideCode": false,
"hidePrompt": false
},
"outputs": [],
"source": [
"eighties = add_musicbrainz_data(eighties, 1980, path.join(musicbrainz_cache_dir, '80s_years.csv'))\n",
"# Some recordings get released a lot, toss anything outside the 80s\n",
"# and we pick them up for manual review\n",
"for index, row in eighties.iterrows():\n",
" if row['Year'] < 1980 or row['Year'] > 1989:\n",
" eighties.at[index, 'Year'] = np.nan\n",
"# and save a copy of anything without a year for manual review\n",
"eighties_missing = eighties[eighties['Year'].isnull()][['Title', 'Artist']]\n",
"eighties_missing.to_csv(path.join(musicbrainz_cache_dir, '80s_need_years.csv'),\n",
" index=False, encoding='utf-8')\n",
"# finally, prune any out of range entries from the cache, as\n",
"# we will keep growing them and duplicating records on joins\n",
"mb_cache = pd.read_csv(path.join(musicbrainz_cache_dir, '80s_years.csv'))\n",
"mb_cache = mb_cache[(mb_cache['Year'] >= 1980) & (mb_cache['Year'] <= 1989)]\n",
"mb_cache.to_csv(path.join(musicbrainz_cache_dir, '80s_years.csv'), index=False, encoding='utf-8')\n",
"\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"hideCode": false,
"hidePrompt": false
},
"source": [
"and the same for the leftovers ..."
]
},
{
"cell_type": "code",
"execution_count": 46,
"metadata": {
"collapsed": false,
"hideCode": false,
"hidePrompt": false
},
"outputs": [],
"source": [
"eighties_leftovers = add_musicbrainz_data(eighties_leftovers, 1980, path.join(musicbrainz_cache_dir, '80s_leftovers_years.csv'))\n",
"# Some recordings get released a lot, toss anything outside the 80s\n",
"# and we pick them up for manual review\n",
"for index, row in eighties_leftovers.iterrows():\n",
" if row['Year'] < 1980 or row['Year'] > 1989:\n",
" eighties_leftovers.at[index, 'Year'] = np.nan\n",
"# and save a copy of anything without a year for manual review\n",
"eighties_leftovers_missing = eighties_leftovers[eighties_leftovers['Year'].isnull()][['Title', 'Artist']]\n",
"eighties_leftovers_missing.to_csv(path.join(musicbrainz_cache_dir, '80s_leftovers_need_years.csv'),\n",
" index=False, encoding='utf-8')\n",
"# finally, prune any out of range entries from the cache, as\n",
"# we will keep growing them and duplicating records on joins\n",
"mb_cache = pd.read_csv(path.join(musicbrainz_cache_dir, '80s_leftovers_years.csv'))\n",
"mb_cache = mb_cache[(mb_cache['Year'] >= 1980) & (mb_cache['Year'] <= 1989)]\n",
"mb_cache.to_csv(path.join(musicbrainz_cache_dir, '80s_leftovers_years.csv'), index=False, encoding='utf-8')\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"hideCode": false,
"hidePrompt": false
},
"source": [
"Just for reference the manual additions were\n",
" - [80s_manual_years.csv](cache/musicbrainz/80s_manual_years.csv)\n",
" - [80s_leftovers_manual_years.csv](cache/musicbrainz/80s_leftovers_manual_years.csv)\n",
" \n",
"The ones I couldn't ever find good years far are left in\n",
" - [80s_need_years.csv](cache/musicbrainz/80s_need_years.csv)\n",
" - [80s_leftovers_need_years.csv](cache/musicbrainz/80s_leftovers_need_years.csv)"
]
},
{
"cell_type": "markdown",
"metadata": {
"hideCode": false,
"hidePrompt": false
},
"source": [
"## Checking the Results"
]
},
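{
"cell_type": "markdown",
"metadata": {
"hideCode": false,
"hidePrompt": false
},
"source": [
"Besides eyeballing the tables below, a quick automated sanity pass is cheap.\n",
"A sketch of the idea on a toy playlist\n",
"(standing in for the real frames above, which it does not touch):\n",
"\n",
"```python\n",
"import numpy as np\n",
"import pandas as pd\n",
"\n",
"# toy playlist with one missing year and one implausible duration\n",
"playlist = pd.DataFrame({\n",
"    'Title': ['Venus', 'Brick'],\n",
"    'Duration': [5, 45],\n",
"    'Year': [1969, np.nan]})\n",
"\n",
"# tracks still missing a publication year\n",
"missing_years = int(playlist['Year'].isnull().sum())\n",
"\n",
"# suspiciously long tracks, which usually mean a playlist-feed gap\n",
"suspect = playlist[playlist['Duration'] > 20]\n",
"```\n",
"\n",
"Anything flagged this way gets the manual treatment,\n",
"like the *You're Playing Us Too Close* duration fix above."
]
},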
{
"cell_type": "code",
"execution_count": 47,
"metadata": {
"collapsed": false,
"hideCode": false,
"hidePrompt": false
},
"outputs": [
{
"data": {
"text/html": [
"\n",
" \n",
" \n",
" | \n",
" Artist | \n",
" Title | \n",
" Air Time | \n",
" Letter | \n",
" First Word | \n",
" Duration | \n",
" Year | \n",
"
\n",
" \n",
" \n",
" \n",
" 0 | \n",
" Booker T. & The MG's | \n",
" Time Is Tight | \n",
" 2020-12-10 08:02:00 | \n",
" T | \n",
" Time | \n",
" 3 | \n",
" 1980 | \n",
"
\n",
" \n",
" 1 | \n",
" AC/DC | \n",
" T.N.T. | \n",
" 2020-12-10 08:05:00 | \n",
" T | \n",
" T | \n",
" 6 | \n",
" 1975 | \n",
"
\n",
" \n",
" 2 | \n",
" Peter Frampton | \n",
" Show Me the Way | \n",
" 2020-12-10 08:11:00 | \n",
" S | \n",
" Show | \n",
" 5 | \n",
" 1975 | \n",
"
\n",
" \n",
" 3 | \n",
" The Drifters | \n",
" Under The Boardwalk | \n",
" 2020-12-10 08:16:00 | \n",
" U | \n",
" Under | \n",
" 3 | \n",
" 1989 | \n",
"
\n",
" \n",
" 4 | \n",
" Adele | \n",
" Rumor Has It | \n",
" 2020-12-10 08:19:00 | \n",
" R | \n",
" Rumor | \n",
" 5 | \n",
" 2011 | \n",
"
\n",
" \n",
" 5 | \n",
" Smith | \n",
" Baby It's You | \n",
" 2020-12-10 08:24:00 | \n",
" B | \n",
" Baby | \n",
" 4 | \n",
" 1969 | \n",
"
\n",
" \n",
" 6 | \n",
" Aretha Franklin | \n",
" Call Me | \n",
" 2020-12-10 08:28:00 | \n",
" C | \n",
" Call | \n",
" 3 | \n",
" 1970 | \n",
"
\n",
" \n",
" 7 | \n",
" Marvin Gaye & Kim Weston | \n",
" It Takes Two | \n",
" 2020-12-10 08:31:00 | \n",
" I | \n",
" It | \n",
" 3 | \n",
" 1966 | \n",
"
\n",
" \n",
" 8 | \n",
" Curtis Mayfield | \n",
" Superfly | \n",
" 2020-12-10 08:34:00 | \n",
" S | \n",
" Superfly | \n",
" 7 | \n",
" 1973 | \n",
"
\n",
" \n",
" 9 | \n",
" Shawn Colvin | \n",
" I Don't Know Why | \n",
" 2020-12-10 08:41:00 | \n",
" I | \n",
" I | \n",
" 4 | \n",
" 1992 | \n",
"
\n",
" \n",
"
"
],
"text/plain": [
""
]
},
"execution_count": 47,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"HTML(xpn2020.head(10).to_html())"
]
},
{
"cell_type": "code",
"execution_count": 48,
"metadata": {
"collapsed": false
},
"outputs": [
{
"data": {
"text/html": [
"\n",
" \n",
" \n",
" | \n",
" Artist | \n",
" Title | \n",
" Air Time | \n",
" Letter | \n",
" First Word | \n",
" Duration | \n",
" Year | \n",
"
\n",
" \n",
" \n",
" \n",
" 0 | \n",
" J. J. Cale | \n",
" After Midnight | \n",
" 2020-12-17 06:03:00 | \n",
" A | \n",
" After | \n",
" 2 | \n",
" NaN | \n",
"
\n",
" \n",
" 1 | \n",
" Shocking Blue | \n",
" Venus | \n",
" 2020-12-17 06:05:00 | \n",
" V | \n",
" Venus | \n",
" 5 | \n",
" 1969 | \n",
"
\n",
" \n",
" 2 | \n",
" Ben Folds Five | \n",
" Brick | \n",
" 2020-12-17 06:10:00 | \n",
" B | \n",
" Brick | \n",
" 4 | \n",
" 1994 | \n",
"
\n",
" \n",
" 3 | \n",
" Sarah McLachlan | \n",
" Sweet Surrender | \n",
" 2020-12-17 06:14:00 | \n",
" S | \n",
" Sweet | \n",
" 8 | \n",
" 1997 | \n",
"
\n",
" \n",
" 4 | \n",
" World Party | \n",
" When The Rainbow Comes | \n",
" 2020-12-17 06:22:00 | \n",
" W | \n",
" When | \n",
" 5 | \n",
" 1990 | \n",
"
\n",
" \n",
" 5 | \n",
" Suzanne Vega | \n",
" Marlene On The Wall | \n",
" 2020-12-17 06:27:00 | \n",
" M | \n",
" Marlene | \n",
" 4 | \n",
" 1985 | \n",
"
\n",
" \n",
" 6 | \n",
" Big Joe Turner | \n",
" Shake Rattle And Roll | \n",
" 2020-12-17 06:31:00 | \n",
" S | \n",
" Shake | \n",
" 3 | \n",
" 1992 | \n",
"
\n",
" \n",
" 7 | \n",
" Wilbert Harrison | \n",
" Let's Work Together (Parts 1 & 2) | \n",
" 2020-12-17 06:34:00 | \n",
" L | \n",
" Let's | \n",
" 7 | \n",
" 1994 | \n",
"
\n",
" \n",
" 8 | \n",
" The Pretenders | \n",
" Talk Of The Town | \n",
" 2020-12-17 06:41:00 | \n",
" T | \n",
" Talk | \n",
" 3 | \n",
" 1980 | \n",
"
\n",
" \n",
" 9 | \n",
" The Proclaimers | \n",
" I'm Gonna Be (500 Miles) | \n",
" 2020-12-17 06:44:00 | \n",
" I | \n",
" I'm | \n",
" 7 | \n",
" 1987 | \n",
"
\n",
" \n",
"
"
],
"text/plain": [
""
]
},
"execution_count": 48,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"HTML(xpn2020_onsies.head(10).to_html())"
]
},
{
"cell_type": "code",
"execution_count": 49,
"metadata": {
"collapsed": false,
"hideCode": false,
"hidePrompt": false
},
"outputs": [
{
"data": {
"text/html": [
"\n",
" \n",
" \n",
" | \n",
" Artist | \n",
" Title | \n",
" Air Time | \n",
" Letter | \n",
" First Word | \n",
" Duration | \n",
" Year | \n",
"
\n",
" \n",
" \n",
" \n",
" 0 | \n",
" Warren Zevon | \n",
" A Certain Girl | \n",
" 2018-11-28 08:01:00 | \n",
" A | \n",
" A | \n",
" 3 | \n",
" 1980 | \n",
"
\n",
" \n",
" 1 | \n",
" U2 | \n",
" A Day Without Me | \n",
" 2018-11-28 08:04:00 | \n",
" A | \n",
" A | \n",
" 3 | \n",
" 1980 | \n",
"
\n",
" \n",
" 2 | \n",
" The Cure | \n",
" A Forest | \n",
" 2018-11-28 08:07:00 | \n",
" A | \n",
" A | \n",
" 6 | \n",
" 1980 | \n",
"
\n",
" \n",
" 3 | \n",
" The Waterboys | \n",
" A Girl Called Johnny | \n",
" 2018-11-28 08:13:00 | \n",
" A | \n",
" A | \n",
" 5 | \n",
" 1983 | \n",
"
\n",
" \n",
" 4 | \n",
" Romeo Void | \n",
" A Girl in Trouble (Is a Temporary Thing) | \n",
" 2018-11-28 08:18:00 | \n",
" A | \n",
" A | \n",
" 7 | \n",
" 1984 | \n",
"
\n",
" \n",
" 5 | \n",
" The Smithereens | \n",
" A Girl Like You | \n",
" 2018-11-28 08:25:00 | \n",
" A | \n",
" A | \n",
" 4 | \n",
" 1989 | \n",
"
\n",
" \n",
" 6 | \n",
" Albert Collins | \n",
" A Good Fool Is Hard To Find | \n",
" 2018-11-28 08:29:00 | \n",
" A | \n",
" A | \n",
" 4 | \n",
" 1986 | \n",
"
\n",
" \n",
" 7 | \n",
" Phil Collins | \n",
" A Groovy Kind Of Love | \n",
" 2018-11-28 08:33:00 | \n",
" A | \n",
" A | \n",
" 5 | \n",
" 1988 | \n",
"
\n",
" \n",
" 8 | \n",
" The Weirdos | \n",
" A Life Of Crime | \n",
" 2018-11-28 08:38:00 | \n",
" A | \n",
" A | \n",
" 15 | \n",
" 1985 | \n",
"
\n",
" \n",
" 9 | \n",
" Erasure | \n",
" A Little Respect | \n",
" 2018-11-28 08:53:00 | \n",
" A | \n",
" A | \n",
" 2 | \n",
" 1988 | \n",
"
\n",
" \n",
"
"
],
"text/plain": [
""
]
},
"execution_count": 49,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"HTML(eighties.head(10).to_html())"
]
},
{
"cell_type": "code",
"execution_count": 50,
"metadata": {
"collapsed": false,
"hideCode": false,
"hidePrompt": false
},
"outputs": [
{
"data": {
"text/html": [
"\n",
" \n",
" \n",
" | \n",
" Artist | \n",
" Title | \n",
" Air Time | \n",
" First Character | \n",
" Duration | \n",
" Year | \n",
"
\n",
" \n",
" \n",
" \n",
" count | \n",
" 42 | \n",
" 42 | \n",
" 42 | \n",
" 42 | \n",
" 42.000000 | \n",
" 42.000000 | \n",
"
\n",
" \n",
" unique | \n",
" 41 | \n",
" 42 | \n",
" 42 | \n",
" 12 | \n",
" | \n",
" | \n",
"
\n",
" \n",
" top | \n",
" U2 | \n",
" (It's Not Me) Talking | \n",
" 2018-12-09 13:51:00 | \n",
" ( | \n",
" | \n",
" | \n",
"
\n",
" \n",
" freq | \n",
" 2 | \n",
" 1 | \n",
" 1 | \n",
" 15 | \n",
" | \n",
" | \n",
"
\n",
" \n",
" first | \n",
" | \n",
" | \n",
" 2018-12-09 11:50:00 | \n",
" | \n",
" | \n",
" | \n",
"
\n",
" \n",
" last | \n",
" | \n",
" | \n",
" 2018-12-09 14:57:00 | \n",
" | \n",
" | \n",
" | \n",
"
\n",
" \n",
" mean | \n",
" | \n",
" | \n",
" | \n",
" | \n",
" 4.523810 | \n",
" 1984.142857 | \n",
"
\n",
" \n",
" std | \n",
" | \n",
" | \n",
" | \n",
" | \n",
" 1.699901 | \n",
" 2.824729 | \n",
"
\n",
" \n",
" min | \n",
" | \n",
" | \n",
" | \n",
" | \n",
" 2.000000 | \n",
" 1980.000000 | \n",
"
\n",
" \n",
" 50% | \n",
" | \n",
" | \n",
" | \n",
" | \n",
" 4.000000 | \n",
" 1984.000000 | \n",
"
\n",
" \n",
" max | \n",
" | \n",
" | \n",
" | \n",
" | \n",
" 9.000000 | \n",
" 1989.000000 | \n",
"
\n",
" \n",
"
"
],
"text/plain": [
""
]
},
"execution_count": 50,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"HTML(eighties_leftovers.describe(include='all', percentiles=[]).to_html(na_rep=''))"
]
},
{
"cell_type": "code",
"execution_count": 51,
"metadata": {
"collapsed": false,
"hideCode": false,
"hidePrompt": false
},
"outputs": [
{
"data": {
"text/html": [
"<table border=\"1\" class=\"dataframe\">\n",
"  <thead>\n",
"    <tr style=\"text-align: right;\">\n",
"      <th></th>\n",
"      <th>Artist</th>\n",
"      <th>Title</th>\n",
"      <th>Air Time</th>\n",
"      <th>Letter</th>\n",
"      <th>First Word</th>\n",
"      <th>Duration</th>\n",
"    </tr>\n",
"  </thead>\n",
"  <tbody>\n",
"    <tr>\n",
"      <th>219</th>\n",
"      <td>Steeleye Span</td>\n",
"      <td>A Calling-On Song</td>\n",
"      <td>2017-11-29 06:02:00</td>\n",
"      <td>A</td>\n",
"      <td>A</td>\n",
"      <td>1</td>\n",
"    </tr>\n",
"    <tr>\n",
"      <th>218</th>\n",
"      <td>Joni Mitchell</td>\n",
"      <td>A Case Of You</td>\n",
"      <td>2017-11-29 06:03:00</td>\n",
"      <td>A</td>\n",
"      <td>A</td>\n",
"      <td>4</td>\n",
"    </tr>\n",
"    <tr>\n",
"      <th>217</th>\n",
"      <td>Boz Scaggs</td>\n",
"      <td>A Clue</td>\n",
"      <td>2017-11-29 06:07:00</td>\n",
"      <td>A</td>\n",
"      <td>A</td>\n",
"      <td>6</td>\n",
"    </tr>\n",
"    <tr>\n",
"      <th>216</th>\n",
"      <td>Todd Rundgren</td>\n",
"      <td>A Dream Goes On Forever</td>\n",
"      <td>2017-11-29 06:13:00</td>\n",
"      <td>A</td>\n",
"      <td>A</td>\n",
"      <td>3</td>\n",
"    </tr>\n",
"    <tr>\n",
"      <th>215</th>\n",
"      <td>Lou Reed</td>\n",
"      <td>A Gift</td>\n",
"      <td>2017-11-29 06:16:00</td>\n",
"      <td>A</td>\n",
"      <td>A</td>\n",
"      <td>7</td>\n",
"    </tr>\n",
"    <tr>\n",
"      <th>214</th>\n",
"      <td>Poco</td>\n",
"      <td>A Good Feelin' To Know</td>\n",
"      <td>2017-11-29 06:23:00</td>\n",
"      <td>A</td>\n",
"      <td>A</td>\n",
"      <td>3</td>\n",
"    </tr>\n",
"    <tr>\n",
"      <th>213</th>\n",
"      <td>Mac Davis</td>\n",
"      <td>A Little Less Conversation</td>\n",
"      <td>2017-11-29 06:26:00</td>\n",
"      <td>A</td>\n",
"      <td>A</td>\n",
"      <td>3</td>\n",
"    </tr>\n",
"    <tr>\n",
"      <th>212</th>\n",
"      <td>Neil Young</td>\n",
"      <td>A Man Needs A Maid</td>\n",
"      <td>2017-11-29 06:29:00</td>\n",
"      <td>A</td>\n",
"      <td>A</td>\n",
"      <td>4</td>\n",
"    </tr>\n",
"    <tr>\n",
"      <th>211</th>\n",
"      <td>Lou Rawls</td>\n",
"      <td>A Natural Man</td>\n",
"      <td>2017-11-29 06:33:00</td>\n",
"      <td>A</td>\n",
"      <td>A</td>\n",
"      <td>3</td>\n",
"    </tr>\n",
"    <tr>\n",
"      <th>210</th>\n",
"      <td>David Bowie</td>\n",
"      <td>A New Career In A New Town</td>\n",
"      <td>2017-11-29 06:36:00</td>\n",
"      <td>A</td>\n",
"      <td>A</td>\n",
"      <td>5</td>\n",
"    </tr>\n",
"  </tbody>\n",
"</table>"
],
"text/plain": [
"<IPython.core.display.HTML object>"
]
},
"execution_count": 51,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"HTML(seventies.head(10).to_html())"
]
},
{
"cell_type": "code",
"execution_count": 52,
"metadata": {
"collapsed": false,
"hideCode": false,
"hidePrompt": false,
"scrolled": true
},
"outputs": [
{
"data": {
"text/html": [
"<table border=\"1\" class=\"dataframe\">\n",
"  <thead>\n",
"    <tr style=\"text-align: right;\">\n",
"      <th></th>\n",
"      <th>Artist</th>\n",
"      <th>Title</th>\n",
"      <th>Air Time</th>\n",
"      <th>Letter</th>\n",
"      <th>First Word</th>\n",
"      <th>Duration</th>\n",
"    </tr>\n",
"  </thead>\n",
"  <tbody>\n",
"    <tr>\n",
"      <th>245</th>\n",
"      <td>Jackson 5</td>\n",
"      <td>ABC</td>\n",
"      <td>2016-11-30 06:01:00</td>\n",
"      <td>A</td>\n",
"      <td>ABC</td>\n",
"      <td>3</td>\n",
"    </tr>\n",
"    <tr>\n",
"      <th>244</th>\n",
"      <td>Elvis Presley</td>\n",
"      <td>A Big Hunk O' Love</td>\n",
"      <td>2016-11-30 06:04:00</td>\n",
"      <td>A</td>\n",
"      <td>A</td>\n",
"      <td>2</td>\n",
"    </tr>\n",
"    <tr>\n",
"      <th>243</th>\n",
"      <td>Johnny Cash</td>\n",
"      <td>A Boy Named Sue (live)</td>\n",
"      <td>2016-11-30 06:06:00</td>\n",
"      <td>A</td>\n",
"      <td>A</td>\n",
"      <td>4</td>\n",
"    </tr>\n",
"    <tr>\n",
"      <th>242</th>\n",
"      <td>Joni Mitchell</td>\n",
"      <td>A Case Of You</td>\n",
"      <td>2016-11-30 06:10:00</td>\n",
"      <td>A</td>\n",
"      <td>A</td>\n",
"      <td>6</td>\n",
"    </tr>\n",
"    <tr>\n",
"      <th>241</th>\n",
"      <td>Ernie K-Doe</td>\n",
"      <td>A Certain Girl</td>\n",
"      <td>2016-11-30 06:16:00</td>\n",
"      <td>A</td>\n",
"      <td>A</td>\n",
"      <td>3</td>\n",
"    </tr>\n",
"    <tr>\n",
"      <th>240</th>\n",
"      <td>Warren Zevon</td>\n",
"      <td>A Certain Girl</td>\n",
"      <td>2016-11-30 06:19:00</td>\n",
"      <td>A</td>\n",
"      <td>A</td>\n",
"      <td>5</td>\n",
"    </tr>\n",
"    <tr>\n",
"      <th>239</th>\n",
"      <td>Sheryl Crow</td>\n",
"      <td>A Change</td>\n",
"      <td>2016-11-30 06:24:00</td>\n",
"      <td>A</td>\n",
"      <td>A</td>\n",
"      <td>4</td>\n",
"    </tr>\n",
"    <tr>\n",
"      <th>238</th>\n",
"      <td>Sam Cooke</td>\n",
"      <td>A Change Is Gonna Come</td>\n",
"      <td>2016-11-30 06:28:00</td>\n",
"      <td>A</td>\n",
"      <td>A</td>\n",
"      <td>3</td>\n",
"    </tr>\n",
"    <tr>\n",
"      <th>237</th>\n",
"      <td>The Beatles</td>\n",
"      <td>A Day In The Life</td>\n",
"      <td>2016-11-30 06:31:00</td>\n",
"      <td>A</td>\n",
"      <td>A</td>\n",
"      <td>5</td>\n",
"    </tr>\n",
"    <tr>\n",
"      <th>236</th>\n",
"      <td>Ray Barretto</td>\n",
"      <td>A Deeper Shade Of Soul</td>\n",
"      <td>2016-11-30 06:36:00</td>\n",
"      <td>A</td>\n",
"      <td>A</td>\n",
"      <td>4</td>\n",
"    </tr>\n",
"  </tbody>\n",
"</table>"
],
"text/plain": [
"<IPython.core.display.HTML object>"
]
},
"execution_count": 52,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"HTML(originals.head(10).to_html())"
]
},
{
"cell_type": "markdown",
"metadata": {
"hideCode": false,
"hidePrompt": false
},
"source": [
"### Saving the data"
]
},
{
"cell_type": "code",
"execution_count": 53,
"metadata": {
"collapsed": false,
"hideCode": false,
"hidePrompt": false
},
"outputs": [],
"source": [
"# Prior years' saves, kept commented out for reference:\n",
"#originals_data_file = path.join(data_dir, 'A2Z.csv')\n",
"#originals.to_csv(originals_data_file, index=False)\n",
"#seventies_data_file = path.join(data_dir, '70sA2Z.csv')\n",
"#seventies.to_csv(seventies_data_file, index=False)\n",
"#eighties['Year'] = eighties['Year'].fillna(value=0).astype(int)\n",
"#eighties_data_file = path.join(data_dir, '80sA2Z.csv')\n",
"#eighties.to_csv(eighties_data_file, index=False, encoding='utf8')\n",
"#eighties_leftovers['Year'] = eighties_leftovers['Year'].fillna(value=0).astype(int)\n",
"#eighties_leftovers_data_file = path.join(data_dir, '80sLeftovers.csv')\n",
"#eighties_leftovers.to_csv(eighties_leftovers_data_file, index=False, encoding='utf8')\n",
"# This year: fill missing years with 0 so the column stays integer, then save.\n",
"xpn2020['Year'] = xpn2020['Year'].fillna(value=0).astype(int)\n",
"xpn2020_data_file = path.join(data_dir, 'xpn2020.csv')\n",
"xpn2020.to_csv(xpn2020_data_file, index=False, encoding='utf8')\n",
"xpn2020_onsies['Year'] = xpn2020_onsies['Year'].fillna(value=0).astype(int)\n",
"xpn2020_onsies_data_file = path.join(data_dir, 'xpn2020_onsies.csv')\n",
"xpn2020_onsies.to_csv(xpn2020_onsies_data_file, index=False, encoding='utf8')"
]
}
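,
{
"cell_type": "markdown",
"metadata": {
"hideCode": false,
"hidePrompt": false
},
"source": [
"#### Sanity check\n",
"\n",
"A minimal sketch, not part of the original pipeline: read the freshly written `xpn2020.csv` back in and confirm the round trip kept every row and column. Assumes `xpn2020` and `xpn2020_data_file` from the cell above."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false,
"hideCode": false,
"hidePrompt": false
},
"outputs": [],
"source": [
"# Sketch of a round-trip check (assumes the save cell above just ran):\n",
"# reload the CSV and verify shape and column names survived.\n",
"roundtrip = pd.read_csv(xpn2020_data_file, encoding='utf8')\n",
"assert len(roundtrip) == len(xpn2020)\n",
"assert list(roundtrip.columns) == list(xpn2020.columns)"
]
}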
],
"metadata": {
"hide_code_all_hidden": false,
"kernelspec": {
"display_name": "Python 2",
"language": "python",
"name": "python2"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 2
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython2",
"version": "2.7.12"
}
},
"nbformat": 4,
"nbformat_minor": 2
}