{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Possible Solution: Build A Pipeline\n",
    "\n",
    "- Combine Your Knowledge of the Website, `requests` and `bs4`\n",
    "- Automate Your Scraping Process Across Multiple Pages\n",
    "- Generalize Your Code For Varying Searches\n",
    "- Target & Save Specific Information You Want"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Your Tasks:\n",
    "\n",
    "- Scrape the first 100 available search results\n",
    "- Generalize your code to allow searching for different locations/jobs\n",
    "- Pick out information about the URL, job title, and job location\n",
    "- Save the results to a file"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import requests\n",
    "from bs4 import BeautifulSoup"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "---\n",
    "\n",
    "### Part 1: Inspect\n",
    "\n",
    "- How do the URLs change when you navigate to the next results page?\n",
    "- How do the URLs change when you use a different location and/or job title search?\n",
    "- Which HTML elements contain the link, title, and location of each job?"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**Next Page**: The `start=` parameter gets added and incremented by the value of `10` for each additional page. This is because each results page displays 10 job results.\n",
    "\n",
    "E.g.: <https://www.indeed.com/jobs?q=python&l=new+york&start=20>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**Different Location/Job Title**: The values for the query parameters `q` (for job title) and `l` (for location) change accordingly."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "page = requests.get('https://www.indeed.com/jobs?q=python&l=new+york')"
   ]
  },
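  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a side note, `requests` can also assemble the query string for you when you pass the parameters as a dictionary. A minimal sketch of the same request:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Equivalent request, letting requests encode the query parameters itself\n",
    "page = requests.get(\n",
    "    'https://www.indeed.com/jobs',\n",
    "    params={'q': 'python', 'l': 'new york', 'start': 0},\n",
    ")\n",
    "page.url  # shows the URL that requests built"
   ]
  },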
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**HTML Elements**: A single job posting lives inside of a `div` element with the class name `result`. Inside there are other elements. You can find the specific info you're looking for here:\n",
    "\n",
    "- **Link**: In the `href` attribute of the `<a>` Element that is a child of the title `<h2>` element\n",
    "- **Title**: The text of the link in the `<h2>` element which also contains the link URL mentioned above\n",
    "- **Location**: A `<span>` element with the telling class name `location`"
   ]
  },
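  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick preview of the extraction that Part 3 builds out, the fields of the first search result can be picked out like this (assuming the page structure described above):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "soup = BeautifulSoup(page.content, 'html.parser')\n",
    "first_job = soup.find('div', class_='result')  # first posting on the page\n",
    "title_link = first_job.find('h2').find('a')\n",
    "title_link.text.strip(), title_link['href'], first_job.find(class_='location').text"
   ]
  },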
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "---\n",
    "\n",
    "### Part 2: Scrape\n",
    "\n",
    "- Build the code to fetch the first 100 search results. This means you will need to automatically navigate to multiple results pages\n",
    "- Write functions that allow you to specify the job title, location, and amount of results as arguments"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "page_2 = requests.get('https://www.indeed.com/jobs?q=python&l=new+york&start=20')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Every 10 results means you're on a new page. Let's make that an argument to a function:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def get_jobs(page=1):\n",
    "    \"\"\"Fetches the HTML from a search for Python jobs in New York on Indeed.com from a specified page.\"\"\"\n",
    "    base_url_indeed = 'https://www.indeed.com/jobs?q=python&l=new+york&start='\n",
    "    results_start_num = page*10\n",
    "    url = f'{base_url_indeed}{results_start_num}'\n",
    "    page = requests.get(url)\n",
    "    return page"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "get_jobs(3)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "get_jobs(4)"
   ]
  },
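  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Since the function returns the `requests` response object, you can confirm that a request succeeded and inspect the URL it was sent to:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "response = get_jobs(3)\n",
    "response.status_code, response.url"
   ]
  },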
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Great! Let's customize this function some more to allow for different search queries and search locations:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def get_jobs(title, location, page=1):\n",
    "    \"\"\"Fetches the HTML from a search for Python jobs in New York on Indeed.com from a specified page.\"\"\"\n",
    "    loc = location.replace(' ', '+')  # for multi-part locations\n",
    "    base_url_indeed = f'https://www.indeed.com/jobs?q={title}&l={loc}&start='\n",
    "    results_start_num = page*10\n",
    "    url = f'{base_url_indeed}{results_start_num}'\n",
    "    page = requests.get(url)\n",
    "    return page"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "get_jobs('python', 'new york', 3)"
   ]
  },
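  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A note on robustness: instead of hand-replacing spaces, the standard library's `urllib.parse.urlencode()` can encode arbitrary search terms safely. A sketch of how the URL could be built that way:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from urllib.parse import urlencode\n",
    "\n",
    "# urlencode escapes spaces and special characters in every parameter\n",
    "query = urlencode({'q': 'data scientist', 'l': 'new york', 'start': 0})\n",
    "f'https://www.indeed.com/jobs?{query}'"
   ]
  },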
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "With a generalized way of scraping the page done, you can move on to picking out the information you need by parsing the HTML."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "---\n",
    "\n",
    "### Part 3: Parse\n",
    "\n",
    "- Sieve through your HTML soup to pick out only the job title, link, and location\n",
    "- Format the results in a readable format (e.g. JSON)\n",
    "- Save the results to a file"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Let's start by getting access to all interesting search results for one page:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "site = get_jobs('python', 'new york')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "soup = BeautifulSoup(site.content)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "results = soup.find(id='resultsCol')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "jobs = results.find_all('div', class_='result')"
   ]
  },
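  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A quick sanity check: since each results page displays 10 postings, `jobs` should contain 10 elements:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "len(jobs)"
   ]
  },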
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**Job Titles** can be found like this:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "job_titles = [job.find('h2').find('a').text.strip() for job in jobs]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "job_titles"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**Link URLs** need to be assembled, and can be found like this:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "base_url = 'https://www.indeed.com'"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "job_links = [base_url + job.find('h2').find('a')['href'] for job in jobs]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "job_links"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**Locations** can be picked out of the soup by their class name:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "job_locations = [job.find(class_='location').text for job in jobs]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "job_locations"
   ]
  },
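  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Keep in mind that a posting without a location element would make `job.find(class_='location')` return `None`, so the `.text` access would raise an `AttributeError`. A defensive variant that falls back to an empty string:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Fall back to an empty string when a posting has no location element\n",
    "job_locations = [\n",
    "    job.find(class_='location').text if job.find(class_='location') else ''\n",
    "    for job in jobs\n",
    "]"
   ]
  },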
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Let's assemble all this info into a function, so you can pick out the pieces and save them to a useful data structure:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def parse_info(soup):\n",
    "    \"\"\"\n",
    "    Parses HTML containing job postings and picks out job title, location, and link.\n",
    "    \n",
    "    args:\n",
    "    soup (BeautifulSoup object): A parsed bs4.BeautifulSoup object of a search results page on indeed.com\n",
    "    \n",
    "    returns:\n",
    "    job_list (list): A list of dictionaries containing the title, link, and location of each job posting\n",
    "    \"\"\"\n",
    "    results = soup.find(id='resultsCol')\n",
    "    jobs = results.find_all('div', class_='result')\n",
    "    base_url = 'https://www.indeed.com'\n",
    "\n",
    "    job_list = list()\n",
    "    for job in jobs:\n",
    "        title = job.find('h2').find('a').text.strip()\n",
    "        link = base_url + job.find('h2').find('a')['href']\n",
    "        location = job.find(class_='location').text\n",
    "        job_list.append({'title': title, 'link': link, 'location': location})\n",
    "\n",
    "    return job_list"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Let's give it a try:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "page = get_jobs('python', 'new_york')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "soup = BeautifulSoup(page.content)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "results = parse_info(soup)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "results"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "And let's add a final step of generalization:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def get_job_listings(title, location, amount=100):\n",
    "    results = list()\n",
    "    for page in range(amount//10):\n",
    "        site = get_jobs(title, location, page=page)\n",
    "        soup = BeautifulSoup(site.content)\n",
    "        page_results = parse_info(soup)\n",
    "        results += page_results\n",
    "    return results"
   ]
  },
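  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "When fetching many pages in a row, it's considerate (and less likely to get you blocked) to pause briefly between requests. Here's a sketch of such a variant; the function name and the `delay` parameter are just suggestions:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import time\n",
    "\n",
    "def get_job_listings_polite(title, location, amount=100, delay=1):\n",
    "    \"\"\"Like get_job_listings(), but pause `delay` seconds between requests.\"\"\"\n",
    "    results = list()\n",
    "    for page in range(1, amount // 10 + 1):\n",
    "        site = get_jobs(title, location, page=page)\n",
    "        soup = BeautifulSoup(site.content, 'html.parser')\n",
    "        results += parse_info(soup)\n",
    "        time.sleep(delay)  # be polite to the server\n",
    "    return results"
   ]
  },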
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "r = get_job_listings('python', 'new york', 100)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "len(r)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "r[42]"
   ]
  },
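  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Finally, save the results to a file to complete the last task. JSON is a natural fit for a list of dictionaries (the filename is just an example):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import json\n",
    "\n",
    "with open('job_listings.json', 'w') as f:\n",
    "    json.dump(r, f, indent=2)"
   ]
  },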
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "---\n",
    "\n",
    "### Keep Expanding!\n",
    "\n",
    "Currently you are only fetching the title, link and location of the job. Change that to get also get the **company name**. Maybe you also want to know the beginning of the **text blurb** what the job is about? You could also build this script out to follow the links you gathered and fetch the individual job listing details pages for even more information.\n",
    "\n",
    "The sky is the limit, and the more you train, the better you will get at this. :)"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.8.0"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}