{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Web scraping with Python" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Author:** Ties de Kok ([Personal Website](https://www.tiesdekok.com)) \n", "**Last updated:** June 2020 \n", "**Conda Environment:** `LearnPythonForResearch` \n", "**Python version:** Python 3.7 \n", "**License:** MIT License " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Note:** Some features (like the ToC) will only work if you run it locally, use Binder, or use nbviewer by clicking this link: \n", "https://nbviewer.jupyter.org/github/TiesdeKok/LearnPythonforResearch/blob/master/4_web_scraping.ipynb" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# *Introduction*" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Depending on the website, it can be very easy or very hard to extract the information you need. \n", "\n", "Websites can be classified into roughly two categories:\n", "1. Computer-oriented webpage: API (Application Programming Interface)\n", "2. Human-oriented webpage: regular website\n", "\n", "Option 1 (an API) is designed to be approached programmatically, so extracting the data you need is usually easy. However, in many cases you don't have an API available, so you might have to resort to scraping the regular website (option 2). \n", "\n", "It is worth noting that option 2 can put a strain on the server of the website. Therefore, only resort to option 2 if there is no API available, and if you decide to scrape the regular website make sure to do so in a way that is as polite as possible!\n", "\n", "**This notebook is structured as follows:**\n", "\n", "1. Use the `requests` package to interact with a website or API\n", "2. Extract data using an API\n", "3. Extract data from a regular website using regular expressions\n", "4. Extract data from a regular website by parsing the HTML\n", "5. Extract data from Javascript heavy websites using Selenium\n", "6. Advanced webscraping using Scrapy" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Note 1:** In this notebook I will often build upon chapter 11 of 'automate the boring stuff', which is available here: \n", "https://automatetheboringstuff.com/chapter11/" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Note 2:** In this notebook I focus primarily on extracting information from webpages (i.e. `web scraping`) and very little on programming a bot to automatically traverse the web (i.e. `web crawling`)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Note 3:** I recommend reading this blog post on the legality of web scraping/crawling: \n", "https://benbernardblog.com/web-scraping-and-crawling-are-perfectly-legal-right/\n", "\n", "**2019 update:** I also recommend reading up on the \"HIQ vs. LinkedIn\" case: \n", "e.g. 
https://www.natlawreview.com/article/data-scraping-survives-least-now-key-takeaways-9th-circuit-ruling-hiq-vs-linkedin" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# *Table of Contents* <a id='toc'></a>" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "* [Requests package](#requests) \n", "* [Extract data using an API](#api)\n", "* [Extract data from a regular website using regular expressions](#ws-re) \n", "* [Extract data from a regular website by parsing the HTML](#ws-lxml)\n", "* [Extract data from Javascript heavy websites (Headless browsers / Selenium)](#selenium) " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## <span style=\"text-decoration: underline;\">Requests package</span><a id='requests'></a> [(to top)](#toc)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We will use the `requests` module. I like the description mentioned in the book 'automate the boring stuff':\n", "> The requests module lets you easily download files from the Web without having to worry about complicated issues such as network errors, connection problems, and data compression. " ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": [ "import requests" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "*Note:* If you google around on webscraping with Python you will probably also find mentions of the `urllib2` package. I highly recommend to use `requests` as it will make your life a lot easier for most tasks. " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Basics of the `requests` package" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The `requests` package takes a URL and allows you to interact with the contents. For example:" ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [], "source": [ "res = requests.get('https://automatetheboringstuff.com/files/rj.txt')" ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Project Gutenberg EBook of Romeo and Juliet, by William Shakespeare\n", "\n", "This eBook is for the use of anyone anywhere at no cost and with\n", "almost no restrictions whatsoever. You may copy it, give it away or\n", "re-use it under the terms of the Projec\n" ] } ], "source": [ "print(res.text[4:250])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The `requests` package is incredibly useful because it deals with a lot of connection related issues automatically. 
We can for example check whether the webpage returned any errors relatively easily:" ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "200" ] }, "execution_count": 6, "metadata": {}, "output_type": "execute_result" } ], "source": [ "res.status_code " ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "404" ] }, "execution_count": 7, "metadata": {}, "output_type": "execute_result" } ], "source": [ "requests.get('https://automatetheboringstuff.com/thisdoesnotexist.txt').status_code" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You can find a list of most common HTTP Status Codes here: \n", "https://www.smartlabsoftware.com/ref/http-status-codes.htm" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## <span style=\"text-decoration: underline;\">Extract data using an API</span><a id='api'></a> [(to top)](#toc)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "APIs are designed to be approached and 'read' by computers, whereas regular webpages are designed for humans not computers. \n", "\n", "An API, in a simplified sense, has two characteristics:\n", "1. A request is made using a URL that contains parameters specifying the information requested\n", "2. A response by the server in a machine-readable format. \n", "\n", "The machine-readable formats are usually either:\n", "- JSON\n", "- XML\n", "- (sometimes plain text)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Demonstration using an example" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's say, for the sake of an example, that we are interested in retrieving current and historical Bitcoin prices. \n", "\n", "After a quick Google search we find that this information is available on https://www.coindesk.com/price/.\n", "\n", "We could go about and scrape this webpage directly, but as a responsible web-scraper you look around and notice that coindesk fortunately offers an API that we can use to retrieve the information that we need. The details of the API are here:\n", "\n", "https://www.coindesk.com/api/\n", "\n", "There appear to be two API calls that we are interested in:\n", "\n", "1) We can retrieve the current bitcoin price using: https://api.coindesk.com/v1/bpi/currentprice.json \n", "2) We can retrieve historical bitcoin prices using: https://api.coindesk.com/v1/bpi/historical/close.json\n", "\n", "Clicking on either of these links will show the response of the server. If you click the first link it will look something like this:\n", "\n", "\n", "\n", "Not very readable for humans, but easily processed by a machine!\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Task 1: get the current Bitcoin price" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As discussed above, we can retrieve the current Bitcoin price by \"opening\" the following URL: \n", "https://api.coindesk.com/v1/bpi/currentprice.json\n", "\n", "Using the `requests` library we can easily \"open\" this url and retrieve the response." ] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [], "source": [ "res = requests.get('https://api.coindesk.com/v1/bpi/currentprice.json')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "An important observation is that this API returns information in the so-called `JSON` format. 
\n", "\n", "You can learn more about the JSON format here: https://www.w3schools.com/js/js_json_syntax.asp.\n", "\n", "We could, as before, return the result as plain text:" ] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "'{\"time\":{\"updated\":\"Jun 3, 2020 02:14:00 UTC\",\"updatedISO\":\"2020-06-03T02:14:00+00:00\",\"updateduk\":\"Jun 3, 2020 at 03:14 BST\"},\"disclaimer\":\"This data was produced from the CoinDesk Bitcoin Price Index (USD). Non-USD currency data converted using hourly conversion rate from openexchangerates.org\",\"chartName\":\"Bitcoin\",\"bpi\":{\"USD\":{\"code\":\"USD\",\"symbol\":\"$\",\"rate\":\"9,494.8652\",\"description\":\"United States Dollar\",\"rate_float\":9494.8652},\"GBP\":{\"code\":\"GBP\",\"symbol\":\"£\",\"rate\":\"7,558.3400\",\"description\":\"British Pound Sterling\",\"rate_float\":7558.34},\"EUR\":{\"code\":\"EUR\",\"symbol\":\"€\",\"rate\":\"8,500.6484\",\"description\":\"Euro\",\"rate_float\":8500.6484}}}'" ] }, "execution_count": 9, "metadata": {}, "output_type": "execute_result" } ], "source": [ "text_res = res.text\n", "text_res" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This is, however, not desirable: we can see the prices we are after, but we have no way of easily and reliably extracting them from this string.\n", "\n", "We can, however, achieve this by telling `requests` that the response is in the JSON format:" ] }, { "cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{'time': {'updated': 'Jun 3, 2020 02:14:00 UTC',\n", " 'updatedISO': '2020-06-03T02:14:00+00:00',\n", " 'updateduk': 'Jun 3, 2020 at 03:14 BST'},\n", " 'disclaimer': 'This data was produced from the CoinDesk Bitcoin Price Index (USD). Non-USD currency data converted using hourly conversion rate from openexchangerates.org',\n", " 'chartName': 'Bitcoin',\n", " 'bpi': {'USD': {'code': 'USD',\n", " 'symbol': '$',\n", " 'rate': '9,494.8652',\n", " 'description': 'United States Dollar',\n", " 'rate_float': 9494.8652},\n", " 'GBP': {'code': 'GBP',\n", " 'symbol': '£',\n", " 'rate': '7,558.3400',\n", " 'description': 'British Pound Sterling',\n", " 'rate_float': 7558.34},\n", " 'EUR': {'code': 'EUR',\n", " 'symbol': '€',\n", " 'rate': '8,500.6484',\n", " 'description': 'Euro',\n", " 'rate_float': 8500.6484}}}" ] }, "execution_count": 10, "metadata": {}, "output_type": "execute_result" } ], "source": [ "json_res = res.json()\n", "json_res" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "All that is left now is to extract the Bitcoin prices. This is now easy because `res.json()` returns a Python dictionary."
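] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For example, because `res.json()` gives us a plain dictionary, we can loop over all returned currencies; a small sketch that only relies on the `bpi` and `rate_float` keys visible in the output above:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Sketch: print the float rate for every currency in the response\n", "for code, details in json_res['bpi'].items():\n", "    print(code, details['rate_float'])"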
] }, { "cell_type": "code", "execution_count": 11, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{'code': 'EUR',\n", " 'symbol': '€',\n", " 'rate': '8,500.6484',\n", " 'description': 'Euro',\n", " 'rate_float': 8500.6484}" ] }, "execution_count": 11, "metadata": {}, "output_type": "execute_result" } ], "source": [ "json_res['bpi']['EUR']" ] }, { "cell_type": "code", "execution_count": 12, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "'8,500.6484'" ] }, "execution_count": 12, "metadata": {}, "output_type": "execute_result" } ], "source": [ "json_res['bpi']['EUR']['rate']" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Task 2: write a function to retrieve historical Bitcoin prices" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can retrieve historical Bitcoin prices through the following API URL: \n", "https://api.coindesk.com/v1/bpi/historical/close.json\n", "\n", "Looking at https://www.coindesk.com/api/ tells us that we can pass the following parameters to this URL: \n", "* `index` -> to specify the index\n", "* `currency` -> to specify the currency \n", "* `start` -> to specify the start date of the interval\n", "* `end` -> to specify the end date of the interval \n", "\n", "We are primarily interested in the `start` and `end` parameter.\n", "\n", "As illustrated in the example, if we want to get the prices between 2013-09-01 and 2013-09-05 we would construct our URL as such:\n", "\n", "https://api.coindesk.com/v1/bpi/historical/close.json?start=2013-09-01&end=2013-09-05\n", "\n", "**But how do we do this using Python?**\n", "\n", "Fortunately, the `requests` library makes it very easy to pass parameters to a URL as illustrated below. \n", "For more info, see: http://docs.python-requests.org/en/master/user/quickstart/#passing-parameters-in-urls" ] }, { "cell_type": "code", "execution_count": 13, "metadata": {}, "outputs": [], "source": [ "API_endpoint = 'https://api.coindesk.com/v1/bpi/historical/close.json'\n", "payload = {'start' : '2013-09-01', 'end' : '2013-09-05'}" ] }, { "cell_type": "code", "execution_count": 14, "metadata": {}, "outputs": [], "source": [ "res = requests.get(API_endpoint, params=payload)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can print the resulting URL (for manual inspection for example) using `res.url`:" ] }, { "cell_type": "code", "execution_count": 15, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "https://api.coindesk.com/v1/bpi/historical/close.json?start=2013-09-01&end=2013-09-05\n" ] } ], "source": [ "print(res.url)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Again, the result is in the JSON format so we can easily process it:" ] }, { "cell_type": "code", "execution_count": 16, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{'2013-09-01': 128.2597,\n", " '2013-09-02': 127.3648,\n", " '2013-09-03': 127.5915,\n", " '2013-09-04': 120.5738,\n", " '2013-09-05': 120.5333}" ] }, "execution_count": 16, "metadata": {}, "output_type": "execute_result" } ], "source": [ "bitcoin_2013 = res.json()\n", "bitcoin_2013['bpi']" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Wrap the above into a function" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In the example above we hardcode the parameter values (the interval dates), if we want to change the dates we have to manually alter the string values. 
This is not very convenient; it is easier to wrap everything into a function:" ] }, { "cell_type": "code", "execution_count": 17, "metadata": {}, "outputs": [], "source": [ "API_endpoint = 'https://api.coindesk.com/v1/bpi/historical/close.json'\n", "\n", "def get_bitcoin_prices(start_date, end_date, API_endpoint = API_endpoint):\n", " payload = {'start' : start_date, 'end' : end_date}\n", " res = requests.get(API_endpoint, params=payload)\n", " json_res = res.json()\n", " return json_res['bpi']" ] }, { "cell_type": "code", "execution_count": 18, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{'2016-01-01': 434.463,\n", " '2016-01-02': 433.586,\n", " '2016-01-03': 430.361,\n", " '2016-01-04': 433.493,\n", " '2016-01-05': 432.253,\n", " '2016-01-06': 429.464,\n", " '2016-01-07': 458.28,\n", " '2016-01-08': 453.37,\n", " '2016-01-09': 449.143,\n", " '2016-01-10': 448.964}" ] }, "execution_count": 18, "metadata": {}, "output_type": "execute_result" } ], "source": [ "get_bitcoin_prices('2016-01-01', '2016-01-10')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## <span style=\"text-decoration: underline;\">Extract data from a regular website (i.e. webscraping)</span><a id='webscraping'></a> [(to top)](#toc)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In order to extract information from a regular webpage you first have to: \n", "1. Construct or retrieve the URL\n", "2. Retrieve the page returned by that URL and put it in memory (usually HTML)\n", "\n", "**From here you have a choice:**\n", " \n", "* Treat the HTML source as text and use regular expressions to extract the information.\n", "\n", " *Or* \n", " \n", "* Parse the HTML and use its native structure to extract the information (using `LXML` or `Requests-HTML`).\n", "\n", "I will discuss both methods below. However, **I strongly recommend going with the second option**. HTML is machine-readable by nature, which means that in 95% of cases you are better off parsing the HTML rather than trying to write complicated regular expressions. " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## <span style=\"text-decoration: underline;\">Extract data from a regular website using regular expressions</span><a id='ws-re'></a> [(to top)](#toc)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Regular expressions" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Python has a native package to deal with regular expressions; you can import it as follows:" ] }, { "cell_type": "code", "execution_count": 22, "metadata": {}, "outputs": [], "source": [ "import re" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Demonstration" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "*Reminder:* you usually only want to use regular expressions for something quick-and-dirty; using LXML is nearly always a better solution!"
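] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To see the mechanics before we get to the real example: `re.findall()` returns a list with, for every match, only the part captured by the parentheses (the capturing group). A minimal sketch on a made-up snippet of HTML:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Sketch: the (.*?) group captures just the text between the two tags\n", "example_html = '<div class=\"number\" title=\"\">1,234</div>'\n", "re.findall('<div class=\"number\" title=\"\">(.*?)</div>', example_html)"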
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's say our goal is to get the number of abstract views for a particular paper on SSRN: \n", "For example this one: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1968579" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Step 1: download the source of the page" ] }, { "cell_type": "code", "execution_count": 31, "metadata": {}, "outputs": [], "source": [ "ssrn_url = r'https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1968579'\n", "page_source = requests.get(ssrn_url, headers={'User-Agent' : 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.61 Safari/537.36'})" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "*Note:* Some websites will block any visits from a client without a user agent, this is why we add the user agent above." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Step 2: convert source to a string (i.e. text)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "*Note:* by doing so we essentially ignore the inherent structure of an HTML file, we just treat it as a very large string." ] }, { "cell_type": "code", "execution_count": 34, "metadata": {}, "outputs": [], "source": [ "source_text = page_source.text" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Step 3: use a regular expression to extract the number of views" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Using the Chrome browser we can, for example, right click on the number and select 'inspect' to bring up this screen:" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Based on this we can construct a regular expression to capture the value that we want. \n", "Note, we have to account for any spaces, tabs, and newlines otherwise the regular expression will not capture what we want, this can be very tricky. 
\n", "\n", "Once we have identified the appropriate regular expression (it can help to use tools like www.pythex.org) we can use `re.findall()`:" ] }, { "cell_type": "code", "execution_count": 35, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[' 434,321']" ] }, "execution_count": 35, "metadata": {}, "output_type": "execute_result" } ], "source": [ "found_values = re.findall('Abstract Views</div>\\r\\n\\t\\t\\t\\t<div class=\"number\" title=\"\">(.*?)</div>', source_text)\n", "found_values" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "After cleaning the value up a bit (removing the spaces and the comma) we can convert it to an integer so that Python handles it as a number:" ] }, { "cell_type": "code", "execution_count": 36, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "434321" ] }, "execution_count": 36, "metadata": {}, "output_type": "execute_result" } ], "source": [ "int(found_values[0].strip().replace(',', ''))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**As you can see, regular expressions are rarely convenient for web scraping and should be avoided if possible!**" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## <span style=\"text-decoration: underline;\">Extract data from a regular website by parsing the HTML</span><a id='ws-lxml'></a> [(to top)](#toc)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Note:** I will show both the higher-level `Requests-HTML` and the lower-level `LXML`." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In the example above we treat an HTML page as plain text and ignore the inherent format of HTML. \n", "A better alternative is to utilize the inherent structure of HTML to extract the information that we need. " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "A quick refresher on HTML from 'automate the boring stuff':\n", "\n", "> In case it’s been a while since you’ve looked at any HTML, here’s a quick overview of the basics. An HTML file is a plaintext file with the .html file extension. The text in these files is surrounded by tags, which are words enclosed in angle brackets. The tags tell the browser how to format the web page. A starting tag and closing tag can enclose some text to form an element. The text (or inner HTML) is the content between the starting and closing tags. For example, the following HTML will display Hello world! in the browser, with Hello in bold:\n", "\n", " <strong>Hello</strong> world!" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You can view the HTML source by right-clicking a page and selecting `view page source`:\n", "" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Demonstration" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Requests-HTML** \n", "\n", " `Requests-HTML` is a convenient library that extends the functionality of `requests` by adding HTML parsing. \n", "\n", "You can find the documentation here: https://github.com/kennethreitz/requests-html\n", "\n", "\n", "**LXML**\n", "\n", "`LXML` is a powerful XML parser that is used as a parser by many packages. However, you can also use it directly in combination with the `requests` package. 
\n", " \n", "You can find the documentation for `LXML` here: http://lxml.de/\n", "\n", "*Note:* an alternative to LXML is BeautifulSoup, but nowadays (in my experience) it is better to use LXML.\n", "\n", "---\n", "\n", "\n" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [], "source": [ "import requests_html\n", "import lxml.html" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Create a session object for `requests_html`:" ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [], "source": [ "session = requests_html.HTMLSession()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Example introduction" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's say we want to extract information (title, description, speakers) about talks from the JupyterCon conference. \n", "\n", "We have identified that this information is available at this URL: \n", "https://conferences.oreilly.com/jupyter/jup-ny/public/schedule/proceedings\n", "\n", "**NOTE: I would normally not recommend scraping these types of websites. However, JupyterCon is awesome, so my hope is that you encounter some interesting talks while looking through the proceedings! :)**" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## <span style=\"text-decoration: underline;\">Using `Requests-HTML`:</span>" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Part 1 + Part 2: Load the source from the URL + parse HTML" ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [], "source": [ "JC_URL = 'https://conferences.oreilly.com/jupyter/jup-ny/public/schedule/proceedings'\n", "res = session.get(JC_URL)" ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "<class 'requests_html.HTMLResponse'>\n" ] } ], "source": [ "print(type(res))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Note: as the name implies, `requests-html` combines `requests` with an HTML parser (so we don't need to use `requests` separately)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## <span style=\"text-decoration: underline;\">Using `Requests` + `LXML`:</span>" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Part 1: Load the source from the URL" ] }, { "cell_type": "code", "execution_count": 43, "metadata": {}, "outputs": [], "source": [ "JC_URL = 'https://conferences.oreilly.com/jupyter/jup-ny/public/schedule/proceedings'\n", "jc_source = requests.get(JC_URL)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Part 2: Process the result into an LXML object" ] }, { "cell_type": "code", "execution_count": 44, "metadata": {}, "outputs": [], "source": [ "tree = lxml.html.fromstring(jc_source.text)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The function `lxml.html.fromstring(res.text)` converts the raw HTML (i.e. 
the string representation) into an `HtmlElement` object that we can structurally search:" ] }, { "cell_type": "code", "execution_count": 45, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "lxml.html.HtmlElement" ] }, "execution_count": 45, "metadata": {}, "output_type": "execute_result" } ], "source": [ "type(tree)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Part 3: extract the information from the HTML structure" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The beauty of an `HtmlElement` is that we can use the structure of the HTML document to our advantage to extract specific parts of the website. \n", "\n", "There are two ways to go about this: \n", "1. Using a `css selector`\n", "2. Using an `XPath`\n", "\n", "I recommend only using `css selectors` as they increasingly tend to be the superior option in nearly all cases. " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### *What is a `css selector`?*\n", "\n", "CSS is a language that is used to define the style of an HTML document. \n", "It does this by attaching some piece of styling (e.g. \"make text bold\") to a particular HTML object. \n", "This attaching is achieved by defining patterns that select the appropriate HTML elements: these patterns are called `CSS selectors`.\n", "\n", "To illustrate, let's say that we have this piece of HTML:\n", "\n", " <html>\n", " <body>\n", "\n", " <h1>Python is great!</h1>\n", "\n", " </body>\n", " </html>\n", "\n", "We can change the color of the title text to blue through this piece of CSS code:\n", "\n", " h1 {\n", " color: Blue;\n", " }\n", "\n", "The `h1` is the `css selector` and it essentially tells the browser that everything between `<h1> </h1>` should have `color: Blue`.\n", "\n", "Now, the cool thing is that we can also use these `css selectors` to select the HTML elements that we want to extract! " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### *Syntax of a `css selector`*\n", "\n", "Below are the most frequent ways to select a particular HTML element:\n", "\n", "1. Use a dot to select HTML elements based on their **class**: `.classname`\n", "2. Use a hash symbol (#) to select HTML elements based on their **id**: `#idname`\n", "3. Directly put the name of an element to select HTML elements based on the **element** type: `p`, `span`, `h1` \n", "\n", "You can also chain multiple conditions together using `>`, `+`, and `~`. \n", "If we want to get all `<p>` elements with a `<div>` parent we can do `div > p`, for example.\n", "\n", "For a full overview I recommend checking this page: \n", "https://www.w3schools.com/cssref/css_selectors.asp" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### *A pragmatic way to generate the right `css selector`*\n", "\n", "If you are unfamiliar with programming websites then it might be hard to wrap your head around CSS selectors. \n", "Fortunately, there are tools out there that can make it very easy to generate the css selector that you need! \n", "\n", "***Option 1:*** \n", "\n", "If you want just one element you can use the built-in Chrome DevTools (Firefox has something similar). \n", "You achieve this by right clicking on the element you want and then clicking `\"inspect\"`; this should bring up the Dev console. 
\n", "\n", "If you then right click on the element you want to extract, you can have DevTools generate a `css selector`:\n", "\n", "<img src=\"https://i.imgur.com/A4BZWL8.png\" width=\"50%\" height=\"50%\" />\n", "\n", "\n", "This will result in the following `css selector`:\n", "\n", "`#en_proceedings > div:nth-child(1) > div.en_session_title > a`\n", "\n", "***Option 2:***\n", "\n", "The above can be limiting if you want to select multiple elements. \n", "Another option that makes this easier is to use an awesome Chrome extension called `SelectorGadget`. \n", "\n", "You can install it here: \n", "https://chrome.google.com/webstore/detail/selectorgadget/mhjhnkcfbdhnjickkkdbjoemdmbfginb\n", "\n", "\n", "There is more information available here as well: \n", "http://selectorgadget.com/\n", "\n", "With this extension you can simply highlight what you do / do not want to select and it will generate the `css selector` that you need. For example, if we want all the titles:\n", "\n", "<img src=\"https://i.imgur.com/iq4X335.png\" width=\"60%\" height=\"60%\" />\n", "\n", "\n", "This yields the following `css selector`: \n", "\n", "`'.en_session_title a'`\n", "\n", "\n", "*Note:* The number between brackets after 'Clear' indicates the number of elements selected." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## <span style=\"text-decoration: underline;\">CSS Selectors with `Requests-HTML`:</span>" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Generate a list of all titles" ] }, { "cell_type": "code", "execution_count": 46, "metadata": {}, "outputs": [], "source": [ "title_elements = res.html.find('.en_session_title a')" ] }, { "cell_type": "code", "execution_count": 47, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "48" ] }, "execution_count": 47, "metadata": {}, "output_type": "execute_result" } ], "source": [ "len(title_elements)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Get text of first element:" ] }, { "cell_type": "code", "execution_count": 48, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "'Containerizing notebooks for serverless execution (sponsored by AWS)'" ] }, "execution_count": 48, "metadata": {}, "output_type": "execute_result" } ], "source": [ "title_elements[0].text" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "*Note:* if you are only interested in the first (or only) object you can add `first=True` to `res.html.find()` and it will only return one result." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Get text of all elements:" ] }, { "cell_type": "code", "execution_count": 49, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "['Containerizing notebooks for serverless execution (sponsored by AWS)',\n", " 'Advanced data science, part 2: Five ways to handle missing data in Jupyter notebooks',\n", " 'All the cool kids are doing it; maybe we should too? 
Jupyter, gravitational waves, and the LIGO and Virgo Scientific Collaborations']" ] }, "execution_count": 49, "metadata": {}, "output_type": "execute_result" } ], "source": [ "[element.text for element in title_elements][:3]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Extract the hyperlink that leads to the talk page" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Above we extract the text, but we can also add `.attrs` to access any attributes of the element:" ] }, { "cell_type": "code", "execution_count": 50, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{'href': '/jupyter/jup-ny/public/schedule/detail/71980'}" ] }, "execution_count": 50, "metadata": {}, "output_type": "execute_result" } ], "source": [ "title_elements[0].attrs" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As you can see, there is a `href` attribute with the url. \n", "So we can create a list with both the text and the url:" ] }, { "cell_type": "code", "execution_count": 51, "metadata": {}, "outputs": [], "source": [ "talks = []\n", "for element in title_elements:\n", " talks.append((element.text, \n", " element.attrs['href']))" ] }, { "cell_type": "code", "execution_count": 52, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[('Containerizing notebooks for serverless execution (sponsored by AWS)',\n", " '/jupyter/jup-ny/public/schedule/detail/71980'),\n", " ('Advanced data science, part 2: Five ways to handle missing data in Jupyter notebooks',\n", " '/jupyter/jup-ny/public/schedule/detail/68407'),\n", " ('All the cool kids are doing it; maybe we should too? Jupyter, gravitational waves, and the LIGO and Virgo Scientific Collaborations',\n", " '/jupyter/jup-ny/public/schedule/detail/71345')]" ] }, "execution_count": 52, "metadata": {}, "output_type": "execute_result" } ], "source": [ "talks[:3]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Extract the title, hyperlink, description, and authors for each talk" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can use the above approach and do also get a list of all the authors and the descriptions. \n", "It, however, becomes a little bit tricky to combine everything given that one talk might have multiple authors. \n", "\n", "To deal with this (common) problem it is best to loop over each talk element separately and only then extract the information for that talk, that way it is easy to keep everything linked to a specific talk. 
\n", "\n", "If we look in the Chrome DevTools element viewer, we can observe that each talk is a separate `<div>` with the `en_session` class:\n", "\n", "<img src=\"https://i.imgur.com/tuMdJV4.png\" width=\"30%\" height=\"30%\" />" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We first select all the `divs` with the `en_session` class that have a parent with `en_proceedings` as id:" ] }, { "cell_type": "code", "execution_count": 53, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[<Element 'div' class=('en_session', 'en_clearfix')>,\n", " <Element 'div' class=('en_session', 'en_clearfix')>,\n", " <Element 'div' class=('en_session', 'en_clearfix')>]" ] }, "execution_count": 53, "metadata": {}, "output_type": "execute_result" } ], "source": [ "talk_elements = res.html.find('#en_proceedings > .en_session')\n", "talk_elements[:3]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we can loop over each of these elements and extract the information we want:" ] }, { "cell_type": "code", "execution_count": 54, "metadata": {}, "outputs": [], "source": [ "talk_details = []\n", "for talk in talk_elements:\n", " title = talk.find('.en_session_title a', first=True).text\n", " href = talk.find('.en_session_title a', first=True).attrs['href']\n", " description = talk.find('.en_session_description', first=True).text.strip()\n", " speakers = [speaker.text for speaker in talk.find('.speaker_names > a')]\n", " talk_details.append((title, href, description, speakers))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For the sake of the example, below a prettified inspection of the data we gathered:" ] }, { "cell_type": "code", "execution_count": 56, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "The title is: Containerizing notebooks for serverless execution (sponsored by AWS)\n", "Speakers: ['Kevin McCormick', 'Vladimir Zhukov'] \n", "\n", "Description: \n", " Kevin McCormick explains the story of two approaches which were used internally at AWS to accelerate new ML algorithm development, and easily package Jupyter notebooks for scheduled execution, by creating custom Jupyter kernels that automatically create Docker containers, and dispatch them to either a distributed training service or job execution environment. \n", "\n", "For details see: https://conferences.oreilly.com//jupyter/jup-ny/public/schedule/detail/71980\n", "---------------------------------------------------------------------------------------------------- \n", "\n", "The title is: Advanced data science, part 2: Five ways to handle missing data in Jupyter notebooks\n", "Speakers: ['Matt Brems'] \n", "\n", "Description: \n", " Missing data plagues nearly every data science problem. Often, people just drop or ignore missing data. However, this usually ends up with bad results. Matt Brems explains how bad dropping or ignoring missing data can be and teaches you how to handle missing data the right way by leveraging Jupyter notebooks to properly reweight or impute your data. \n", "\n", "For details see: https://conferences.oreilly.com//jupyter/jup-ny/public/schedule/detail/68407\n", "---------------------------------------------------------------------------------------------------- \n", "\n", "The title is: All the cool kids are doing it; maybe we should too? 
Jupyter, gravitational waves, and the LIGO and Virgo Scientific Collaborations\n", "Speakers: ['Will M Farr'] \n", "\n", "Description: \n", " Will Farr shares examples of Jupyter use within the LIGO and Virgo Scientific Collaborations and offers lessons about the (many) advantages and (few) disadvantages of Jupyter for large, global scientific collaborations. Along the way, Will speculates on Jupyter's future role in gravitational wave astronomy. \n", "\n", "For details see: https://conferences.oreilly.com//jupyter/jup-ny/public/schedule/detail/71345\n", "---------------------------------------------------------------------------------------------------- \n", "\n" ] } ], "source": [ "for title, href, description, speakers in talk_details[:3]:\n", " print('The title is: ', title)\n", " print('Speakers: ', speakers, '\\n')\n", " print('Description: \\n', description, '\\n')\n", " print('For details see: ', 'https://conferences.oreilly.com/' + href)\n", " print('-'*100, '\\n')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## <span style=\"text-decoration: underline;\">CSS Selectors with `LXML`:</span>" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Note:** In order to use css selectors with LXML you might have to install `cssselect` by running this in your command prompt: \n", "`pip install cssselect`" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Generate a list of all titles:" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can use the css selector that we generated earlier with the SelectorGadget extension:" ] }, { "cell_type": "code", "execution_count": 57, "metadata": {}, "outputs": [], "source": [ "title_elements = tree.cssselect('.en_session_title a')" ] }, { "cell_type": "code", "execution_count": 58, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "48" ] }, "execution_count": 58, "metadata": {}, "output_type": "execute_result" } ], "source": [ "len(title_elements)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If we select the first title element we see that it doesn't return the text:" ] }, { "cell_type": "code", "execution_count": 59, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "<Element a at 0x1e034bf23b8>" ] }, "execution_count": 59, "metadata": {}, "output_type": "execute_result" } ], "source": [ "title_elements[0]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In order to extract the text we have to add `.text` to the end:" ] }, { "cell_type": "code", "execution_count": 60, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "' Containerizing notebooks for serverless execution (sponsored by AWS)'" ] }, "execution_count": 60, "metadata": {}, "output_type": "execute_result" } ], "source": [ "title_elements[0].text" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can do this for all titles to get a list with all the title texts:" ] }, { "cell_type": "code", "execution_count": 61, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[' Containerizing notebooks for serverless execution (sponsored by AWS)',\n", " 'Advanced data science, part 2: Five ways to handle missing data in Jupyter notebooks',\n", " 'All the cool kids are doing it; maybe we should too? 
Jupyter, gravitational waves, and the LIGO and Virgo Scientific Collaborations']" ] }, "execution_count": 61, "metadata": {}, "output_type": "execute_result" } ], "source": [ "title_texts = [x.text for x in title_elements]\n", "title_texts[:3]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Extract the hyperlink that leads to the talk page" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Above we extract the text, but we can also add `.attrib` to access any attributes of the element:" ] }, { "cell_type": "code", "execution_count": 62, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{'href': '/jupyter/jup-ny/public/schedule/detail/71980'}" ] }, "execution_count": 62, "metadata": {}, "output_type": "execute_result" } ], "source": [ "title_elements[0].attrib" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As you can see, there is a `href` attribute with the url. \n", "So we can create a list with both the text and the url:" ] }, { "cell_type": "code", "execution_count": 63, "metadata": {}, "outputs": [], "source": [ "talks = []\n", "for element in title_elements:\n", " talks.append((element.text, \n", " element.attrib['href']))" ] }, { "cell_type": "code", "execution_count": 64, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[(' Containerizing notebooks for serverless execution (sponsored by AWS)',\n", " '/jupyter/jup-ny/public/schedule/detail/71980'),\n", " ('Advanced data science, part 2: Five ways to handle missing data in Jupyter notebooks',\n", " '/jupyter/jup-ny/public/schedule/detail/68407'),\n", " ('All the cool kids are doing it; maybe we should too? Jupyter, gravitational waves, and the LIGO and Virgo Scientific Collaborations',\n", " '/jupyter/jup-ny/public/schedule/detail/71345')]" ] }, "execution_count": 64, "metadata": {}, "output_type": "execute_result" } ], "source": [ "talks[:3]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Extract the title, hyperlink, description, and authors for each talk" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can use the above approach and do also get a list of all the authors and the descriptions. \n", "It, however, becomes a little bit tricky to combine everything given that one talk might have multiple authors. \n", "\n", "To deal with this (common) problem it is best to loop over each talk element separately and only then extract the information for that talk, that way it is easy to keep everything linked to a specific talk. 
\n", "\n", "If we look in the Chrome DevTools element viewer, we can observe that each talk is a separate `<div>` with the `en_session` class:\n", "\n", "<img src=\"https://i.imgur.com/tuMdJV4.png\" width=\"30%\" height=\"30%\" />" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We first select all the `divs` with the `en_session` class that have a parent with `en_proceedings` as id:" ] }, { "cell_type": "code", "execution_count": 65, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[<Element div at 0x1e037dfbe58>,\n", " <Element div at 0x1e037dfbea8>,\n", " <Element div at 0x1e037de8db8>]" ] }, "execution_count": 65, "metadata": {}, "output_type": "execute_result" } ], "source": [ "talk_elements = tree.cssselect('#en_proceedings > .en_session')\n", "talk_elements[:3]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we can loop over each of these elements and extract the information we want:" ] }, { "cell_type": "code", "execution_count": 66, "metadata": {}, "outputs": [], "source": [ "talk_details = []\n", "for talk in talk_elements:\n", " title = talk.cssselect('.en_session_title a')[0].text\n", " href = talk.cssselect('.en_session_title a')[0].attrib['href']\n", " description = talk.cssselect('.en_session_description')[0].text.strip()\n", " speakers = [speaker.text for speaker in talk.cssselect('.speaker_names > a')]\n", " talk_details.append((title, href, description, speakers))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For the sake of the example, below a prettified inspection of the data we gathered:" ] }, { "cell_type": "code", "execution_count": 68, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "The title is: Containerizing notebooks for serverless execution (sponsored by AWS)\n", "Speakers: ['Kevin McCormick', 'Vladimir Zhukov'] \n", "\n", "Description: \n", " Kevin McCormick explains the story of two approaches which were used internally at AWS to accelerate new ML algorithm development, and easily package Jupyter notebooks for scheduled execution, by creating custom Jupyter kernels that automatically create Docker containers, and dispatch them to either a distributed training service or job execution environment. \n", "\n", "For details see: https://conferences.oreilly.com//jupyter/jup-ny/public/schedule/detail/71980\n", "---------------------------------------------------------------------------------------------------- \n", "\n", "The title is: Advanced data science, part 2: Five ways to handle missing data in Jupyter notebooks\n", "Speakers: ['Matt Brems'] \n", "\n", "Description: \n", " Missing data plagues nearly every data science problem. Often, people just drop or ignore missing data. However, this usually ends up with bad results. Matt Brems explains how bad dropping or ignoring missing data can be and teaches you how to handle missing data the right way by leveraging Jupyter notebooks to properly reweight or impute your data. \n", "\n", "For details see: https://conferences.oreilly.com//jupyter/jup-ny/public/schedule/detail/68407\n", "---------------------------------------------------------------------------------------------------- \n", "\n", "The title is: All the cool kids are doing it; maybe we should too? 
Jupyter, gravitational waves, and the LIGO and Virgo Scientific Collaborations\n", "Speakers: ['Will M Farr'] \n", "\n", "Description: \n", " Will Farr shares examples of Jupyter use within the LIGO and Virgo Scientific Collaborations and offers lessons about the (many) advantages and (few) disadvantages of Jupyter for large, global scientific collaborations. Along the way, Will speculates on Jupyter's future role in gravitational wave astronomy. \n", "\n", "For details see: https://conferences.oreilly.com//jupyter/jup-ny/public/schedule/detail/71345\n", "---------------------------------------------------------------------------------------------------- \n", "\n" ] } ], "source": [ "for title, href, description, speakers in talk_details[:3]:\n", " print('The title is: ', title)\n", " print('Speakers: ', speakers, '\\n')\n", " print('Description: \\n', description, '\\n')\n", " print('For details see: ', 'https://conferences.oreilly.com/' + href)\n", " print('-'*100, '\\n')\n", " " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## <span style=\"text-decoration: underline;\">Extract data from Javascript heavy websites (Headless browsers / Selenium)</span><a id='selenium'></a> [(to top)](#toc)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "A lot of websites nowadays use Javascript elements that are difficult (or impossible) to crawl using `requests`.\n", "\n", "In these scenarios we can use an alternative method where we have Python interact with a browser that is capable of handling Javascript elements. \n", "\n", "There are essentially two ways to do this:\n", "\n", "1. Use a so-called `headless automated browsing` package that runs in the background (you don't see the browser).\n", "2. Use the `Selenium Webdriver` to control a browser like Chrome (you do see the browser)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Headless automated browsing" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The goal of headless browser automation is to interact with a browser that is in the background (i.e. has no user interface). \n", "They essentially render a website the same way a normal browser would, but they are more lightweight due to not having to spend resources on the user interface. \n", "\n", "There are many packages available: https://github.com/dhamaniasad/HeadlessBrowsers \n", "\n", "**The easiest solution is to use the `requests-html` package with `r.html.render()`, see here: [requests-html: javascript support](https://github.com/kennethreitz/requests-html#javascript-support)**\n", "\n", "Alternatives:\n", "\n", "1. Ghost.py (http://jeanphix.me/Ghost.py/)\n", "2. Dryscrape (https://dryscrape.readthedocs.io/en/latest/)\n", "3. Splinter (http://splinter.readthedocs.io/en/latest/index.html?highlight=headless)\n", "\n", "Setting up headless browsers can be tricky and they can also be hard to debug (given that they run in the background)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Example using `requests-html`" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "*Note:* if you get an error you might have to run `pyppeteer-install` in your terminal to install Chromium ." 
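] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Inside Jupyter we have to use the asynchronous session shown below (the notebook itself already runs an event loop); in a plain Python script the synchronous variant is a bit simpler. A rough sketch of that script version, using the same page and selector as the example below:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Sketch (for a regular script, not for inside this notebook): synchronous rendering\n", "import requests_html\n", "session = requests_html.HTMLSession()\n", "r = session.get('https://www.tiesdekok.com')\n", "r.html.render()  # downloads Chromium on first use, then executes the Javascript\n", "print([element.text for element in r.html.find('.ul-interests > li')])"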
] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": [ "import requests_html" ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Financial Accounting\n", "Management Accounting\n", "Computer Science\n", "Data Engineering\n" ] } ], "source": [ "asession = requests_html.AsyncHTMLSession()\n", "URL = 'https://www.tiesdekok.com'\n", "r = await asession.get(URL)\n", "await r.html.arender()\n", "for element in r.html.find('.ul-interests > li'):\n", " print(element.text)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Selenium" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The `Selenium WebDriver` allows you to control a browser; this essentially automates / simulates a normal user interacting with the browser. \n", "One of the most common ways to use the `Selenium WebDriver` is through the Python language bindings. \n", "\n", "Combining `Selenium` with Python makes it very easy to automate web browser interaction, allowing you to scrape essentially every webpage imaginable!\n", "\n", "**Note: if you can use `requests` + `LXML` then this is always preferred as it is much faster compared to using Selenium.**\n", "\n", "The package page for the Selenium Python bindings is here: https://pypi.python.org/pypi/selenium\n", "\n", "Running the command below will install the `selenium` package (i.e. the Selenium Python bindings):\n", "> pip install selenium\n", "\n", "You will also need to install a driver to interface with a browser of your preference; I personally use the `ChromeDriver` to interact with the Chrome browser: \n", "https://sites.google.com/a/chromium.org/chromedriver/downloads" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Quick demonstration" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Set up selenium" ] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [], "source": [ "import selenium, os\n", "from selenium import webdriver" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Often `selenium` cannot automatically find the `ChromeDriver`, so it helps to look up the location where it is installed and point `selenium` to it. \n", "In my case it is here:" ] }, { "cell_type": "code", "execution_count": 13, "metadata": {}, "outputs": [], "source": [ "CHROME = r\"C:\\chromedriver83.exe\"\n", "os.environ[\"webdriver.chrome.driver\"] = CHROME" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Start a selenium session" ] }, { "cell_type": "code", "execution_count": 14, "metadata": {}, "outputs": [], "source": [ "driver = webdriver.Chrome(CHROME)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "After executing `driver = webdriver.Chrome(CHROME)` you should see a Chrome window pop up; this is the window that you can control with Python!"
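] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If you would rather not have a visible browser window, Chrome can also be started in headless mode; a sketch, assuming the same `CHROME` driver path as above:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Sketch: start Chrome without a visible window (headless)\n", "options = webdriver.ChromeOptions()\n", "options.add_argument('--headless')\n", "headless_driver = webdriver.Chrome(CHROME, options=options)\n", "# headless_driver.get(...) and the other calls work the same as with `driver` below"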
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Load a page" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's say we want to extract something from the Yahoo Finance page for Tesla (TSLA): \n", "https://finance.yahoo.com/quote/TSLA/" ] }, { "cell_type": "code", "execution_count": 15, "metadata": {}, "outputs": [], "source": [ "Tesla_URL = r'https://finance.yahoo.com/quote/TSLA/'" ] }, { "cell_type": "code", "execution_count": 16, "metadata": {}, "outputs": [], "source": [ "driver.get(Tesla_URL)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If you open the Chrome window you should see that it now loaded the URL we gave it." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Navigate" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You can select an element in multiple ways (the most frequent ones):\n", "\n", "> driver.find_element_by_name() \n", "> driver.find_element_by_id() \n", "> driver.find_element_by_class_name() \n", "> driver.find_element_by_css_selector() \n", "> driver.find_element_by_tag_name() \n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's say we want to extract some values from the \"earnings\" interactive figure on the right side:\n", "\n", "<img src=\"https://i.imgur.com/LLmg0fg.png\" width=\"20%\" height=\"20%\" />" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This would be near-impossible using `requests` as it would simply not load the element; it only loads in an actual browser. \n", "\n", "We could extract this data in two ways:\n", "\n", "1. Program Selenium to mouse-over the element we want, and use CSS selectors to extract the values from the mouse-over window.\n", "2. Use the console to interact with the underlying Javascript data directly.\n", "\n", "The second method is far more convenient than the first, so I will demonstrate that:" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Retrieve data from Javascript directly\n", "We can use a neat trick to find out which Javascript variable holds a certain value that we are looking for: \n", "https://stackoverflow.com/questions/26796873/find-which-variable-holds-a-value-using-chrome-devtools\n", "\n", "After pasting the provided function into the dev console we can run `globalSearch(App, '-1.82')` in the Chrome Dev Console to get:\n", "\n", "> App.main.context.dispatcher.stores.QuoteSummaryStore.earnings.earningsChart.quarterly[3].estimate.fmt\n", "\n", "This is all the information that we need to extract all the data points:" ] }, { "cell_type": "code", "execution_count": 17, "metadata": {}, "outputs": [], "source": [ "script = 'App.main.context.dispatcher.stores.QuoteSummaryStore.earnings.earningsChart.quarterly'" ] }, { "cell_type": "code", "execution_count": 18, "metadata": {}, "outputs": [], "source": [ "quarterly_values = driver.execute_script('return {}'.format(script))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "*Note:* I add `return` at the beginning to get a JSON response. " ] }, { "cell_type": "code", "execution_count": 19, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[{'actual': {'fmt': '-1.12', 'raw': -1.12},\n", " 'date': '2Q2019',\n", " 'estimate': {'fmt': '-0.36', 'raw': -0.36}},\n", " {'actual': {'fmt': '1.86', 'raw': 1.86},\n", " 'date': '3Q2019',\n", " 'estimate': {'fmt': '-0.42', 'raw': -0.42}},\n", " {'actual': {'fmt': '2.06', 'raw': 2.06},\n", " 'date': '4Q2019',\n", " 'estimate': {'fmt': '1.72', 'raw': 1.72}},\n", " {'actual': {'fmt': '1.14', 'raw': 1.14},\n", " 'date': '1Q2020',\n", " 'estimate': {'fmt': '-0.25', 'raw': -0.25}}]" ] }, "execution_count": 19, "metadata": {}, "output_type": "execute_result" } ], "source": [ "quarterly_values" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Using `driver.execute_script()` is essentially the programmatic way of executing it in the dev console: \n", "\n", "\n", "<img src=\"https://i.imgur.com/LFtL59W.png\" width=\"40%\" height=\"40%\" />" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If you are not familiar with Javascript and programming for the web then this might be very hard to wrap your head around, but if you are serious about web-scraping these kinds of tricks can save you days of work. " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Close driver" ] }, { "cell_type": "code", "execution_count": 20, "metadata": {}, "outputs": [], "source": [ "driver.close()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## <span style=\"text-decoration: underline;\">Web crawling with Scrapy</span>" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In the examples above we always provide the URL directly. \n", "We could program a loop (with any of the above methods) that takes a URL from the page and then goes to that page and extracts another URL, etc. \n", "\n", "This tends to get confusing pretty fast; if you really want to create a crawler you are probably better off looking into the `scrapy` package. \n", "\n", "`Scrapy` allows you to create a `spider` that basically 'walks' through webpages and crawls the information. \n", "\n", "In my experience you don't need this for 95% of use cases, but feel free to try it out: http://scrapy.org/" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.6" } }, "nbformat": 4, "nbformat_minor": 4 }