{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "

\n", " \n", " Scraping websites with help of Selenium\n", " \n", "

\n", "

\n", " \n", " Vadim Voskresenskii (slack: Vadimvoskresenskiy)\n", " \n", "

" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Introduction" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Today we will study how to work with one very useful and impressive framework which will help us to scrape websites having dynamic data requests. This framework is called **Selenium** and we can efficiently work with it on Python. The idea laying behind Selenium is very simple -- it allows web developers test their applications before launching them. With help of Selenium, they can emulate the work of browser and check how different elements of their application work from the side of a user.\n", "\n", "But, apart from giving web developers possibility to check their applications, Selenium can be useful also for data analysts who want to get data from websites with sophisticated internal strucutres. Probably, you faced such situations when you try to collect data with help of Beatiful Soup in Python or any other package and cannot get it because you need to wait some time until data is uploaded to a website from a server. Unfortunately, your script does not know about this feature of a website and tries to ge it at once. Finally, instead of getting desirable data you get blank list. Also, I suppose, sometimes, data analysts want to collect data from websites where you need firstly put some information into text fields or click some buttons. Certainly, you cannot do such actions with help of Beuatiful Soup. My tutorial will show you how to tackle with such issues with help of Selenium.\n", "\n", "The plan of the workshop is following:\n", "\n", "1. We will know how to install Selenium and will cover briefly main terminology \n", "2. I will introduce you to the case we are going to solve in the framework of current tutorial\n", "2. We will write script with special Selenium functions allowing to interact with a browser\n", "3. We will add very simple code collecting data we need with help of BeatifulSoup\n", "4. We will write final function combining all our previous steps" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Installation and main functions" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "First, we need to know how to launch Selenium. That's very simple!\n", "\n", "With help of pip, you can install selenium.\n", "\n", "`pip install selenium`\n", "\n", "After that, you need to install driver on you computer which will allow you to interact with a browser.\n", "\n", "*My advise*: choose Firefox for work with Selenium. Initially, I started working with Chrome and found that Chrome sometimes cannot find some elements on webpage which definitely exist. At the same time, Firefox had no any issues with finding these elements. I did not check other browsers though.\n", "\n", "Regardless of a browser you selected, the algorithm of working with drivers is very similar. First, you download driver (geckodriver for Mozilla Firefox can be found [here](https://github.com/mozilla/geckodriver/releases)). Then, you set executable file (*geckodriver.exe*) as an environment variable on your computer (on Windows, you need to add the path to executable file to PATH). That's it. Now, you can work with Selenium. \n", "\n", "If we installed everything correctly we can check how Selenium works. For that, let's import needed modules and try to get to the website." 
, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import datetime\n", "import re\n", "import time\n", "\n", "import numpy as np\n", "import pandas as pd\n", "import requests\n", "from bs4 import BeautifulSoup\n", "from dateutil import relativedelta\n", "from selenium import webdriver\n", "from selenium.webdriver.common.keys import Keys" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "driver = webdriver.Firefox()\n", "driver.get(\"https://mlcourse.ai/roadmap\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If everything is fine, you will see how the Firefox browser magically opens the webpage of our course.\n", "\n", "Before scraping, let's look at the main functions we are going to use in this tutorial.\n", "\n", "Our approach to collecting data is very simple. First, we find the HTML element with which we want to interact and, second, we interact with it by sending keys (the browser thinks that a real user presses buttons on her keyboard) or by clicking buttons.\n", "\n", "An HTML element can be identified in different ways. Here are the most important functions for us:\n", "\n", "`driver.find_element_by_id`\n", "\n", "With this function we can find an element by its id. Ids are unique within a webpage, which makes this the most reliable option (although not every element has one).\n", "\n", "`driver.find_element_by_xpath`\n", "\n", "XPath is a path to the HTML element we need. Sometimes several elements on a page can match the same path, so we need to be careful with this approach. But in most cases, XPath is the easiest way to get a very specific element on a webpage.\n", "\n", "`driver.find_element_by_link_text`\n", "\n", "The most fragile option is searching for an element by its text (the text you see on the webpage). As you can understand, it can be used only if exactly one element is represented by this text.\n", "\n", "OK, we found an element. How do we interact with it?\n", "\n", "For this, there are some other functions.\n", "\n", "`element.send_keys(\"text\")`\n", "\n", "With this function, we can send some text to the website. For instance, we can sign up or type the name of a book we want to buy on Amazon.\n", "\n", "`element.click()`\n", "\n", "If we are working with a button, we can click on it."
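] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To see how these pieces fit together, here is a minimal sketch of the find -> interact pattern. The id and link text below are made-up examples for illustration, not elements of any real page:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# A hypothetical example of the usual workflow: locate an element, then act on it.\n", "# \"query-input\" and \"Search\" are invented names and will not exist on a real page.\n", "search_box = driver.find_element_by_id(\"query-input\")\n", "search_box.send_keys(\"selenium tutorial\")  # type as if a real user did\n", "driver.find_element_by_link_text(\"Search\").click()  # press the button" ] }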
, { "cell_type": "markdown", "metadata": {}, "source": [ "# Scraping Airbnb" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In this workshop, we will be scraping data from [Airbnb](https://ru.airbnb.com/). Airbnb is a website for travelers which sometimes allows you to find a cheaper place to stay than websites like Booking.com. On Airbnb you search not for hotels or hostels but for apartments offered by hosts living in the city you want to visit. Airbnb is based on the principles of the sharing economy, where trust between hosts and guests is supported by a review system.\n", "\n", "Our task is the following. Let's imagine that you and your friend want to travel to London and live there from the 15th of March 2019 to the 23rd of May (completely random dates). We do not want to go to the website each day in the hope of finding the best offer. Instead, we want to write a function which will regularly collect offers from hosts for us, together with some characteristics of the apartments and their prices. But as you can see, the Airbnb website is well designed and has a lot of interactive elements: apart from search fields, there are calendars and special buttons for choosing the number of guests and children. By using only Beautiful Soup we cannot collect all the data we need. Therefore, we certainly need Selenium. Let's start!\n", "\n", "So, we are now on the main page of Airbnb, and we need to choose the city, country, dates and number of guests. Obviously, we start with the place we are going to visit. The best way is to put the city and country into the search field. For identifying the HTML element of the search field, our browser has a special function called \"Inspect Element\". To reach it, right-click on the element we want to inspect and press Q on the keyboard (look at the picture below)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\"image1\"" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If you click on this button, you will see which HTML element is responsible for this part of the website.\n", "\n", "In our case, the id of this element is the following:\n", "\n", "`id=\"Koan-magic-carpet-koan-search-bar__input\"`\n", "\n", "After sending the keys, Airbnb asks us to choose one of the suggested options. Usually, the first option is the one we need.\n", "\n", "The first option has this id:\n", "\n", "`id = Koan-magic-carpet-koan-search-bar__option-0`\n", "\n", "And in this case, instead of sending keys, we need to click the button.\n", "\n", "*Hint*: add `time.sleep(2)` between functions, as sometimes we need to wait a bit until all information is uploaded." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "driver = webdriver.Firefox()\n", "driver.get(\n", "    \"https://ru.airbnb.com/\"\n", ")  # if you are not from Russia, you need to write driver.get(\"https://airbnb.com/\")\n", "driver.find_element_by_id(\"Koan-magic-carpet-koan-search-bar__input\").send_keys(\n", "    \"London, United Kingdom\"\n", ")\n", "time.sleep(2)\n", "driver.find_element_by_id(\"Koan-magic-carpet-koan-search-bar__option-0\").click()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Great! We chose the city!\n", "\n", "Now we need to do something more challenging: choose the dates of our check-in and check-out. Let's start with the check-in. The id of the check-in field is *checkin_input*. So, we write:\n", "\n", "`driver.find_element_by_id(\"checkin_input\").click()`\n", "\n", "After clicking on this field, a calendar appears. The Airbnb interface does not allow us to get around the calendar and send keys, so we need to find another way to choose the desired date. I remind you that our plan is to move in on the 15th of March. As you can notice, the calendar opens on the current month (in this case, December). You can also notice the special buttons which allow us to switch months (look at the picture below). So, we need to click the right-hand month-switching button *3 times* to reach March. To click on this button, we need to find the XPath leading to it. For that, we again inspect the element, then right-click on the found element and choose *Copy -> XPath*. After that, paste the XPath into the script." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\"image2\"" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here are the lines of code:\n", "\n", "`for i in np.arange(0, 3):`\n", "\n", "`    driver.find_element_by_xpath(\"//*[@id='MagicCarpetSearchBar']/div[2]/div/div/div[2]/div/div/div/div/div/div[2]/div[1]/div[2]\").click()`"
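] }, { "cell_type": "markdown", "metadata": {}, "source": [ "A side note on all those `time.sleep` calls: instead of fixed pauses, Selenium also offers explicit waits which poll the page until an element is ready. A minimal sketch, reusing the *checkin_input* id we found above (the 10-second timeout is an arbitrary choice):" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# A sketch of an explicit wait: poll for up to 10 seconds until the element\n", "# becomes clickable, instead of sleeping for a fixed amount of time.\n", "from selenium.webdriver.common.by import By\n", "from selenium.webdriver.support import expected_conditions as EC\n", "from selenium.webdriver.support.ui import WebDriverWait\n", "\n", "wait = WebDriverWait(driver, 10)\n", "wait.until(EC.element_to_be_clickable((By.ID, \"checkin_input\"))).click()" ] }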
, { "cell_type": "markdown", "metadata": {}, "source": [ "Now, we need to choose the 15th of March on the calendar. Let's look at the XPath of this date:\n", "\n", "`//*[@id='MagicCarpetSearchBar']/div[2]/div/div/div[2]/div/div/div/div/div/div[2]/div[2]/div/div[2]/div/table/tbody/tr[3]/td[6]`\n", "\n", "We remember that the final goal of this tutorial is to write a function which will automatically work for any date. How can we do that if each date has its own unique XPath? Very simple! Some of the numbers in the XPath are meaningful for us. The part `tr[3]/td[6]` tells us that the needed date is located in the third row (week) and the sixth column (day of the week) of the calendar table (look at the following picture).\n", "\n", "**If you work with the Russian version of Airbnb (as I do), take into account that our weeks start from Monday, not Sunday, so in our case the element would have the ending:**\n", "\n", "`tr[3]/td[5]`" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\"image3\"" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The line of code is very similar to the previous one:\n", "\n", "`driver.find_element_by_xpath(\"//*[@id='MagicCarpetSearchBar']/div[2]/div/div/div[2]/div/div/div/div/div/div[2]/div[2]/div/div[2]/div/table/tbody/tr[3]/td[5]\").click()  # for the Russian version`\n", "\n", "`driver.find_element_by_xpath(\"//*[@id='MagicCarpetSearchBar']/div[2]/div/div/div[2]/div/div/div/div/div/div[2]/div[2]/div/div[2]/div/table/tbody/tr[3]/td[6]\").click()  # for the English version`" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now, we do the same for the check-out window. But take into account that the calendar now opens on March, not December. It means we need to click the month-switching button only twice. And the 23rd of May is the fourth day of the fourth week (in the English version of the website, it is the fifth day of the fourth week).\n", "\n", "Here's the final code for choosing the dates:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "driver = webdriver.Firefox()\n", "driver.get(\"https://ru.airbnb.com/\")\n", "driver.find_element_by_id(\"checkin_input\").click()\n", "time.sleep(2)\n", "for i in np.arange(0, 3):\n", "    driver.find_element_by_xpath(\n", "        \"//*[@id='MagicCarpetSearchBar']/div[2]/div/div/div[2]/div/div/div/div/div/div[2]/div[1]/div[2]\"\n", "    ).click()\n", "    time.sleep(2)\n", "driver.find_element_by_xpath(\n", "    \"//*[@id='MagicCarpetSearchBar']/div[2]/div/div/div[2]/div/div/div/div/div/div[2]/div[2]/div/div[2]/div/table/tbody/tr[3]/td[5]\"\n", ").click()\n", "time.sleep(2)\n", "driver.find_element_by_id(\"checkout_input\").click()\n", "time.sleep(2)\n", "for i in np.arange(0, 2):\n", "    driver.find_element_by_xpath(\n", "        \"//*[@id='MagicCarpetSearchBar']/div[2]/div/div/div[2]/div/div/div/div/div/div[2]/div[1]/div[2]\"\n", "    ).click()\n", "    time.sleep(2)\n", "driver.find_element_by_xpath(\n", "    \"//*[@id='MagicCarpetSearchBar']/div[2]/div/div/div[2]/div/div/div/div/div/div[2]/div[2]/div/div[2]/div/table/tbody/tr[4]/td[4]\"\n", ").click()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Cool! We chose the dates.\n", "\n", "You are traveling with a friend, so we need to increase the number of guests. First, click on the special button for guests and then click twice on the \"+\" button which increases the number of adults (look at the picture below). You can do the same for children if needed. Don't forget to click the \"guests\" button again to close the list of options and open the way to the \"Search\" button."
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\"image4\"" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here's the code for the guests:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "driver = webdriver.Firefox()\n", "driver.get(\"https://ru.airbnb.com/\")\n", "time.sleep(5)  # it's better to wait a bit longer\n", "driver.find_element_by_xpath(\n", "    \"/html/body/div[4]/div/main/section/div/div/div[2]/div/div/div/div[1]/div[3]/div/form/div[3]/div[2]/button\"\n", ").click()\n", "time.sleep(2)\n", "for i in np.arange(0, 2):\n", "    driver.find_element_by_xpath(\n", "        \"/html/body/div[4]/div/main/section/div/div/div[2]/div/div/div/div[1]/div[3]/div/form/div[3]/div[2]/div/div/div/div[1]/div/div/div/div[2]/div/div[3]/button\"\n", "    ).click()\n", "    time.sleep(2)\n", "driver.find_element_by_xpath(\n", "    \"/html/body/div[4]/div/main/section/div/div/div[2]/div/div/div/div[1]/div[3]/div/form/div[3]/div[2]/button\"\n", ").click()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We chose the place, the dates and the number of guests, so we can start searching for apartments. For that, we click the \"Search\" button, wait until the new page is loaded, and then click on the \"show all\" option (look at the picture) to move to the apartments." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\"image5\"" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here's the code for moving between the pages:\n", "\n", "`driver.find_element_by_xpath(\"/html/body/div[4]/div/main/section/div/div/div[2]/div/div/div/div[1]/div[3]/div/form/div[4]/div/button\").click()`\n", "\n", "`time.sleep(5)`\n", "\n", "`driver.find_element_by_xpath(\"/html/body/div[4]/div/main/div/div[2]/div/div/div/div/div[3]/div/div/div[2]/button\").click()`\n", "\n", "After we get all the options, we scrape the content of the page with Beautiful Soup and save all the apartment urls into a predefined list (here's [a great tutorial](https://www.dataquest.io/blog/web-scraping-tutorial-python/) on how to work with Beautiful Soup).\n", "\n", "Here, we move all the content of the final page into the object `soup`:\n", "\n", "`rooms_london = []`\n", "\n", "`soup = BeautifulSoup(driver.page_source, 'lxml')`\n", "\n", "All the urls follow the same pattern, so we apply one regular expression to find them and append them to the list:\n", "\n", "`for a in soup.find_all('a', href=re.compile(\"rooms/[0-9]+\")):`\n", "\n", "`    rooms_london.append(a['href'])`\n", "\n", "And now we can go through all the pages with apartments and add the new urls to our list. To switch between the pages, we can press the buttons (2, 3, 4, etc.) located at the bottom of the page (look at the picture below)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\"image6\"" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The best way to click on these buttons is the function `driver.find_element_by_link_text`.\n", "\n", "
For instance, to click on page 2, we need to write the following line of code:\n", "\n", "`driver.find_element_by_link_text(\"2\").click()`\n", "\n", "Here's the final code, which collects all the apartment urls from the first two pages:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "rooms_london = []\n", "driver = webdriver.Firefox()\n", "driver.get(\"https://ru.airbnb.com/\")\n", "# choose the city\n", "driver.find_element_by_id(\"Koan-magic-carpet-koan-search-bar__input\").send_keys(\n", "    \"London, United Kingdom\"\n", ")\n", "time.sleep(2)\n", "driver.find_element_by_id(\"Koan-magic-carpet-koan-search-bar__option-0\").click()\n", "time.sleep(2)\n", "# choose the check-in date (15 March)\n", "driver.find_element_by_id(\"checkin_input\").click()\n", "for i in np.arange(0, 3):\n", "    driver.find_element_by_xpath(\n", "        \"//*[@id='MagicCarpetSearchBar']/div[2]/div/div/div[2]/div/div/div/div/div/div[2]/div[1]/div[2]\"\n", "    ).click()\n", "    time.sleep(2)\n", "driver.find_element_by_xpath(\n", "    \"//*[@id='MagicCarpetSearchBar']/div[2]/div/div/div[2]/div/div/div/div/div/div[2]/div[2]/div/div[2]/div/table/tbody/tr[3]/td[5]\"\n", ").click()\n", "time.sleep(2)\n", "# choose the check-out date (23 May)\n", "driver.find_element_by_id(\"checkout_input\").click()\n", "for i in np.arange(0, 2):\n", "    driver.find_element_by_xpath(\n", "        \"//*[@id='MagicCarpetSearchBar']/div[2]/div/div/div[2]/div/div/div/div/div/div[2]/div[1]/div[2]\"\n", "    ).click()\n", "    time.sleep(2)\n", "driver.find_element_by_xpath(\n", "    \"//*[@id='MagicCarpetSearchBar']/div[2]/div/div/div[2]/div/div/div/div/div/div[2]/div[2]/div/div[2]/div/table/tbody/tr[4]/td[4]\"\n", ").click()\n", "time.sleep(5)\n", "# set the number of guests (two adults)\n", "driver.find_element_by_xpath(\n", "    \"/html/body/div[4]/div/main/section/div/div/div[2]/div/div/div/div[1]/div[3]/div/form/div[3]/div[2]/button\"\n", ").click()\n", "time.sleep(2)\n", "for i in np.arange(0, 2):\n", "    driver.find_element_by_xpath(\n", "        \"/html/body/div[4]/div/main/section/div/div/div[2]/div/div/div/div[1]/div[3]/div/form/div[3]/div[2]/div/div/div/div[1]/div/div/div/div[2]/div/div[3]/button\"\n", "    ).click()\n", "    time.sleep(2)\n", "driver.find_element_by_xpath(\n", "    \"/html/body/div[4]/div/main/section/div/div/div[2]/div/div/div/div[1]/div[3]/div/form/div[3]/div[2]/button\"\n", ").click()\n", "# press \"Search\" and open the full list of offers\n", "driver.find_element_by_xpath(\n", "    \"/html/body/div[4]/div/main/section/div/div/div[2]/div/div/div/div[1]/div[3]/div/form/div[4]/div/button\"\n", ").click()\n", "time.sleep(5)\n", "driver.find_element_by_xpath(\n", "    \"/html/body/div[4]/div/main/div/div[2]/div/div/div/div/div[3]/div/div/div[2]/button\"\n", ").click()\n", "time.sleep(2)\n", "# collect the room urls from the first page\n", "soup = BeautifulSoup(driver.page_source, \"lxml\")\n", "for a in soup.find_all(\"a\", href=re.compile(\"rooms/[0-9]+\")):\n", "    rooms_london.append(a[\"href\"])\n", "time.sleep(2)\n", "# switch to the second page and collect the urls from it as well\n", "driver.find_element_by_link_text(\"2\").click()\n", "time.sleep(2)\n", "soup = BeautifulSoup(driver.page_source, \"lxml\")\n", "for a in soup.find_all(\"a\", href=re.compile(\"rooms/[0-9]+\")):\n", "    rooms_london.append(a[\"href\"])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Finally, we have 36 unique urls:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "len(set(rooms_london))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's look at one of the urls. Did we collect what we wanted?"
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "set(rooms_london).pop()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Yes, it seems so.\n", "\n", "In the next part of the code, we will extract the data we need from these urls. For this script, we again need Selenium, as not all the elements of the web pages are uploaded from the server at once, and we need to wait a bit.\n", "\n", "Let's do that for 5 rooms. In a loop, we pass the url of a room to the Selenium driver, wait until all the elements are uploaded, and extract the title of the offer, the details mentioned by the host, and the price. All these elements are saved into dictionaries which are appended to a predefined list." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "driver = webdriver.Firefox()\n", "rooms_info = []\n", "for i in pd.Series(rooms_london).unique()[1:6]:\n", "    url = \"https://ru.airbnb.com\" + i\n", "    driver.get(url)\n", "    time.sleep(2)\n", "    room = {}  # create a blank dictionary\n", "    soup = BeautifulSoup(driver.page_source, \"lxml\")\n", "    summary = soup.find(id=\"summary\").get_text()\n", "    summary = re.sub(\"[^A-Za-z]\", \" \", summary)  # remove all non-English characters\n", "    room[\"title\"] = re.sub(\"\\\\s+\", \" \", summary).strip()  # remove extra whitespaces\n", "    details = soup.find(id=\"details\").get_text()\n", "    details = re.sub(\"[^A-Za-z]\", \" \", details).strip()\n", "    room[\"details\"] = re.sub(\"translated by Google\", \" \", details).strip()\n", "    book = soup.find(id=\"book_it_form\").get_text()\n", "    book = re.sub(\"\\\\s\", \"\", book)  # remove whitespaces\n", "    room[\"price\"] = re.sub(\n", "        \"Итого\", \" \", pd.Series(book).str.extract(\"(Итого€[0-9]{3,5})\")[0][0]\n", "    ).strip()  # extract the final price (Итого == Total)\n", "    room[\"url\"] = url\n", "    rooms_info.append(room)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now, we can transform our list of dictionaries into a dataframe and check the results." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df = pd.DataFrame.from_dict(rooms_info)\n", "df" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Looks good!\n", "\n", "Now, we can write the final function combining all the steps." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Writing function" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In this section of the tutorial, we will write a function which will work for any query and give the user the needed information on apartments. The function will take the following arguments: city, country, check-in date, check-out date, number of guests, and the number of pages the script needs to go through."
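] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The trickiest part is turning a date string into the calendar cell `tr[row]/td[col]` that we need to click. Here is a minimal sketch of that computation for our check-in date, under the same Monday-first assumption as on the Russian version of the site (the function below reuses exactly this logic):" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# A sketch: map a date to its (row, column) position in a Monday-first calendar.\n", "cin = datetime.datetime.strptime(\"15-03-2019\", \"%d-%m-%Y\")\n", "first_day = cin.replace(day=1)  # first day of the same month\n", "weekday_in = cin.weekday() + 1  # Monday == 1, ..., Sunday == 7\n", "week_in = cin.isocalendar()[1] - first_day.isocalendar()[1] + 1  # row of the table\n", "print(week_in, weekday_in)  # 3 5 -> tr[3]/td[5], the XPath ending we used above" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "And here is the full function:"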
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def airbnb_scrape(city, country, check_in, check_out, guests, pages):\n", " location = city + \", \" + country # join city and country\n", " cin = datetime.datetime.strptime(\n", " check_in, \"%d-%m-%Y\"\n", " ) # transform string into datetime format\n", " cout = datetime.datetime.strptime(check_out, \"%d-%m-%Y\")\n", " diff_cin_cout = relativedelta.relativedelta(\n", " cout, cin\n", " ).months # difference between check_in and check_out in months\n", " now = datetime.datetime.now() # today's date\n", " diff_cin_now = relativedelta.relativedelta(\n", " cin, now\n", " ).months # difference between check-in and today's date\n", " first_day_in = datetime.datetime.strptime(\n", " \"01\" + check_in[2:], \"%d-%m-%Y\"\n", " ) # first day of month for check_in, we need it to define number of week\n", " first_day_out = datetime.datetime.strptime(\n", " \"01\" + check_out[2:], \"%d-%m-%Y\"\n", " ) # first day for check_out\n", " weekday_in = cin.weekday() + 1 # day of week for check_in\n", " weekday_out = cout.weekday() + 1 # day of week for check_out\n", " week_in = (\n", " cin.isocalendar()[1] - first_day_in.isocalendar()[1]\n", " ) + 1 # define number of week for check_in\n", " week_out = (\n", " cout.isocalendar()[1] - first_day_out.isocalendar()[1]\n", " ) + 1 # define number of week for check_out\n", " # here, we make Xpaths for numbers of weeks\n", " week_day_xp_in = (\n", " \"//*[@id='MagicCarpetSearchBar']/div[2]/div/div/div[2]/div/div/div/div/div/div[2]/div[2]/div/div[2]/div/table/tbody/tr[\"\n", " + str(week_in)\n", " + \"]/td[\"\n", " + str(weekday_in)\n", " + \"]\"\n", " )\n", " week_day_xp_out = (\n", " \"//*[@id='MagicCarpetSearchBar']/div[2]/div/div/div[2]/div/div/div/div/div/div[2]/div[2]/div/div[2]/div/table/tbody/tr[\"\n", " + str(week_out)\n", " + \"]/td[\"\n", " + str(weekday_out)\n", " + \"]\"\n", " )\n", " rooms = []\n", " rooms_info = []\n", " # the following script you already know\n", " # in some cases I added conditional statements depending on the choices of user\n", " driver = webdriver.Firefox()\n", " driver.get(\"https://ru.airbnb.com/\")\n", " driver.find_element_by_id(\"Koan-magic-carpet-koan-search-bar__input\").send_keys(\n", " location\n", " )\n", " time.sleep(2)\n", " driver.find_element_by_id(\"Koan-magic-carpet-koan-search-bar__option-0\").click()\n", " time.sleep(2)\n", " driver.find_element_by_id(\"checkin_input\").click()\n", " if diff_cin_now > 0:\n", " for i in np.arange(0, diff_cin_now + 1):\n", " driver.find_element_by_xpath(\n", " \"//*[@id='MagicCarpetSearchBar']/div[2]/div/div/div[2]/div/div/div/div/div/div[2]/div[1]/div[2]\"\n", " ).click()\n", " time.sleep(2)\n", " driver.find_element_by_xpath(week_day_xp_in).click()\n", " else:\n", " driver.find_element_by_xpath(week_day_xp_in).click()\n", " driver.find_element_by_id(\"checkout_input\").click()\n", " if diff_cin_cout > 0:\n", " for i in np.arange(0, diff_cin_cout):\n", " driver.find_element_by_xpath(\n", " \"//*[@id='MagicCarpetSearchBar']/div[2]/div/div/div[2]/div/div/div/div/div/div[2]/div[1]/div[2]\"\n", " ).click()\n", " time.sleep(2)\n", " driver.find_element_by_xpath(week_day_xp_out).click()\n", " else:\n", " driver.find_element_by_xpath(week_day_xp_out).click()\n", " if guests > 1:\n", " time.sleep(5)\n", " driver.find_element_by_xpath(\n", " \"/html/body/div[4]/div/main/section/div/div/div[2]/div/div/div/div[1]/div[3]/div/form/div[3]/div[2]/button\"\n", " ).click()\n", " for i in np.arange(0, 
guests):\n", "            driver.find_element_by_xpath(\n", "                \"/html/body/div[4]/div/main/section/div/div/div[2]/div/div/div/div[1]/div[3]/div/form/div[3]/div[2]/div/div/div/div[1]/div/div/div/div[2]/div/div[3]/button\"\n", "            ).click()\n", "            time.sleep(2)\n", "        driver.find_element_by_xpath(\n", "            \"/html/body/div[4]/div/main/section/div/div/div[2]/div/div/div/div[1]/div[3]/div/form/div[3]/div[2]/button\"\n", "        ).click()\n", "    # press \"Search\" and open the full list of offers\n", "    driver.find_element_by_xpath(\n", "        \"/html/body/div[4]/div/main/section/div/div/div[2]/div/div/div/div[1]/div[3]/div/form/div[4]/div/button\"\n", "    ).click()\n", "    time.sleep(5)\n", "    driver.find_element_by_xpath(\n", "        \"/html/body/div[4]/div/main/div/div[2]/div/div/div/div/div[3]/div/div/div[2]/button\"\n", "    ).click()\n", "    time.sleep(2)\n", "    # collect the room urls from the first page\n", "    soup = BeautifulSoup(driver.page_source, \"lxml\")\n", "    for a in soup.find_all(\"a\", href=re.compile(\"rooms/[0-9]+\")):\n", "        rooms.append(a[\"href\"])\n", "    time.sleep(2)\n", "    # go through the remaining pages, if the user asked for more than one\n", "    if pages > 1:\n", "        for i in np.arange(2, pages + 1):\n", "            driver.find_element_by_link_text(str(i)).click()\n", "            time.sleep(2)\n", "            soup = BeautifulSoup(driver.page_source, \"lxml\")\n", "            for a in soup.find_all(\"a\", href=re.compile(\"rooms/[0-9]+\")):\n", "                rooms.append(a[\"href\"])\n", "    # visit every collected room and extract the title, details and price\n", "    for i in pd.Series(rooms).unique():\n", "        room = {}\n", "        url = \"https://ru.airbnb.com\" + i\n", "        driver.get(url)\n", "        time.sleep(5)\n", "        soup = BeautifulSoup(driver.page_source, \"lxml\")\n", "        summary = soup.find(id=\"summary\").get_text()\n", "        summary = re.sub(\"[^A-Za-z]\", \" \", summary)  # remove all non-English characters\n", "        room[\"title\"] = re.sub(\"\\\\s+\", \" \", summary).strip()  # remove extra whitespaces\n", "        details = soup.find(id=\"details\").get_text()\n", "        details = re.sub(\"[^A-Za-z]\", \" \", details).strip()\n", "        room[\"details\"] = re.sub(\"translated by Google\", \" \", details).strip()\n", "        book = soup.find(id=\"book_it_form\").get_text()\n", "        book = re.sub(\"\\\\s\", \"\", book)  # remove whitespaces\n", "        room[\"price\"] = re.sub(\n", "            \"Итого\", \" \", pd.Series(book).str.extract(\"(Итого€[0-9]{3,5})\")[0][0]\n", "        ).strip()  # extract the final price (Итого == Total)\n", "        room[\"url\"] = url\n", "        rooms_info.append(room)\n", "    df = pd.DataFrame.from_dict(rooms_info)\n", "    return df" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's check how our function works on another query. You and two of your friends go to Berlin from the 2nd of April to the 9th of April. You want to go through the first two pages to check the offers on Airbnb." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "berlin_df = airbnb_scrape(\"Berlin\", \"Germany\", \"02-04-2019\", \"09-04-2019\", 3, 2)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As in the previous case, we collected 36 offers." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "berlin_df.shape" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here's our data:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "berlin_df.head()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's look at the urls. Did we collect what we wanted?" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print(berlin_df.url[0])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Yes! This url certainly satisfies our query!"
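] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Remember that the original motivation was to collect offers regularly rather than to check the website by hand. As a closing sketch, here is one way the results could be accumulated over time (the CSV file name is an arbitrary choice):" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# A sketch of regular monitoring: stamp today's date on the offers and append\n", "# them to a CSV file, so that prices can be compared across runs.\n", "import os\n", "\n", "berlin_df[\"collected_at\"] = datetime.datetime.now().strftime(\"%Y-%m-%d\")\n", "berlin_df.to_csv(\n", "    \"airbnb_berlin.csv\",\n", "    mode=\"a\",\n", "    index=False,\n", "    header=not os.path.exists(\"airbnb_berlin.csv\"),  # write the header only once\n", ")" ] }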
, { "cell_type": "markdown", "metadata": {}, "source": [ "So, that's it for this tutorial. I really enjoyed working with Selenium and advise you to explore its capabilities as well.\n", "\n", "Thanks for your attention!" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Further reading" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "[The documentation on how to work with Selenium in Python](https://selenium-python.readthedocs.io/)\n", "\n", "[An example of scraping with Beautiful Soup and Selenium](https://medium.freecodecamp.org/better-web-scraping-in-python-with-selenium-beautiful-soup-and-pandas-d6390592e251)\n", "\n", "[Another example](https://medium.freecodecamp.org/how-to-scrape-websites-with-python-and-beautifulsoup-5946935d93fe)" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.1" } }, "nbformat": 4, "nbformat_minor": 2 }