"
],
"text/html": [
"Make this Notebook Trusted to load map: File -> Trust Notebook
"
]
},
"metadata": {},
"execution_count": 11
}
]
},
{
"cell_type": "markdown",
"source": [
"# Trip Updates Feed\n",
"The GTFS Realtime Trip Updates feed contains information about real-time updates to scheduled trips, such as delays, changes in stop times, and other dynamic data. But in this case, the Calgary real-time feed only provides Trip ID, Start Time, Start Date, Stop ID, Arrival Time, and Departure Time. The feed design reflects trip delays; for example, a trip scheduled for 8:00 AM with a 10-minute delay will show this updated information. The update will also include this ID if bus 8080 is running the trip."
],
"metadata": {
"id": "ae-UQFeGiKP0"
}
},
{
"cell_type": "code",
"source": [
"import requests\n",
"import gtfs_realtime_pb2 # Import the compiled GTFS Realtime protocol buffer\n",
"import pandas as pd\n",
"from datetime import datetime\n",
"\n",
"def fetch_gtfs_rt_trip_updates(url):\n",
" \"\"\"Fetches and parses the GTFS Realtime Trip Updates feed from the given URL.\"\"\"\n",
" try:\n",
" # Make a GET request to fetch the data from the specified URL\n",
" response = requests.get(url)\n",
" if response.status_code == 200:\n",
" # Parse the response content into a FeedMessage object\n",
" feed = gtfs_realtime_pb2.FeedMessage()\n",
" feed.ParseFromString(response.content)\n",
" return feed\n",
" else:\n",
" # Print an error message if the response status is not OK\n",
" print(f\"Error fetching data: {response.status_code} - {response.reason}\")\n",
" return None\n",
" except Exception as e:\n",
" # Catch and print any exceptions that occur during the request\n",
" print(f\"An error occurred: {e}\")\n",
" return None\n",
"def extract_trip_updates(feed):\n",
" \"\"\"Extracts trip update information from the GTFS Realtime feed.\"\"\"\n",
" trip_updates = []\n",
"\n",
" # Loop through each entity in the feed\n",
" for entity in feed.entity:\n",
" if entity.HasField('trip_update'):\n",
" # Extract the trip update data\n",
" trip_update = entity.trip_update\n",
" trip_id = trip_update.trip.trip_id\n",
" start_time = trip_update.trip.start_time\n",
" start_date = trip_update.trip.start_date\n",
"\n",
" # Loop through each stop time update in the trip update\n",
" for stop_time_update in trip_update.stop_time_update:\n",
" stop_id = stop_time_update.stop_id\n",
" # Extract arrival and departure times, if available\n",
" arrival_time = stop_time_update.arrival.time if stop_time_update.HasField('arrival') else None\n",
" departure_time = stop_time_update.departure.time if stop_time_update.HasField('departure') else None\n",
"\n",
" # Convert timestamps to human-readable format\n",
" arrival_time = datetime.utcfromtimestamp(arrival_time).strftime('%Y-%m-%d %H:%M:%S') if arrival_time else None\n",
" departure_time = datetime.utcfromtimestamp(departure_time).strftime('%Y-%m-%d %H:%M:%S') if departure_time else None\n",
"\n",
" # Add the extracted information to the trip updates list\n",
" trip_updates.append({\n",
" \"Trip ID\": trip_id,\n",
" \"Start Time\": start_time,\n",
" \"Start Date\": start_date,\n",
" \"Stop ID\": stop_id,\n",
" \"Arrival Time\": arrival_time,\n",
" \"Departure Time\": departure_time\n",
" })\n",
" return trip_updates\n",
"# URL for GTFS Realtime Trip Updates\n",
"trip_updates_url = \"https://data.calgary.ca/download/gs4m-mdc2/application%2Foctet-stream\" # Replace with the actual Trip Updates URL\n",
"# Fetch the trip updates feed\n",
"feed = fetch_gtfs_rt_trip_updates(trip_updates_url)\n",
"if feed:\n",
" # Extract the trip updates from the feed\n",
" trip_updates = extract_trip_updates(feed)\n",
"\n",
" # Convert the trip updates into a DataFrame for easy manipulation and display\n",
" df_trip_updates = pd.DataFrame(trip_updates)\n",
"\n",
" # Display the first 10 rows of the DataFrame\n",
" print(df_trip_updates.head(10))"
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "em0SgaSQiOLY",
"outputId": "b8ade906-5939-4982-9997-3d931b5aabc5"
},
"execution_count": 12,
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
" Trip ID Start Time Start Date Stop ID Arrival Time \\\n",
"0 69082338 6212 2025-01-17 04:33:00 \n",
"1 69082338 4343 2025-01-17 04:34:00 \n",
"2 69082338 6674 2025-01-17 04:35:00 \n",
"3 69082338 6213 2025-01-17 04:36:00 \n",
"4 69082338 4342 2025-01-17 04:37:00 \n",
"5 69082338 6214 2025-01-17 04:38:00 \n",
"6 69082338 6962 2025-01-17 04:39:00 \n",
"7 69082338 6705 2025-01-17 04:41:00 \n",
"8 69082338 6963 2025-01-17 04:41:00 \n",
"9 69082338 6964 2025-01-17 04:42:00 \n",
"\n",
" Departure Time \n",
"0 2025-01-17 04:33:00 \n",
"1 2025-01-17 04:34:00 \n",
"2 2025-01-17 04:35:00 \n",
"3 2025-01-17 04:36:00 \n",
"4 2025-01-17 04:37:00 \n",
"5 2025-01-17 04:38:00 \n",
"6 2025-01-17 04:39:00 \n",
"7 2025-01-17 04:41:00 \n",
"8 2025-01-17 04:41:00 \n",
"9 2025-01-17 04:42:00 \n"
]
}
]
},
{
"cell_type": "markdown",
"source": [
"## Output"
],
"metadata": {
"id": "DC-W9Dguh6a9"
}
},
{
"cell_type": "code",
"source": [
"df_trip_updates.head(10)"
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 363
},
"id": "h-C5IRotiTAv",
"outputId": "63cd07f1-19a3-40b0-945e-a7130dd927e5"
},
"execution_count": 15,
"outputs": [
{
"output_type": "execute_result",
"data": {
"text/plain": [
" Trip ID Start Time Start Date Stop ID Arrival Time \\\n",
"0 69082338 6212 2025-01-17 04:33:00 \n",
"1 69082338 4343 2025-01-17 04:34:00 \n",
"2 69082338 6674 2025-01-17 04:35:00 \n",
"3 69082338 6213 2025-01-17 04:36:00 \n",
"4 69082338 4342 2025-01-17 04:37:00 \n",
"5 69082338 6214 2025-01-17 04:38:00 \n",
"6 69082338 6962 2025-01-17 04:39:00 \n",
"7 69082338 6705 2025-01-17 04:41:00 \n",
"8 69082338 6963 2025-01-17 04:41:00 \n",
"9 69082338 6964 2025-01-17 04:42:00 \n",
"\n",
" Departure Time \n",
"0 2025-01-17 04:33:00 \n",
"1 2025-01-17 04:34:00 \n",
"2 2025-01-17 04:35:00 \n",
"3 2025-01-17 04:36:00 \n",
"4 2025-01-17 04:37:00 \n",
"5 2025-01-17 04:38:00 \n",
"6 2025-01-17 04:39:00 \n",
"7 2025-01-17 04:41:00 \n",
"8 2025-01-17 04:41:00 \n",
"9 2025-01-17 04:42:00 "
],
"text/html": [
"\n",
" \n",
"
\n",
"\n",
"
\n",
" \n",
" \n",
" | \n",
" Trip ID | \n",
" Start Time | \n",
" Start Date | \n",
" Stop ID | \n",
" Arrival Time | \n",
" Departure Time | \n",
"
\n",
" \n",
" \n",
" \n",
" | 0 | \n",
" 69082338 | \n",
" | \n",
" | \n",
" 6212 | \n",
" 2025-01-17 04:33:00 | \n",
" 2025-01-17 04:33:00 | \n",
"
\n",
" \n",
" | 1 | \n",
" 69082338 | \n",
" | \n",
" | \n",
" 4343 | \n",
" 2025-01-17 04:34:00 | \n",
" 2025-01-17 04:34:00 | \n",
"
\n",
" \n",
" | 2 | \n",
" 69082338 | \n",
" | \n",
" | \n",
" 6674 | \n",
" 2025-01-17 04:35:00 | \n",
" 2025-01-17 04:35:00 | \n",
"
\n",
" \n",
" | 3 | \n",
" 69082338 | \n",
" | \n",
" | \n",
" 6213 | \n",
" 2025-01-17 04:36:00 | \n",
" 2025-01-17 04:36:00 | \n",
"
\n",
" \n",
" | 4 | \n",
" 69082338 | \n",
" | \n",
" | \n",
" 4342 | \n",
" 2025-01-17 04:37:00 | \n",
" 2025-01-17 04:37:00 | \n",
"
\n",
" \n",
" | 5 | \n",
" 69082338 | \n",
" | \n",
" | \n",
" 6214 | \n",
" 2025-01-17 04:38:00 | \n",
" 2025-01-17 04:38:00 | \n",
"
\n",
" \n",
" | 6 | \n",
" 69082338 | \n",
" | \n",
" | \n",
" 6962 | \n",
" 2025-01-17 04:39:00 | \n",
" 2025-01-17 04:39:00 | \n",
"
\n",
" \n",
" | 7 | \n",
" 69082338 | \n",
" | \n",
" | \n",
" 6705 | \n",
" 2025-01-17 04:41:00 | \n",
" 2025-01-17 04:41:00 | \n",
"
\n",
" \n",
" | 8 | \n",
" 69082338 | \n",
" | \n",
" | \n",
" 6963 | \n",
" 2025-01-17 04:41:00 | \n",
" 2025-01-17 04:41:00 | \n",
"
\n",
" \n",
" | 9 | \n",
" 69082338 | \n",
" | \n",
" | \n",
" 6964 | \n",
" 2025-01-17 04:42:00 | \n",
" 2025-01-17 04:42:00 | \n",
"
\n",
" \n",
"
\n",
"
\n",
"
\n",
"
\n"
],
"application/vnd.google.colaboratory.intrinsic+json": {
"type": "dataframe",
"variable_name": "df_trip_updates",
"summary": "{\n \"name\": \"df_trip_updates\",\n \"rows\": 10999,\n \"fields\": [\n {\n \"column\": \"Trip ID\",\n \"properties\": {\n \"dtype\": \"category\",\n \"num_unique_values\": 554,\n \"samples\": [\n \"69083977\",\n \"69083929\",\n \"69088118\"\n ],\n \"semantic_type\": \"\",\n \"description\": \"\"\n }\n },\n {\n \"column\": \"Start Time\",\n \"properties\": {\n \"dtype\": \"object\",\n \"num_unique_values\": 1,\n \"samples\": [\n \"\"\n ],\n \"semantic_type\": \"\",\n \"description\": \"\"\n }\n },\n {\n \"column\": \"Start Date\",\n \"properties\": {\n \"dtype\": \"object\",\n \"num_unique_values\": 3,\n \"samples\": [\n \"\"\n ],\n \"semantic_type\": \"\",\n \"description\": \"\"\n }\n },\n {\n \"column\": \"Stop ID\",\n \"properties\": {\n \"dtype\": \"category\",\n \"num_unique_values\": 5198,\n \"samples\": [\n \"4652\"\n ],\n \"semantic_type\": \"\",\n \"description\": \"\"\n }\n },\n {\n \"column\": \"Arrival Time\",\n \"properties\": {\n \"dtype\": \"object\",\n \"num_unique_values\": 105,\n \"samples\": [\n \"2025-01-17 05:14:00\"\n ],\n \"semantic_type\": \"\",\n \"description\": \"\"\n }\n },\n {\n \"column\": \"Departure Time\",\n \"properties\": {\n \"dtype\": \"object\",\n \"num_unique_values\": 102,\n \"samples\": [\n \"2025-01-17 05:14:00\"\n ],\n \"semantic_type\": \"\",\n \"description\": \"\"\n }\n }\n ]\n}"
}
},
"metadata": {},
"execution_count": 15
}
]
},
{
"cell_type": "markdown",
"source": [
"If you’re not receiving start_time and start_date from the GTFS Realtime Trip Updates, it might be because those fields are optional and not always provided in the feed. This might be for security reasons as well. Upon contacting the City’s Transit service, you can get this vital information. I hope you understand the point.\n",
"\n",
"[Calgary Transit Realtime Trip Updates GTFS-RT](https://data.calgary.ca/Transportation-Transit/Calgary-Transit-Realtime-Trip-Updates-GTFS-RT/gs4m-mdc2/about_data?utm_source=chatgpt.com&source=post_page-----49baea30833e--------------------------------)\n",
"\n",
"---\n",
"\n",
"## Side Note\n",
"\n",
"You see how there are different IDs involved, such as Trip and the Stop. Now, if all the data is available, you can easily find the corresponding bus that has been running in those routes. Let me know if you can connect those dots. I would be happy to Colab with you and make this a working project.\n",
"\n",
"---\n",
"\n",
"# Service Alerts\n",
"\n",
"Calgary Transit refreshes its real-time data every half minute. To learn more about the [GTFS-RT specification](https://developers.google.com/transit/gtfs-realtime/) and its [components](https://developers.google.com/transit/gtfs-realtime/guides/feed-entities) ([Trip Updates](https://data.calgary.ca/dataset/GTFS-RT-Trip-Updates/gs4m-mdc2), [Service Alerts](https://data.calgary.ca/Transportation-Transit/Calgary-Transit-Realtime-Service-Alerts-GTFS-RT/jhgn-ynqj/about_data), and [Vehicle Positions](https://data.calgary.ca/dataset/GTFS-RT-Vehicle-Positions/am7c-qe3u)), check out the Google Transit API page. Also, see [Service Updates](http://www.calgarytransit.com/service-updates?nid=170214). Let’s see what the service alerts look like."
],
"metadata": {
"id": "efffCccxidRn"
}
},
{
"cell_type": "code",
"source": [
"import requests # Library to make HTTP requests\n",
"import gtfs_realtime_pb2 # Ensure this proto file is compiled as Python\n",
"import pandas as pd # For handling and displaying data in DataFrame format\n",
"\n",
"# Function to fetch the GTFS Realtime Alerts feed\n",
"def fetch_gtfs_rt_alerts(url):\n",
" \"\"\"Fetches and parses the GTFS Realtime Alerts feed from the given URL.\"\"\"\n",
" try:\n",
" # Send a request to the URL to get the feed\n",
" response = requests.get(url)\n",
"\n",
" # Check if the response is successful (status code 200)\n",
" if response.status_code == 200:\n",
" # Parse the feed using GTFS Realtime protocol\n",
" feed = gtfs_realtime_pb2.FeedMessage()\n",
" feed.ParseFromString(response.content)\n",
" return feed # Return the parsed feed\n",
" else:\n",
" # Print error if the response status is not 200\n",
" print(f\"Error fetching data: {response.status_code} - {response.reason}\")\n",
" return None\n",
" except Exception as e:\n",
" # Catch and print any exception that occurs during the request\n",
" print(f\"An error occurred: {e}\")\n",
" return None\n",
"\n",
"# Function to extract alerts from the GTFS Realtime feed\n",
"def extract_alerts(feed):\n",
" \"\"\"Extracts alert information from the GTFS Realtime feed.\"\"\"\n",
" alerts = [] # Initialize an empty list to store alert information\n",
"\n",
" # Loop through each entity in the feed\n",
" for entity in feed.entity:\n",
" # Check if the entity contains an alert\n",
" if entity.HasField('alert'):\n",
" alert = entity.alert\n",
" # Extract relevant fields from the alert\n",
" alert_id = entity.id # Unique ID for the alert\n",
" # Extract header text from the alert (if available)\n",
" header_text = alert.header_text.translation[0].text if alert.header_text.translation else \"No header\"\n",
" # Extract description text from the alert (if available)\n",
" description_text = alert.description_text.translation[0].text if alert.description_text.translation else \"No description\"\n",
" severity_level = alert.severity_level # Severity level of the alert (e.g., low, medium, high)\n",
"\n",
" # Append the extracted alert information to the alerts list\n",
" alerts.append({\n",
" \"Alert ID\": alert_id,\n",
" \"Header\": header_text,\n",
" \"Description\": description_text,\n",
" \"Severity Level\": severity_level\n",
" })\n",
"\n",
" # Return the list of alerts\n",
" return alerts\n",
"\n",
"# URL for GTFS Realtime Alerts (replace with the actual URL)\n",
"alerts_url = \"https://data.calgary.ca/download/jhgn-ynqj/application%2Foctet-stream\" # Example placeholder URL\n",
"\n",
"# Fetch the alerts feed\n",
"feed = fetch_gtfs_rt_alerts(alerts_url)\n",
"\n",
"# If feed is fetched successfully, extract the alerts\n",
"if feed:\n",
" alerts = extract_alerts(feed)\n",
"\n",
" # Convert the list of alerts into a DataFrame for easier viewing\n",
" df_alerts = pd.DataFrame(alerts)\n",
"\n",
" # Display the first 5 rows of the alerts DataFrame\n",
" print(df_alerts.head(5))"
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "BmeyPyCEi-lt",
"outputId": "a80e6f35-8225-4e1c-9147-5eed6162b294"
},
"execution_count": 17,
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
" Alert ID Header Description \\\n",
"0 166173 \\nStarting ... \n",
"1 166174
Starting Th... \n",
"2 166175
\\nStart... \n",
"3 166177
\\nStarting ... \n",
"4 166178
\\nStarting ... \n",
"\n",
" Severity Level \n",
"0 1 \n",
"1 1 \n",
"2 1 \n",
"3 1 \n",
"4 1 \n"
]
}
]
},
{
"cell_type": "markdown",
"source": [
"You can read the documentation of the Service Alerts below.\n",
"\n",
"[Calgary Transit Realtime Service Alerts GTFS - RT](https://data.calgary.ca/Transportation-Transit/Calgary-Transit-Realtime-Service-Alerts-GTFS-RT/jhgn-ynqj/about_data?source=post_page-----49baea30833e--------------------------------)\n",
"\n",
"Upon executing the above code snippet, I noticed that the data was not in the readable format. The descriptions were not fully displayed and contained HTML tags. So that’s why I have used BeautifulSoup library to clean this up, you can trim leading and trailing spaces or newlines from the alert text.\n",
"\n"
],
"metadata": {
"id": "u8RJ_PxxjOop"
}
},
{
"cell_type": "code",
"source": [
"from bs4 import BeautifulSoup # Import the BeautifulSoup library to parse and clean HTML\n",
"\n",
"def clean_html(raw_html):\n",
" \"\"\"Removes HTML tags and returns plain text.\"\"\"\n",
" # Create a BeautifulSoup object to parse the HTML content\n",
" soup = BeautifulSoup(raw_html, 'html.parser')\n",
" # Use the get_text() method to extract and return the plain text from the HTML\n",
" return soup.get_text()\n",
"# Loop through each alert and clean the \"Header\" and \"Description\" fields by removing HTML tags\n",
"for alert in alerts:\n",
" # Apply the clean_html function to the \"Header\" field\n",
" alert[\"Header\"] = clean_html(alert[\"Header\"])\n",
" # Apply the clean_html function to the \"Description\" field\n",
" alert[\"Description\"] = clean_html(alert[\"Description\"])\n",
"# Convert the cleaned alerts into a pandas DataFrame for easy viewing and manipulation\n",
"df_alerts = pd.DataFrame(alerts)\n",
"# Display the first 5 rows of the DataFrame to verify the cleaned alerts\n",
"print(df_alerts.head(5))\n"
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "VIEXmsrUjf6c",
"outputId": "bad67a5e-265d-4c24-b644-1e2d2c455c23"
},
"execution_count": 18,
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
" Alert ID Header Description \\\n",
"0 166173 \\nStarting Thursday, January 16 at 9 a.m. thro... \n",
"1 166174 Starting Thursday, January 16 at 9 a.m. throug... \n",
"2 166175 \\nStarting Thursday, January 16 at 9 a.m. thro... \n",
"3 166177 \\nStarting Thursday, January 16 at 9 a.m. thro... \n",
"4 166178 \\nStarting Thursday, January 16 at 9 a.m. thro... \n",
"\n",
" Severity Level \n",
"0 1 \n",
"1 1 \n",
"2 1 \n",
"3 1 \n",
"4 1 \n"
]
}
]
},
{
"cell_type": "markdown",
"source": [
"# Output\n",
"You might be asking yourself, “Can I associate this with the trip_id and vehicle_id”. The answer is Yes, by incorporating that information into your data processing, you can link the alerts to particular trip_id and vehicle_id. Every alert should be connected to the appropriate trip and vehicle. Once again, it’s a moving piece of the puzzle. Once the data is completely available for the public without encapsulation, this can be possible."
],
"metadata": {
"id": "i6yh1MbCjkL_"
}
},
{
"cell_type": "code",
"source": [
"df_alerts.head(5)"
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 507
},
"id": "W_5wtfRCjlxa",
"outputId": "4ff0413f-534f-4151-8920-4526d5fdac16"
},
"execution_count": 19,
"outputs": [
{
"output_type": "execute_result",
"data": {
"text/plain": [
" Alert ID Header Description \\\n",
"0 166173 \\nStarting Thursday, January 16 at 9 a.m. thro... \n",
"1 166174 Starting Thursday, January 16 at 9 a.m. throug... \n",
"2 166175 \\nStarting Thursday, January 16 at 9 a.m. thro... \n",
"3 166177 \\nStarting Thursday, January 16 at 9 a.m. thro... \n",
"4 166178 \\nStarting Thursday, January 16 at 9 a.m. thro... \n",
"\n",
" Severity Level \n",
"0 1 \n",
"1 1 \n",
"2 1 \n",
"3 1 \n",
"4 1 "
],
"text/html": [
"\n",
"
\n",
"
\n",
"\n",
"
\n",
" \n",
" \n",
" | \n",
" Alert ID | \n",
" Header | \n",
" Description | \n",
" Severity Level | \n",
"
\n",
" \n",
" \n",
" \n",
" | 0 | \n",
" 166173 | \n",
" | \n",
" \\nStarting Thursday, January 16 at 9 a.m. thro... | \n",
" 1 | \n",
"
\n",
" \n",
" | 1 | \n",
" 166174 | \n",
" | \n",
" Starting Thursday, January 16 at 9 a.m. throug... | \n",
" 1 | \n",
"
\n",
" \n",
" | 2 | \n",
" 166175 | \n",
" | \n",
" \\nStarting Thursday, January 16 at 9 a.m. thro... | \n",
" 1 | \n",
"
\n",
" \n",
" | 3 | \n",
" 166177 | \n",
" | \n",
" \\nStarting Thursday, January 16 at 9 a.m. thro... | \n",
" 1 | \n",
"
\n",
" \n",
" | 4 | \n",
" 166178 | \n",
" | \n",
" \\nStarting Thursday, January 16 at 9 a.m. thro... | \n",
" 1 | \n",
"
\n",
" \n",
"
\n",
"
\n",
"
\n",
"
\n"
],
"application/vnd.google.colaboratory.intrinsic+json": {
"type": "dataframe",
"variable_name": "df_alerts",
"summary": "{\n \"name\": \"df_alerts\",\n \"rows\": 22,\n \"fields\": [\n {\n \"column\": \"Alert ID\",\n \"properties\": {\n \"dtype\": \"string\",\n \"num_unique_values\": 22,\n \"samples\": [\n \"166173\",\n \"165820\",\n \"166075\"\n ],\n \"semantic_type\": \"\",\n \"description\": \"\"\n }\n },\n {\n \"column\": \"Header\",\n \"properties\": {\n \"dtype\": \"object\",\n \"num_unique_values\": 1,\n \"samples\": [\n \"\"\n ],\n \"semantic_type\": \"\",\n \"description\": \"\"\n }\n },\n {\n \"column\": \"Description\",\n \"properties\": {\n \"dtype\": \"string\",\n \"num_unique_values\": 22,\n \"samples\": [\n \"\\nStarting Thursday, January 16 at 9 a.m. through Friday, January 17 at 3 p.m., 1 Street will be closed between 7 - 9 Avenue S.W. for construction.\\u00a0 Route 6 will be detoured.\\nRoute 6 will travel:\\n\\nNorth on 1 Street S.W.\\u00a0\\nEast on 9 Avenue S.W.\\u00a0\\nNorth on Centre Street S.\\nWest on 6 Avenue S.W. to regular route\\n\\n\\tThe following stop will be temporarily closed:\\n\\tSB 1 ST @ 7 AV SW (#5303)\\n\\n\\tPlease use the following stop:\\n \\nNB Centre ST @ 7 AV S (farside) - detour stop\\n\\t \\n\"\n ],\n \"semantic_type\": \"\",\n \"description\": \"\"\n }\n },\n {\n \"column\": \"Severity Level\",\n \"properties\": {\n \"dtype\": \"number\",\n \"std\": 0,\n \"min\": 1,\n \"max\": 1,\n \"num_unique_values\": 1,\n \"samples\": [\n 1\n ],\n \"semantic_type\": \"\",\n \"description\": \"\"\n }\n }\n ]\n}"
}
},
"metadata": {},
"execution_count": 19
}
]
},
{
"cell_type": "markdown",
"source": [
"# Conclusion\n",
"You’ve made it to the end! This topic is vast and offers significant opportunities for further exploration and development. With the right approach, you could create a new application to help Calgary’s residents avoid inconveniences during bus travel, especially in extreme weather conditions. If you encounter any issues while executing the code, feel free to reach out. Suggestions are always welcome! I hope you enjoyed reading this article, and I look forward to seeing you next time. Happy coding!"
],
"metadata": {
"id": "Wt0wpn1Pjwop"
}
}
]
}