{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Prognostics Server (prog_server)\n", "\n", "The ProgPy Server (`prog_server`) is a simplified implementation of a Service-Oriented Architecture (SOA) for performing prognostics (estimation of time until events and future system states) of engineering systems. `prog_server` is a wrapper around the ProgPy package, allowing one or more users to access the features of these packages through a REST API. The package is intended to be used as a research tool to prototype and benchmark Prognostics As-A-Service (PaaS) architectures and work on the challenges facing such architectures, including Generality, Communication, Security, Environmental Complexity, Utility, and Trust.\n", "\n", "The ProgPy Server is actually two packages, `prog_server` and `prog_client`. The `prog_server` package is a prognostics server that provides the REST API. The `prog_client` package is a python client that provides functions to interact with the server via the REST API.\n", "\n", "**TODO(CT): IMAGE- server with clients**" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Table of Contents\n", "* [Installing](#Installing)\n", "* [Starting prog_server](#Starting-prog_server)\n", " * [Command Line](#Command-Line)\n", " * [Programmtically](#Programatically)\n", "* [Using prog_server with prog_client](#Using-prog_server-with-prog_client)\n", " * [Online Prognostics Example](#Online-Prognostics-Example)\n", " * [Option Scoring Example](#Option-scoring-example)\n", "* [Using prog_server with REST Interface](#Using-prog_server-with-REST-Interface)\n", "* [Custom Models](#Custom-Models)\n", "* [Closing prog_server](#Closing-prog_server)\n", "* [Conclusion](#Conclusion)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Installing\n", "\n", "`prog_server` can be installed using pip\n", "\n", "```console\n", "$ pip install prog_server\n", "```" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Starting prog_server" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "`prog_server` can be started through the command line or programatically (i.e., in a python script). Once the server is started, it will take a short time to initialize. Then, it will start receiving requests for sessions from clients using `prog_client`, or interacting directly using the REST interface." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Command Line\n", "Generally, you can start `prog_server` by running the module, like this:\n", "\n", "```console\n", "$ python -m prog_server\n", "```\n", "\n", "Note that you can force the server to start in debug mode using the `debug` flag. For example, `python -m prog_server --debug`" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Programatically\n", "There are two methods to start the `prog_server` programatically in python. The first, below, is non-blocking and allows users to perform other functions while the server is running." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import prog_server\n", "\n", "prog_server.start()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "When starting a server, users can also provide arguments to customize the way the server runs. Here are the main arguments used:\n", "\n", "* `host` (str): Server host address. Defaults to ‘127.0.0.1’\n", "* `port` (int): Server port address. Defaults to 8555\n", "* `debug` (bool): If the server is to be started in debug mode\n", "\n", "Now `prog_server` is ready to start receiving session requests from users. The server can also be stopped using the `stop()` function" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "prog_server.stop()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "`prog_server` can also be started in blocked mode using the following command:\n", "\n", "```python\n", ">>> prog_server.run()\n", "```\n", "\n", "We will not execute it here, because it would block execution in this notebook until we force quit.\n", "\n", "For details on all supported arguments, see the [API Doc](https://nasa.github.io/progpy/api_ref/prog_server/prog_server.html#prog_server.start).\n", "\n", "The basis of `prog_server` is the session. Each user creates one or more session. These sessions are each a request for prognostic services. Then the user can interact with the open session. You'll see examples of this in the future sections.\n", "\n", "Let's restart the server again so it can be used with the below examples." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "prog_server.start()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Using prog_server with prog_client\n", "\n", "For users using python, `prog_server` can be interacted with using the `prog_client` package distributed with ProgPy. This section describes a few examples using `prog_client` and `prog_server` together.\n", "\n", "We will first import the needed package." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import prog_client" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Online Prognostics Example\n", "This example creates a session with the server to run prognostics for a Thrown Object, a simplified model of an object thrown into the air. Data is then sent to the server and a prediction is requested. The prediction is then displayed.\n", "\n", "**Note: before running this example, make sure `prog_server` is running.**\n", "\n", "The first step is to open a session with the server. This starts a session for prognostics with the ThrownObject model, with default parameters. The prediction configuration is updated to have a save frequency of every 1 second." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "session = prog_client.Session(\"ThrownObject\", pred_cfg={\"save_freq\": 1})\n", "print(session) # Printing the Session Information" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If you were to re-run the lines above, it would start a new session, with a new number.\n", "\n", "Next, we need to prepare the data we will use for this example. The data is a dictionary, and the keys are the names of the inputs and outputs in the model with format (time, value).\n", "\n", "Note that in an actual application, the data would be received from a sensor or other source. The structure below is used to emulate the sensor." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "example_data = [\n", " (0, {\"x\": 1.83}),\n", " (0.1, {\"x\": 5.81}),\n", " (0.2, {\"x\": 9.75}),\n", " (0.3, {\"x\": 13.51}),\n", " (0.4, {\"x\": 17.20}),\n", " (0.5, {\"x\": 20.87}),\n", " (0.6, {\"x\": 24.37}),\n", " (0.7, {\"x\": 27.75}),\n", " (0.8, {\"x\": 31.09}),\n", " (0.9, {\"x\": 34.30}),\n", " (1.0, {\"x\": 37.42}),\n", " (1.1, {\"x\": 40.43}),\n", " (1.2, {\"x\": 43.35}),\n", " (1.3, {\"x\": 46.17}),\n", " (1.4, {\"x\": 48.91}),\n", " (1.5, {\"x\": 51.53}),\n", " (1.6, {\"x\": 54.05}),\n", " (1.7, {\"x\": 56.50}),\n", " (1.8, {\"x\": 58.82}),\n", " (1.9, {\"x\": 61.05}),\n", " (2.0, {\"x\": 63.20}),\n", " (2.1, {\"x\": 65.23}),\n", " (2.2, {\"x\": 67.17}),\n", " (2.3, {\"x\": 69.02}),\n", " (2.4, {\"x\": 70.75}),\n", " (2.5, {\"x\": 72.40}),\n", "]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now, we can start sending the data to the server, checking periodically to see if there is a completed prediction." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from time import sleep\n", "\n", "LAST_PREDICTION_TIME = None\n", "for i in range(len(example_data)):\n", " # Send data to server\n", " print(f\"{example_data[i][0]}s: Sending data to server... \", end=\"\")\n", " session.send_data(time=example_data[i][0], **example_data[i][1])\n", "\n", " # Check for a prediction result\n", " status = session.get_prediction_status()\n", " if LAST_PREDICTION_TIME != status[\"last prediction\"]:\n", " # New prediction result\n", " LAST_PREDICTION_TIME = status[\"last prediction\"]\n", " print(\"Prediction Completed\")\n", "\n", " # Get prediction\n", " # Prediction is returned as a type uncertain_data, so you can manipulate it like that datatype.\n", " # See https://nasa.github.io/prog_algs/uncertain_data.html\n", " t, prediction = session.get_predicted_toe()\n", " print(f\"Predicted ToE (using state from {t}s): \")\n", " print(prediction.mean)\n", "\n", " # Get Predicted future states\n", " # You can also get the predicted future states of the model.\n", " # States are saved according to the prediction configuration parameter 'save_freq' or 'save_pts'\n", " # In this example we have it setup to save every 1 second.\n", " # Return type is UnweightedSamplesPrediction (since we're using the monte carlo predictor)\n", " # See https://nasa.github.io/prog_algs\n", " t, event_states = session.get_predicted_event_state()\n", " print(f\"Predicted Event States (using state from {t}s): \")\n", " es_means = [\n", " (event_states.times[i], event_states.snapshot(i).mean)\n", " for i in range(len(event_states.times))\n", " ]\n", " for time, es_mean in es_means:\n", " print(f\"\\t{time}s: {es_mean}\")\n", "\n", " # Note: you can also get the predicted future states of the model (see get_predicted_states()) or performance parameters (see get_predicted_performance_metrics())\n", "\n", " else:\n", " print(\"No prediction yet\")\n", " # No updated prediction, send more data and check again later.\n", " sleep(0.1)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Notice that the prediction wasn't updated every time step. It takes a bit of time to perform a prediction.\n", "\n", "Note that we can also get the model from `prog_server` to work with directly." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "model = session.get_model()\n", "\n", "print(model)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Option Scoring Example\n", "\n", "This example creates a session with the server to run prognostics for a `BatteryCircuit` model. Three options with different loading profiles are compared by creating a session for each option and comparing the resulting prediction metrics.\n", "\n", "First step is to prepare load profiles to compare. Each load profile has format `Array[Dict]`. Where each dict is in format `{TIME: LOAD}`, where `TIME` is the start of that loading in seconds. `LOAD` is a dict with keys corresponding to model.inputs. Note that the dict must be in order of increasing time.\n", "\n", "Here we introduce 3 load profiles to be used with simulation:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plan0 = {0: {\"i\": 2}, 600: {\"i\": 1}, 900: {\"i\": 4}, 1800: {\"i\": 2}, 3000: {\"i\": 3}}" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plan1 = {0: {\"i\": 3}, 900: {\"i\": 2}, 1000: {\"i\": 3.5}, 2000: {\"i\": 2.5}, 2300: {\"i\": 3}}" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plan2 = {\n", " 0: {\"i\": 1.25},\n", " 800: {\"i\": 2},\n", " 1100: {\"i\": 2.5},\n", " 2200: {\"i\": 6},\n", "}" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "LOAD_PROFILES = [plan0, plan1, plan2]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The next step is to open a session with the battery circuit model for each of the 3 plans. We are specifying a time of interest of 2000 seconds (for the sake of a demo). This could be the end of a mission/session, or some inspection time." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sessions = [\n", " prog_client.Session(\n", " \"BatteryCircuit\",\n", " pred_cfg={\"save_pts\": [2000], \"save_freq\": 1e99, \"n_samples\": 15},\n", " load_est=\"Variable\",\n", " load_est_cfg=LOAD_PROFILES[i],\n", " )\n", " for i in range(len(LOAD_PROFILES))\n", "]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now let's wait for prognostics to be complete." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "for session in sessions:\n", " sessions_in_progress = True\n", " while sessions_in_progress:\n", " sessions_in_progress = False\n", " status = session.get_prediction_status()\n", " if status[\"in progress\"] != 0:\n", " print(f\"\\tSession {session.session_id} is still in progress\")\n", " sessions_in_progress = True\n", " sleep(5)\n", " print(f\"\\tSession {session.session_id} complete\")\n", "print(\"All sessions complete\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now that the sessions are complete, we can get the results." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "results = [session.get_predicted_toe()[1] for session in sessions]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now let's compare results. Let's look at the mean Time to Event (`ToE`):" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print(\"Mean ToE:\")\n", "best_toe = 0\n", "best_plan = None\n", "for i in range(len(results)):\n", " mean_toe = results[i].mean[\"EOD\"]\n", " print(f\"\\tOption {i}: {mean_toe:0.2f}s\")\n", " if mean_toe > best_toe:\n", " best_toe = mean_toe\n", " best_plan = i\n", "print(f\"Best option using method 1: Option {best_plan}\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As a second metric, let's look at the `SOC` at our point of interest (2000 seconds)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "best_soc = 0\n", "best_plan = None\n", "soc = [session.get_predicted_event_state()[1] for session in sessions]\n", "for i in range(len(soc)):\n", " mean_soc = soc[i].snapshot(-1).mean[\"EOD\"]\n", " print(f\"\\tOption {i}: {mean_soc:0.3f} SOC\")\n", " if mean_soc > best_soc:\n", " best_soc = mean_soc\n", " best_plan = i\n", "print(f\"Best option using method 2: Option {best_plan}\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Other metrics can be used as well, like probability of mission success given a certain mission time, uncertainty in `ToE` estimate, final state at end of mission, among others." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Using prog_server with REST Interface\n", "\n", "Communication with ProgPy is through a REST interface. The REST API is described here: [prog_server REST API](https://app.swaggerhub.com/apis-docs/teubert/prog_server/).\n", "\n", "Most programming languages have a way of interacting with REST APIs (either native or through a package/library). `curl` requests can also be used by command line or apps like Postman." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Custom Models\n", "**A version of this section will be added in release v1.9** " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Closing prog_server\n", "When you're done using prog_server, make sure you turn off the server." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "prog_server.stop()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Conclusion" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In this section, we have demonstrated how to use the ProgPy server, including `prog_server` and `prog_client`. This is the last notebook in the ProgPy tutorial series.\n", "\n", "For more information about ProgPy in general, check out the __[00 Intro](00_Intro.ipynb)__ notebook and [ProgPy documentation](https://nasa.github.io/progpy/index.html)." ] } ], "metadata": { "kernelspec": { "display_name": "Python 3.11.0 ('env': venv)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.11.0" }, "orig_nbformat": 4, "vscode": { "interpreter": { "hash": "71ccad9e81d0b15f7bb5ef75e2d2ca570011b457fb5a41421e3ae9c0e4c33dfc" } } }, "nbformat": 4, "nbformat_minor": 2 }