{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Prediction API\n", "\n", "Programmatically use OpenPifPaf to run multi-person pose estimation on an image.\n", "The API is for more advanced use cases. Please read {doc}`predict_cli` as well." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "import io\n", "import numpy as np\n", "import openpifpaf\n", "import PIL\n", "import requests\n", "import torch\n", "\n", "%matplotlib inline\n", "openpifpaf.show.Canvas.show = True\n", "\n", "device = torch.device('cpu')\n", "# device = torch.device('cuda') # if cuda is available\n", "\n", "print(openpifpaf.__version__)\n", "print(torch.__version__)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Load an Example Image\n", "\n", "Image credit: \"[Learning to surf](https://www.flickr.com/photos/fotologic/6038911779/in/photostream/)\" by fotologic which is licensed under [CC-BY-2.0].\n", "\n", "[CC-BY-2.0]: https://creativecommons.org/licenses/by/2.0/" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "image_response = requests.get('https://raw.githubusercontent.com/vita-epfl/openpifpaf/master/docs/coco/000000081988.jpg')\n", "pil_im = PIL.Image.open(io.BytesIO(image_response.content)).convert('RGB')\n", "im = np.asarray(pil_im)\n", "\n", "with openpifpaf.show.image_canvas(im) as ax:\n", " pass" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Load a Trained Neural Network" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "net_cpu, _ = openpifpaf.network.Factory(checkpoint='shufflenetv2k16', download_progress=False).factory()\n", "net = net_cpu.to(device)\n", "\n", "openpifpaf.decoder.utils.CifSeeds.threshold = 0.5\n", "openpifpaf.decoder.utils.nms.Keypoints.keypoint_threshold = 0.2\n", "openpifpaf.decoder.utils.nms.Keypoints.instance_threshold = 0.2\n", "processor = openpifpaf.decoder.factory([hn.meta for hn in net_cpu.head_nets])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Preprocessing, Dataset\n", "\n", "Specify the image preprocossing. Beyond the default transforms, we also use `CenterPadTight(16)` which adds padding to the image such that both the height and width are multiples of 16 plus 1. With this padding, the feature map covers the entire image. Without it, there would be a gap on the right and bottom of the image that the feature map does not cover." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "preprocess = openpifpaf.transforms.Compose([\n", " openpifpaf.transforms.NormalizeAnnotations(),\n", " openpifpaf.transforms.CenterPadTight(16),\n", " openpifpaf.transforms.EVAL_TRANSFORM,\n", "])\n", "data = openpifpaf.datasets.PilImageList([pil_im], preprocess=preprocess)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Dataloader, Visualizer" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "loader = torch.utils.data.DataLoader(\n", " data, batch_size=1, pin_memory=True, \n", " collate_fn=openpifpaf.datasets.collate_images_anns_meta)\n", "\n", "annotation_painter = openpifpaf.show.AnnotationPainter()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Prediction" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "for images_batch, _, __ in loader:\n", " predictions = processor.batch(net, images_batch, device=device)[0]\n", " with openpifpaf.show.image_canvas(im) as ax:\n", " annotation_painter.annotations(ax, predictions)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Each prediction in the `predictions` list above is of type `Annotation`. You can access the joint coordinates in the `data` attribute. It is a numpy array that contains the $x$ and $y$ coordinates and the confidence for every joint:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "predictions[0].data" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Fields\n", "\n", "Below are visualizations of the fields.\n", "When using the API here, the visualization types are individually enabled.\n", "Then, the index for every field to visualize must be specified. In the example below, the fifth CIF (left shoulder) and the fifth CAF (left shoulder to left hip) are activated.\n", "\n", "These plots are also accessible from the command line: use `--debug-indices cif:5 caf:5` to select which joints and connections to visualize." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "openpifpaf.visualizer.Base.set_all_indices(['cif,caf:5:confidence'])\n", "\n", "for images_batch, _, __ in loader:\n", " predictions = processor.batch(net, images_batch, device=device)[0]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "openpifpaf.visualizer.Base.set_all_indices(['cif,caf:5:regression'])\n", "\n", "for images_batch, _, __ in loader:\n", " predictions = processor.batch(net, images_batch, device=device)[0]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "From the CIF field, a high resolution accumulation (in the code it's called `CifHr`) is generated.\n", "This is also the basis for the seeds. Both are shown below." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "openpifpaf.visualizer.Base.set_all_indices(['cif:5:hr', 'seeds'])\n", "\n", "for images_batch, _, __ in loader:\n", " predictions = processor.batch(net, images_batch, device=device)[0]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Starting from a seed, the poses are constructed. At every joint position, an occupancy map marks whether a previous pose was already constructed here. This reduces the number of poses that are constructed from multiple seeds for the same person. The final occupancy map is below:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "openpifpaf.visualizer.Base.set_all_indices(['occupancy:5'])\n", "\n", "for images_batch, _, __ in loader:\n", " predictions = processor.batch(net, images_batch, device=device)[0]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.6-final" } }, "nbformat": 4, "nbformat_minor": 2 }