{ "cells": [ { "cell_type": "code", "execution_count": null, "id": "295b7519", "metadata": { "tags": [ "remove-cell" ] }, "outputs": [], "source": [ "%matplotlib inline\n", "\n", "import IPython\n", "\n", "import openpifpaf\n", "openpifpaf.show.Canvas.show = True" ] }, { "cell_type": "markdown", "id": "507ff862", "metadata": {}, "source": [ "# WholeBody\n", "
by Duncan Zauss, 07/05/2021

\n", "This is an extension to OpenPifPaf to detect body, foot, face and hand keypoints, which sum up to 133 keypoints per person. Thus, this plugin is especially useful if fine-grained face, hand or foot keypoints are required. The annotations for these keypoints are taken from the COCO WholeBody dataset." ] }, { "cell_type": "markdown", "id": "9af39c74", "metadata": {}, "source": [ "## Prediction\n", "We provide two pretrained models for predicting the 133 keypoints of the COCO WholeBody dataset. The models can be called with `--checkpoint=shufflenetv2k16-wholebody` or `--checkpoint=shufflenetv2k30-wholebody`. Below an example prediction is shown." ] }, { "cell_type": "code", "execution_count": null, "id": "c48bff6f", "metadata": {}, "outputs": [], "source": [ "%%bash\n", "python -m openpifpaf.predict wholebody/soccer.jpeg \\\n", " --checkpoint=shufflenetv2k30-wholebody --line-width=2 --image-output" ] }, { "cell_type": "code", "execution_count": null, "id": "9cf35743", "metadata": {}, "outputs": [], "source": [ "IPython.display.Image('wholebody/soccer.jpeg.predictions.jpeg')" ] }, { "cell_type": "markdown", "id": "a72d4523", "metadata": {}, "source": [ "Image credit: [Photo](https://de.wikipedia.org/wiki/Kamil_Vacek#/media/Datei:Kamil_Vacek_20200627.jpg) by [Lokomotive74](https://commons.wikimedia.org/wiki/User:Lokomotive74) which is licensed under [CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/)." ] }, { "cell_type": "markdown", "id": "0f0c9980", "metadata": {}, "source": [ "## Visualization of the additional keypoints\n", "Original MS COCO skeleton / COCO WholeBody skeleton" ] }, { "cell_type": "code", "execution_count": null, "id": "d3912ba0", "metadata": { "tags": [ "hide-input" ] }, "outputs": [], "source": [ "# HIDE CODE\n", "\n", "# first make an annotation\n", "ann_coco = openpifpaf.Annotation.from_cif_meta(\n", " openpifpaf.plugins.coco.CocoKp().head_metas[0])\n", "ann_wholebody = openpifpaf.Annotation.from_cif_meta(\n", " openpifpaf.plugins.wholebody.Wholebody().head_metas[0])\n", "\n", "# visualize the annotation\n", "openpifpaf.show.KeypointPainter.show_joint_scales = False\n", "openpifpaf.show.KeypointPainter.line_width = 3\n", "keypoint_painter = openpifpaf.show.KeypointPainter()\n", "with openpifpaf.show.Canvas.annotation(ann_wholebody, ncols=2) as (ax1, ax2):\n", " keypoint_painter.annotation(ax1, ann_coco)\n", " keypoint_painter.annotation(ax2, ann_wholebody)" ] }, { "cell_type": "markdown", "id": "cdba73a6", "metadata": {}, "source": [ "## Train\n", "If you don't want to use the pre-trained model, you can train a model from scratch.\n", "To train you first need to download the wholebody into your MS COCO dataset folder:\n", "```\n", "wget https://github.com/DuncanZauss/openpifpaf_assets/releases/download/v0.1.0/person_keypoints_train2017_wholebody_pifpaf_style.json -O //data-mscoco/annotations\n", "wget https://github.com/DuncanZauss/openpifpaf_assets/releases/download/v0.1.0/person_keypoints_val2017_wholebody_pifpaf_style.json -O //data-mscoco/annotations\n", "```\n", "Note: The pifpaf style annotation files were create with [Get_annotations_from_coco_wholebody.py](https://github.com/openpifpaf/openpifpaf/blob/main/openpifpaf/plugins/wholebody/helper_scripts/Get_annotations_from_coco_wholebody.py). 
If you want to create your own annotation files from COCO WholeBody, you need to download the original files from the [COCO WholeBody page](https://github.com/jin-s13/COCO-WholeBody#download) and then create the pifpaf-readable JSON files with [Get_annotations_from_coco_wholebody.py](https://github.com/openpifpaf/openpifpaf/blob/main/openpifpaf/plugins/wholebody/helper_scripts/Get_annotations_from_coco_wholebody.py). This can be useful if, for example, you only want to use a subset of the images for training.
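\n", "\n", "Instead of re-running the conversion script, an existing pifpaf-style file can also be filtered down to a subset of images directly. The following is a minimal sketch (the output file name and the 10% fraction are arbitrary examples):\n", "```\n", "import json\n", "import random\n", "\n", "with open('/data-mscoco/annotations/person_keypoints_train2017_wholebody_pifpaf_style.json') as f:\n", "    data = json.load(f)\n", "\n", "# keep a random 10% of the images\n", "random.seed(1)\n", "keep = {im['id'] for im in random.sample(data['images'], len(data['images']) // 10)}\n", "\n", "# drop images and annotations outside the subset\n", "data['images'] = [im for im in data['images'] if im['id'] in keep]\n", "data['annotations'] = [ann for ann in data['annotations'] if ann['image_id'] in keep]\n", "\n", "with open('/data-mscoco/annotations/person_keypoints_train2017_wholebody_subset.json', 'w') as f:\n", "    json.dump(data, f)\n", "```\n", "\n", "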
\n", "```\n", "python3 -m openpifpaf.train --lr=0.0001 --momentum=0.95 --b-scale=3.0 --epochs=150 --lr-decay 130 140 --lr-decay-epochs=10 --batch-size=16 --weight-decay=1e-5 --dataset=wholebody --wholebody-upsample=2 --basenet=shufflenetv2k16 --loader-workers=16 --wholebody-train-annotations=/data-mscoco/annotations/person_keypoints_train2017_wholebody_pifpaf_style.json --wholebody-val-annotations=/data-mscoco/annotations/person_keypoints_val2017_wholebody_pifpaf_style.json --wholebody-train-image-dir= --wholebody-val-image-dir=\n", "```\n", "\n", "## Evaluation\n", "To evaluate your network you can use the following command. Important note: For evaluation you will need the original [validation annotation file from COCO WholeBody](https://drive.google.com/file/d/1N6VgwKnj8DeyGXCvp1eYgNbRmw6jdfrb/view), which has a different format than the files that are used for training. We use this different annotation format as we use the [extended pycocotools](https://github.com/jin-s13/xtcocoapi) for evaluation as proposed by the authors of COCO WholeBody. You can run the evaluation with:
\n", "```\n", "python3 -m openpifpaf.eval --dataset=wholebody --checkpoint=shufflenetv2k16-wholebody --force-complete-pose --seed-threshold=0.2 --loader-workers=16 --wholebody-val-annotations=/coco_wholebody_val_v1.0.json --wholebody-val-image-dir=\n", "```\n", "\n", "## Using only a subset of keypoints\n", "If you only want to train on a subset of keypoints, e.g. if you do not need the facial keypoints and only want to train on the body, foot and hand keypoints, it should be fairly easy to just train on this subset. You will need to:\n", "- Download the original annotation files from the [Coco Whole body page](https://github.com/jin-s13/COCO-WholeBody#download). Create a new annotations file with [Get_annotations_from_coco_wholebody.py](https://github.com/openpifpaf/openpifpaf/blob/main/openpifpaf/plugins/wholebody/helper_scripts/Get_annotations_from_coco_wholebody.py). Set `ann_types`to the keypoints that you would like to use and create the train and val json file. You can use [Visualize_annotations.py](https://github.com/openpifpaf/openpifpaf/blob/main/openpifpaf/plugins/wholebody/helper_scripts/Visualize_annotations.py.py) to verify that the json file was created correctly.\n", "- In the [constants.py](https://github.com/openpifpaf/openpifpaf/blob/main/openpifpaf/plugins/wholebody/constants.py) file comment out all the parts of the skeleton, pose, HFLIP, SIGMA and keypoint names that you do not need. All these constants are already split up in the body parts. The numbering of the joints may now be different (e.g. when you discard the face kpts, but keep the hand kpts), so you need to adjust the numbers in the skeleton definitions to be consisten with the new numbering of the joints.\n", "- That's it! You can train the model with a subset of keypoints.\n", "\n", "\n", "## Links\n", "\n", "[HuggingFace Demo](https://huggingface.co/spaces/akhaliq/Keypoint_Communities), [PapersWithCode](https://paperswithcode.com/paper/keypoint-communities).\n" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.10" } }, "nbformat": 4, "nbformat_minor": 5 }