{ "cells": [ { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from fastai.vision import *" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Data" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We are going to use the [Pascal dataset](http://host.robots.ox.ac.uk/pascal/VOC/) for object detection. There is a version from 2007 and a bigger version from 2012. We'll use the 2007 version here. " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "path = untar_data(URLs.PASCAL_2007)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The annotations for the images are stored in json files that give the bounding boxes for each class." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import json\n", "annots = json.load(open(path/'train.json'))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "annots.keys()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "annots['annotations'][0]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This first annotation is a bounding box on the image with id 12, and the corresponding object is the category with id 7. We can read the correspondance in the 'images' and the 'categories' keys." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "annots['categories']" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "There is a convenience method in fastai to extract all the annotations and map them with the right images/categories directly, as long as they are in the format we just saw (called the COCO format). " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "train_images, train_lbl_bbox = get_annotations(path/'train.json')\n", "val_images, val_lbl_bbox = get_annotations(path/'valid.json')\n", "#tst_images, tst_lbl_bbox = get_annotations(path/'test.json')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here we will directly find the same image as before at the beginning of the training set, with the corresponding bounding box and category." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "train_images[0], train_lbl_bbox[0]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To see it, we open the image properly and we create an `ImageBBox` object from the list of bounding boxes. This will allow us to apply data augmentation to our bounding box. To create an `ImageBBox`, we need to give it the height and the width of the original picture, the list of bounding boxes, the list of category ids and the classes list (to map an id to a class).\n", "\n", "Here we don't have a class dictionary available (that will be done automatically behind the scenes with the data block API), so we just pass id 0 and `classes=['car']`." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "img = open_image(path/'train'/train_images[0])\n", "bbox = ImageBBox.create(*img.size, train_lbl_bbox[0][0], [0], classes=['car'])\n", "img.show(figsize=(6,4), y=bbox)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This works with one or several bounding boxes:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "train_images[1], train_lbl_bbox[1]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "img = open_image(path/'train'/train_images[1])\n", "bbox = ImageBBox.create(*img.size, train_lbl_bbox[1][0], [0, 1], classes=['person', 'horse'])\n", "img.show(figsize=(6,4), y=bbox)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "And if we apply a transform to our image and the `ImageBBox` object, they stay aligned:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "img = img.rotate(-10)\n", "bbox = bbox.rotate(-10)\n", "img.show(figsize=(6,4), y=bbox)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We group all the image filenames and annotations together, so we can use the data block API to load the dataset into a `DataBunch`." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "images, lbl_bbox = train_images+val_images,train_lbl_bbox+val_lbl_bbox\n", "img2bbox = dict(zip(images, lbl_bbox))\n", "get_y_func = lambda o:img2bbox[o.name]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def get_data(bs, size):\n", "    src = ObjectItemList.from_folder(path/'train')\n", "    src = src.split_by_files(val_images)\n", "    src = src.label_from_func(get_y_func)\n", "    src = src.transform(get_transforms(), size=size, tfm_y=True)\n", "    return src.databunch(path=path, bs=bs, collate_fn=bb_pad_collate)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "data = get_data(64,128)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "data.show_batch(rows=3)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Model" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The architecture we will use is a [RetinaNet](https://arxiv.org/abs/1708.02002), which is based on a [Feature Pyramid Network](https://arxiv.org/abs/1612.03144). \n", "\n", "![Retina net](images/retinanet.png)\n", "\n", "This is a bit like a U-Net in the sense that we have one branch where the image is progressively reduced, then another one where we upsample it again, and there are lateral connections, but here we use the feature maps produced at each level for our final predictions. Specifically, if we start with an image of size (256,256), a traditional ResNet has intermediate feature maps of sizes:\n", "- C1 (128, 128) \n", "- C2 (64, 64)\n", "- C3 (32, 32)\n", "- C4 (16, 16)\n", "- C5 (8, 8)\n", "\n", "To these the authors add two more feature maps, C6 and C7, of sizes (4,4) and (2,2), by using stride-2 convolutions. (Note that the model requires an image size of at least 128 because of this.)\n", "\n", "Then we have P7 = C7 and we go down from P7 to P3 by upsampling the result of the previous P-layer and adding a lateral connection. The idea is that the coarsest feature map P7 will be responsible for detecting big objects, while one like P3 will be responsible for detecting smaller objects.
\n", "\n", "Each P-something feature map then goes through two subnet of four convolutional layers (with the same weights for all the feature maps), one that will be responsible for finding the category of the object and the other for drawing the bounding box. Each location in the feature map is assigned a given number of anchors (see below) so the classifier ends up with `n_anchors * n_classes` channels and the bounding box regressor with `n_anchors * 4` channels." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#Grab the convenience functions that helps us buil the Unet\n", "from fastai.vision.models.unet import _get_sfs_idxs, model_sizes, hook_outputs" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "class LateralUpsampleMerge(nn.Module):\n", " \"Merge the features coming from the downsample path (in `hook`) with the upsample path.\"\n", " def __init__(self, ch, ch_lat, hook):\n", " super().__init__()\n", " self.hook = hook\n", " self.conv_lat = conv2d(ch_lat, ch, ks=1, bias=True)\n", " \n", " def forward(self, x):\n", " return self.conv_lat(self.hook.stored) + F.interpolate(x, self.hook.stored.shape[-2:], mode='nearest')" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "class RetinaNet(nn.Module):\n", " \"Implements RetinaNet from https://arxiv.org/abs/1708.02002\"\n", " def __init__(self, encoder:nn.Module, n_classes, final_bias=0., chs=256, n_anchors=9, flatten=True):\n", " super().__init__()\n", " self.n_classes,self.flatten = n_classes,flatten\n", " imsize = (256,256)\n", " sfs_szs = model_sizes(encoder, size=imsize)\n", " sfs_idxs = list(reversed(_get_sfs_idxs(sfs_szs)))\n", " self.sfs = hook_outputs([encoder[i] for i in sfs_idxs])\n", " self.encoder = encoder\n", " self.c5top5 = conv2d(sfs_szs[-1][1], chs, ks=1, bias=True)\n", " self.c5top6 = conv2d(sfs_szs[-1][1], chs, stride=2, bias=True)\n", " self.p6top7 = nn.Sequential(nn.ReLU(), conv2d(chs, chs, stride=2, bias=True))\n", " self.merges = nn.ModuleList([LateralUpsampleMerge(chs, sfs_szs[idx][1], hook) \n", " for idx,hook in zip(sfs_idxs[-2:-4:-1], self.sfs[-2:-4:-1])])\n", " self.smoothers = nn.ModuleList([conv2d(chs, chs, 3, bias=True) for _ in range(3)])\n", " self.classifier = self._head_subnet(n_classes, n_anchors, final_bias, chs=chs)\n", " self.box_regressor = self._head_subnet(4, n_anchors, 0., chs=chs)\n", " \n", " def _head_subnet(self, n_classes, n_anchors, final_bias=0., n_conv=4, chs=256):\n", " \"Helper function to create one of the subnet for regression/classification.\"\n", " layers = [conv_layer(chs, chs, bias=True, norm_type=None) for _ in range(n_conv)]\n", " layers += [conv2d(chs, n_classes * n_anchors, bias=True)]\n", " layers[-1].bias.data.zero_().add_(final_bias)\n", " layers[-1].weight.data.fill_(0)\n", " return nn.Sequential(*layers)\n", " \n", " def _apply_transpose(self, func, p_states, n_classes):\n", " #Final result of the classifier/regressor is bs * (k * n_anchors) * h * w\n", " #We make it bs * h * w * n_anchors * k then flatten in bs * -1 * k so we can contenate\n", " #all the results in bs * anchors * k (the non flatten version is there for debugging only)\n", " if not self.flatten: \n", " sizes = [[p.size(0), p.size(2), p.size(3)] for p in p_states]\n", " return [func(p).permute(0,2,3,1).view(*sz,-1,n_classes) for p,sz in zip(p_states,sizes)]\n", " else:\n", " return torch.cat([func(p).permute(0,2,3,1).contiguous().view(p.size(0),-1,n_classes) for p 
in p_states],1)\n", "    \n", "    def forward(self, x):\n", "        c5 = self.encoder(x)\n", "        p_states = [self.c5top5(c5.clone()), self.c5top6(c5)]\n", "        p_states.append(self.p6top7(p_states[-1]))\n", "        for merge in self.merges: p_states = [merge(p_states[0])] + p_states\n", "        for i, smooth in enumerate(self.smoothers[:3]):\n", "            p_states[i] = smooth(p_states[i])\n", "        return [self._apply_transpose(self.classifier, p_states, self.n_classes), \n", "                self._apply_transpose(self.box_regressor, p_states, 4),\n", "                [[p.size(2), p.size(3)] for p in p_states]]\n", "    \n", "    def __del__(self):\n", "        if hasattr(self, \"sfs\"): self.sfs.remove()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The model is a bit complex, but that's not the hardest part. It will spit out an absurdly high number of predictions: for the features P3 to P7 with an image size of 256, we have `32*32 + 16*16 + 8*8 + 4*4 +2*2` possible locations across the five feature maps, which gives 1,364 possible detections; multiplied by the number of anchors we choose to attribute to each location (9 below), that makes 12,276 possible hits.\n", "\n", "A lot of those aren't going to correspond to any object in the picture, and we need to somehow match all those predictions to either nothing or a given bounding box in the picture." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Anchor boxes" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If we look at the feature map of size `4*4`, we have 16 locations numbered like below: " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "torch.arange(0,16).long().view(4,4)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The most basic way to map one of these features to an actual area inside the image is to create the regular 4 by 4 grid. Our convention is that `y` is first (like in numpy or PyTorch), and that all coordinates are scaled from -1 to 1 (-1 being top/left, 1 being bottom/right).
" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def create_grid(size):\n", " \"Create a grid of a given `size`.\"\n", " H, W = size if is_tuple(size) else (size,size)\n", " grid = FloatTensor(H, W, 2)\n", " linear_points = torch.linspace(-1+1/W, 1-1/W, W) if W > 1 else tensor([0.])\n", " grid[:, :, 1] = torch.ger(torch.ones(H), linear_points).expand_as(grid[:, :, 0])\n", " linear_points = torch.linspace(-1+1/H, 1-1/H, H) if H > 1 else tensor([0.])\n", " grid[:, :, 0] = torch.ger(linear_points, torch.ones(W)).expand_as(grid[:, :, 1])\n", " return grid.view(-1,2)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's use a helper function to draw those anchors:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def show_anchors(ancs, size):\n", " _,ax = plt.subplots(1,1, figsize=(5,5))\n", " ax.set_xticks(np.linspace(-1,1, size[1]+1))\n", " ax.set_yticks(np.linspace(-1,1, size[0]+1))\n", " ax.grid()\n", " ax.scatter(ancs[:,1], ancs[:,0]) #y is first\n", " ax.set_yticklabels([])\n", " ax.set_xticklabels([])\n", " ax.set_xlim(-1,1)\n", " ax.set_ylim(1,-1) #-1 is top, 1 is bottom\n", " for i, (x, y) in enumerate(zip(ancs[:, 1], ancs[:, 0])): ax.annotate(i, xy = (x,y))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "size = (4,4)\n", "show_anchors(create_grid(size), size)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In practice, we use different ratios and scales of that basic grid to build our anchors, because bounding boxes aren't always a perfect square inside a grid. " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def create_anchors(sizes, ratios, scales, flatten=True):\n", " \"Create anchor of `sizes`, `ratios` and `scales`.\"\n", " aspects = [[[s*math.sqrt(r), s*math.sqrt(1/r)] for s in scales] for r in ratios]\n", " aspects = torch.tensor(aspects).view(-1,2)\n", " anchors = []\n", " for h,w in sizes:\n", " #4 here to have the anchors overlap.\n", " sized_aspects = 4 * (aspects * torch.tensor([2/h,2/w])).unsqueeze(0)\n", " base_grid = create_grid((h,w)).unsqueeze(1)\n", " n,a = base_grid.size(0),aspects.size(0)\n", " ancs = torch.cat([base_grid.expand(n,a,2), sized_aspects.expand(n,a,2)], 2)\n", " anchors.append(ancs.view(h,w,a,4))\n", " return torch.cat([anc.view(-1,4) for anc in anchors],0) if flatten else anchors" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "ratios = [1/2,1,2]\n", "scales = [1,2**(-1/3), 2**(-2/3)] \n", "#Paper used [1,2**(1/3), 2**(2/3)] but a bigger size (600) too, so the largest feature map gave anchors that cover less of the image.\n", "sizes = [(2**i,2**i) for i in range(5)]\n", "sizes.reverse() #Predictions come in the order of the smallest feature map to the biggest\n", "anchors = create_anchors(sizes, ratios, scales)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "anchors.size()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "That's a bit less than in our computation earlier, but this is because it's for the case of (128,128) images (sizes go from (1,1) to (32,32) instead of (2,2) to (64,64))." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import matplotlib.cm as cmx\n", "import matplotlib.colors as mcolors\n", "from cycler import cycler\n", "\n", "def get_cmap(N):\n", "    color_norm = mcolors.Normalize(vmin=0, vmax=N-1)\n", "    return cmx.ScalarMappable(norm=color_norm, cmap='Set3').to_rgba\n", "\n", "num_color = 12\n", "cmap = get_cmap(num_color)\n", "color_list = [cmap(float(x)) for x in range(num_color)]\n", "\n", "def draw_outline(o, lw):\n", "    o.set_path_effects([patheffects.Stroke(\n", "        linewidth=lw, foreground='black'), patheffects.Normal()])\n", "\n", "def draw_rect(ax, b, color='white'):\n", "    patch = ax.add_patch(patches.Rectangle(b[:2], *b[-2:], fill=False, edgecolor=color, lw=2))\n", "    draw_outline(patch, 4)\n", "\n", "def draw_text(ax, xy, txt, sz=14, color='white'):\n", "    text = ax.text(*xy, txt,\n", "        verticalalignment='top', color=color, fontsize=sz, weight='bold')\n", "    draw_outline(text, 1)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def show_boxes(boxes):\n", "    \"Show the `boxes` (n by 4).\"\n", "    _, ax = plt.subplots(1,1, figsize=(5,5))\n", "    ax.set_xlim(-1,1)\n", "    ax.set_ylim(1,-1)\n", "    for i, bbox in enumerate(boxes):\n", "        bb = bbox.numpy()\n", "        rect = [bb[1]-bb[3]/2, bb[0]-bb[2]/2, bb[3], bb[2]]\n", "        draw_rect(ax, rect, color=color_list[i%num_color])\n", "        draw_text(ax, [bb[1]-bb[3]/2,bb[0]-bb[2]/2], str(i), color=color_list[i%num_color])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here is an example of the 9 anchor boxes with different scales/ratios on one region of the image. Now imagine we have this at every location of each of the feature maps." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "show_boxes(anchors[900:909])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For each anchor, we have one class predicted by the classifier and 4 floats `p_y,p_x,p_h,p_w` predicted by the regressor. If the corresponding anchor has its center at `anc_y`, `anc_x` with dimensions `anc_h`, `anc_w`, the predicted bounding box has these characteristics:\n", "```\n", "center = [p_y * anc_h + anc_y, p_x * anc_w + anc_x]\n", "height = anc_h * exp(p_h)\n", "width = anc_w * exp(p_w)\n", "```\n", "The idea is that a prediction of `(0,0,0,0)` corresponds to the anchor itself.\n", "\n", "The next function converts the activations of the model into bounding boxes." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def activ_to_bbox(acts, anchors, flatten=True):\n", "    \"Extrapolate bounding boxes on anchors from the model activations.\"\n", "    if flatten:\n", "        acts.mul_(acts.new_tensor([[0.1, 0.1, 0.2, 0.2]])) #Can't remember where those scales come from, but they help regularize\n", "        centers = anchors[...,2:] * acts[...,:2] + anchors[...,:2]\n", "        sizes = anchors[...,2:] * torch.exp(acts[...,2:])\n", "        return torch.cat([centers, sizes], -1)\n", "    else: return [activ_to_bbox(act,anc) for act,anc in zip(acts, anchors)]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here is an example with the 3 by 4 regular grid and random predictions."
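] }, { "cell_type": "markdown", "metadata": {}, "source": [ "But first, a quick check (a minimal sketch with one made-up anchor): an all-zero activation should map back to the anchor itself." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#With zero activations, the predicted box coincides with the anchor: center (0,0), height/width 0.5\n", "anc = tensor([[0., 0., 0.5, 0.5]])\n", "activ_to_bbox(torch.zeros(1,4), anc)"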
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "size=(3,4)\n", "anchors = create_grid(size)\n", "anchors = torch.cat([anchors, torch.tensor([2/size[0],2/size[1]]).expand_as(anchors)], 1)\n", "activations = torch.randn(size[0]*size[1], 4) * 0.1\n", "bboxes = activ_to_bbox(activations, anchors)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "show_boxes(bboxes)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This helper function changes boxes in the format center/height/width to top/left/bottom/right." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def cthw2tlbr(boxes):\n", " \"Convert center/size format `boxes` to top/left bottom/right corners.\"\n", " top_left = boxes[:,:2] - boxes[:,2:]/2\n", " bot_right = boxes[:,:2] + boxes[:,2:]/2\n", " return torch.cat([top_left, bot_right], 1)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now to decide which predicted bounding box will match a given ground truth object, we will compute the intersection over unions ratios between all the anchors and all the targets, then we will keep the ones that have an overlap greater than a given threshold (0.5)." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def intersection(anchors, targets):\n", " \"Compute the sizes of the intersections of `anchors` by `targets`.\"\n", " ancs, tgts = cthw2tlbr(anchors), cthw2tlbr(targets)\n", " a, t = ancs.size(0), tgts.size(0)\n", " ancs, tgts = ancs.unsqueeze(1).expand(a,t,4), tgts.unsqueeze(0).expand(a,t,4)\n", " top_left_i = torch.max(ancs[...,:2], tgts[...,:2])\n", " bot_right_i = torch.min(ancs[...,2:], tgts[...,2:])\n", " sizes = torch.clamp(bot_right_i - top_left_i, min=0) \n", " return sizes[...,0] * sizes[...,1]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's see some results, if we have our 12 anchors from before..." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "show_boxes(anchors)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "... and those targets (0. is the whole image)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "targets = torch.tensor([[0.,0.,2.,2.], [-0.5,-0.5,1.,1.], [1/3,0.5,0.5,0.5]])\n", "show_boxes(targets)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Then the intersections of each bboxes by each targets are:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "intersection(anchors, targets)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def IoU_values(anchors, targets):\n", " \"Compute the IoU values of `anchors` by `targets`.\"\n", " inter = intersection(anchors, targets)\n", " anc_sz, tgt_sz = anchors[:,2] * anchors[:,3], targets[:,2] * targets[:,3]\n", " union = anc_sz.unsqueeze(1) + tgt_sz.unsqueeze(0) - inter\n", " return inter/(union+1e-8)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "And then the IoU values are." 
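] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Before computing them on our anchors, here is a tiny hand-checked case (two made-up boxes in center/size format): two 1 by 1 boxes whose centers are offset by 0.5 along x intersect on an area of 0.5, so the IoU is 0.5 / (1 + 1 - 0.5) = 1/3." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#Two 1x1 boxes (center y, center x, height, width) offset by half a box along x\n", "box_a = tensor([[0., 0., 1., 1.]])\n", "box_b = tensor([[0., 0.5, 1., 1.]])\n", "IoU_values(box_a, box_b)"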
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "IoU_values(anchors, targets)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Then we match a anchor to targets with the following rules:\n", "- for each anchor we take the maximum overlap possible with any of the targets.\n", "- if that maximum overlap is less than 0.4, we match the anchor box to background, the classifier's target will be that class\n", "- if the maximum overlap is greater than 0.5, we match the anchor box to that ground truth object. The classifier's target will be the category of that target\n", "- if the maximum overlap is between 0.4 and 0.5, we ignore that anchor in our loss computation\n", "- optionally, we force-match for each ground truth object the anchor that has the maximum overlap with it (not sure it helps)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def match_anchors(anchors, targets, match_thr=0.5, bkg_thr=0.4):\n", " \"Match `anchors` to targets. -1 is match to background, -2 is ignore.\"\n", " matches = anchors.new(anchors.size(0)).zero_().long() - 2\n", " if targets.numel() == 0: return matches\n", " ious = IoU_values(anchors, targets)\n", " vals,idxs = torch.max(ious,1)\n", " matches[vals < bkg_thr] = -1\n", " matches[vals > match_thr] = idxs[vals > match_thr]\n", " #Overwrite matches with each target getting the anchor that has the max IoU.\n", " #vals,idxs = torch.max(ious,0)\n", " #If idxs contains repetition, this doesn't bug and only the last is considered.\n", " #matches[idxs] = targets.new_tensor(list(range(targets.size(0)))).long()\n", " return matches" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In our previous example, no one had an overlap > 0.5, so unless we use the special rule commented out, there are no matches." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "match_anchors(anchors, targets)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "With anchors very close to the targets." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "size=(3,4)\n", "anchors = create_grid(size)\n", "anchors = torch.cat([anchors, torch.tensor([2/size[0],2/size[1]]).expand_as(anchors)], 1)\n", "activations = 0.1 * torch.randn(size[0]*size[1], 4)\n", "bboxes = activ_to_bbox(activations, anchors)\n", "match_anchors(anchors,bboxes)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "With anchors in the grey area." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "anchors = create_grid((2,2))\n", "anchors = torch.cat([anchors, torch.tensor([1.,1.]).expand_as(anchors)], 1)\n", "targets = anchors.clone()\n", "anchors = torch.cat([anchors, torch.tensor([[-0.5,0.,1.,1.8]])], 0)\n", "match_anchors(anchors,targets)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Does the opposite of `cthw2tbr`." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def tlbr2cthw(boxes):\n", " \"Convert top/left bottom/right format `boxes` to center/size corners.\"\n", " center = (boxes[:,:2] + boxes[:,2:])/2\n", " sizes = boxes[:,2:] - boxes[:,:2]\n", " return torch.cat([center, sizes], 1)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Does the opposite of `activ_to_bbox`." 
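] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Concretely (just inverting the formulas we used for `activ_to_bbox`; the `tgt_*` names are only for illustration), a target box with center `tgt_y,tgt_x` and dimensions `tgt_h,tgt_w` matched to an anchor `anc_y,anc_x,anc_h,anc_w` gives the regression targets:\n", "```\n", "p_y = (tgt_y - anc_y) / anc_h\n", "p_x = (tgt_x - anc_x) / anc_w\n", "p_h = log(tgt_h / anc_h)\n", "p_w = log(tgt_w / anc_w)\n", "```\n", "which are then divided by the same `[0.1, 0.1, 0.2, 0.2]` scaling factors used in `activ_to_bbox`."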
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def bbox_to_activ(bboxes, anchors, flatten=True):\n", "    \"Return the target of the model on `anchors` for the `bboxes`.\"\n", "    if flatten:\n", "        t_centers = (bboxes[...,:2] - anchors[...,:2]) / anchors[...,2:] \n", "        t_sizes = torch.log(bboxes[...,2:] / anchors[...,2:] + 1e-8) \n", "        return torch.cat([t_centers, t_sizes], -1).div_(bboxes.new_tensor([[0.1, 0.1, 0.2, 0.2]]))\n", "    else: return [bbox_to_activ(bb,anc) for bb,anc in zip(bboxes, anchors)]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We will one-hot encode our targets with the convention that the class of index 0 is the background, which is the absence of any other class. That is encoded by a row of zeros." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def encode_class(idxs, n_classes):\n", "    target = idxs.new_zeros(len(idxs), n_classes).float()\n", "    mask = idxs != 0\n", "    i1s = LongTensor(list(range(len(idxs))))\n", "    target[i1s[mask],idxs[mask]-1] = 1\n", "    return target" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "encode_class(LongTensor([1,2,0,1,3]),3)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "And now we are ready to build the loss function. It has two parts, one for the classifier and one for the regressor. For the regression, we will use the L1 (potentially smoothed) loss between the predicted activations for an anchor that matches a given object (we ignore the anchors that have no match or are matched to the background) and the corresponding bounding box (after going through `bbox_to_activ`).\n", "\n", "For the classification, we use the focal loss, which is a variant of the binary cross entropy used when we have a lot of imbalance between the classes to predict (here we will very often have to predict 'background')."
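] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As a reminder, if `p_t` is the probability the model gives to the true answer for a given anchor and class (object present or not), the focal loss of the paper is:\n", "```\n", "FL(p_t) = -alpha_t * (1 - p_t)**gamma * log(p_t)\n", "```\n", "with `gamma=2` and a balancing factor `alpha_t` (in the implementation below, `alpha=0.25` for background and `1-alpha=0.75` for matched objects). The `_focal_loss` method computes the `alpha_t * (1 - p_t)**gamma` factor from the one-hot encoded targets and the sigmoid of the predictions, then passes it as the weight of `F.binary_cross_entropy_with_logits`, which supplies the `-log(p_t)` part."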
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "class RetinaNetFocalLoss(nn.Module):\n", " \n", " def __init__(self, gamma:float=2., alpha:float=0.25, pad_idx:int=0, scales:Collection[float]=None, \n", " ratios:Collection[float]=None, reg_loss:LossFunction=F.smooth_l1_loss):\n", " super().__init__()\n", " self.gamma,self.alpha,self.pad_idx,self.reg_loss = gamma,alpha,pad_idx,reg_loss\n", " self.scales = ifnone(scales, [1,2**(-1/3), 2**(-2/3)])\n", " self.ratios = ifnone(ratios, [1/2,1,2])\n", " \n", " def _change_anchors(self, sizes:Sizes) -> bool:\n", " if not hasattr(self, 'sizes'): return True\n", " for sz1, sz2 in zip(self.sizes, sizes):\n", " if sz1[0] != sz2[0] or sz1[1] != sz2[1]: return True\n", " return False\n", " \n", " def _create_anchors(self, sizes:Sizes, device:torch.device):\n", " self.sizes = sizes\n", " self.anchors = create_anchors(sizes, self.ratios, self.scales).to(device)\n", " \n", " def _unpad(self, bbox_tgt, clas_tgt):\n", " i = torch.min(torch.nonzero(clas_tgt-self.pad_idx))\n", " return tlbr2cthw(bbox_tgt[i:]), clas_tgt[i:]-1+self.pad_idx\n", " \n", " def _focal_loss(self, clas_pred, clas_tgt):\n", " encoded_tgt = encode_class(clas_tgt, clas_pred.size(1))\n", " ps = torch.sigmoid(clas_pred.detach())\n", " weights = encoded_tgt * (1-ps) + (1-encoded_tgt) * ps\n", " alphas = (1-encoded_tgt) * self.alpha + encoded_tgt * (1-self.alpha)\n", " weights.pow_(self.gamma).mul_(alphas)\n", " clas_loss = F.binary_cross_entropy_with_logits(clas_pred, encoded_tgt, weights, reduction='sum')\n", " return clas_loss\n", " \n", " def _one_loss(self, clas_pred, bbox_pred, clas_tgt, bbox_tgt):\n", " bbox_tgt, clas_tgt = self._unpad(bbox_tgt, clas_tgt)\n", " matches = match_anchors(self.anchors, bbox_tgt)\n", " bbox_mask = matches>=0\n", " if bbox_mask.sum() != 0:\n", " bbox_pred = bbox_pred[bbox_mask]\n", " bbox_tgt = bbox_tgt[matches[bbox_mask]]\n", " bb_loss = self.reg_loss(bbox_pred, bbox_to_activ(bbox_tgt, self.anchors[bbox_mask]))\n", " else: bb_loss = 0.\n", " matches.add_(1)\n", " clas_tgt = clas_tgt + 1\n", " clas_mask = matches>=0\n", " clas_pred = clas_pred[clas_mask]\n", " clas_tgt = torch.cat([clas_tgt.new_zeros(1).long(), clas_tgt])\n", " clas_tgt = clas_tgt[matches[clas_mask]]\n", " return bb_loss + self._focal_loss(clas_pred, clas_tgt)/torch.clamp(bbox_mask.sum(), min=1.)\n", " \n", " def forward(self, output, bbox_tgts, clas_tgts):\n", " clas_preds, bbox_preds, sizes = output\n", " if self._change_anchors(sizes): self._create_anchors(sizes, clas_preds.device)\n", " n_classes = clas_preds.size(2)\n", " return sum([self._one_loss(cp, bp, ct, bt)\n", " for (cp, bp, ct, bt) in zip(clas_preds, bbox_preds, clas_tgts, bbox_tgts)])/clas_tgts.size(0)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This is a variant of the L1 loss used in several implementations:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "class SigmaL1SmoothLoss(nn.Module):\n", "\n", " def forward(self, output, target):\n", " reg_diff = torch.abs(target - output)\n", " reg_loss = torch.where(torch.le(reg_diff, 1/9), 4.5 * torch.pow(reg_diff, 2), reg_diff - 1/18)\n", " return reg_loss.mean()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Defining the Learner" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "ratios = [1/2,1,2]\n", "scales = [1,2**(-1/3), 2**(-2/3)]\n", "#scales = [1,2**(1/3), 2**(2/3)] for bigger size" ] }, { 
"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "encoder = create_body(models.resnet50, cut=-2)\n", "model = RetinaNet(encoder, data.c, final_bias=-4)\n", "crit = RetinaNetFocalLoss(scales=scales, ratios=ratios)\n", "learn = Learner(data, model, loss_func=crit)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Why `final_bias=-4`? That's because we want the network to predict background easily at the beginning (since it's the most common class). At first the final convolution of the classifier is initialized with weights=0 and that bias, so it will return -4 for everyone. If go though a sigmoid " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "torch.sigmoid(tensor([-4.]))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We see it'll give a corresponding probability of 0.02 roughly. \n", "\n", "Then, for transfer learning/discriminative LRs, we need to define how to split between body and custom head." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def retina_net_split(model):\n", " groups = [list(model.encoder.children())[:6], list(model.encoder.children())[6:]]\n", " return groups + [list(model.children())[1:]]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "learn = learn.split(retina_net_split)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "And now we can train as usual!" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "learn.freeze()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "learn.lr_find()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "learn.recorder.plot(skip_end=5)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "learn.fit_one_cycle(5, 1e-4)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "learn.save('stage1-128')" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "learn.unfreeze()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "learn.fit_one_cycle(10, slice(1e-6, 5e-5))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "learn.save('stage2-128')" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "learn.data = get_data(32,192)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "learn.freeze()\n", "learn.lr_find()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "learn.recorder.plot()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "learn.fit_one_cycle(5, 1e-4)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "learn.save('stage1-192')" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "learn.unfreeze()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "learn.fit_one_cycle(10, slice(1e-6, 5e-5))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "learn.save('stage2-192')" ] }, { "cell_type": "code", 
"execution_count": null, "metadata": {}, "outputs": [], "source": [ "learn.data = get_data(24,256)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "learn.freeze()\n", "learn.fit_one_cycle(5, 1e-4)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "learn.save('stage1-256')" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "learn.unfreeze()\n", "learn.fit_one_cycle(10, slice(1e-6, 5e-5))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "learn.save('stage2-256')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Results" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "learn = learn.load('stage2-256')" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "img,target = next(iter(data.valid_dl))\n", "with torch.no_grad():\n", " output = learn.model(img)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "First we need to remove the padding that was added to collate our targets together." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def unpad(tgt_bbox, tgt_clas, pad_idx=0):\n", " i = torch.min(torch.nonzero(tgt_clas-pad_idx))\n", " return tlbr2cthw(tgt_bbox[i:]), tgt_clas[i:]-1+pad_idx" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Then we process the outputs of the model: we convert the activations of the regressor to bounding boxes and the predictions to probabilities, only keeping those above a given threshold." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def process_output(output, i, detect_thresh=0.25):\n", " \"Process `output[i]` and return the predicted bboxes above `detect_thresh`.\"\n", " clas_pred,bbox_pred,sizes = output[0][i], output[1][i], output[2]\n", " anchors = create_anchors(sizes, ratios, scales).to(clas_pred.device)\n", " bbox_pred = activ_to_bbox(bbox_pred, anchors)\n", " clas_pred = torch.sigmoid(clas_pred)\n", " detect_mask = clas_pred.max(1)[0] > detect_thresh\n", " bbox_pred, clas_pred = bbox_pred[detect_mask], clas_pred[detect_mask]\n", " bbox_pred = tlbr2cthw(torch.clamp(cthw2tlbr(bbox_pred), min=-1, max=1)) \n", " scores, preds = clas_pred.max(1)\n", " return bbox_pred, scores, preds" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Helper functions to plot the results" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def _draw_outline(o:Patch, lw:int):\n", " \"Outline bounding box onto image `Patch`.\"\n", " o.set_path_effects([patheffects.Stroke(\n", " linewidth=lw, foreground='black'), patheffects.Normal()])\n", "\n", "def draw_rect(ax:plt.Axes, b:Collection[int], color:str='white', text=None, text_size=14):\n", " \"Draw bounding box on `ax`.\"\n", " patch = ax.add_patch(patches.Rectangle(b[:2], *b[-2:], fill=False, edgecolor=color, lw=2))\n", " _draw_outline(patch, 4)\n", " if text is not None:\n", " patch = ax.text(*b[:2], text, verticalalignment='top', color=color, fontsize=text_size, weight='bold')\n", " _draw_outline(patch,1)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def show_preds(img, output, idx, detect_thresh=0.25, classes=None):\n", " bbox_pred, scores, preds = process_output(output, idx, detect_thresh)\n", " bbox_pred, 
preds, scores = bbox_pred.cpu(), preds.cpu(), scores.cpu()\n", " t_sz = torch.Tensor([*img.size])[None].float()\n", " bbox_pred[:,:2] = bbox_pred[:,:2] - bbox_pred[:,2:]/2\n", " bbox_pred[:,:2] = (bbox_pred[:,:2] + 1) * t_sz/2\n", " bbox_pred[:,2:] = bbox_pred[:,2:] * t_sz\n", " bbox_pred = bbox_pred.long()\n", " _, ax = plt.subplots(1,1)\n", " for bbox, c, scr in zip(bbox_pred, preds, scores):\n", " img.show(ax=ax)\n", " txt = str(c.item()) if classes is None else classes[c.item()+1]\n", " draw_rect(ax, [bbox[1],bbox[0],bbox[3],bbox[2]], text=f'{txt} {scr:.2f}')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "And let's have a look at one picture." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "idx = 0\n", "img = data.valid_ds[idx][0]\n", "show_preds(img, output, idx, detect_thresh=0.3, classes=data.classes)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "It looks like a lot of our anchors are detecting kind of the same object. We use an algorithm called Non-Maximum Suppression to remove near-duplicates: going from the biggest score predicted to the lowest, we take the corresponding bounding boxes and remove all the bounding boxes down the list that have an IoU > 0.5 with this one. We continue the process until we have reached the end of the list." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def nms(boxes, scores, thresh=0.3):\n", " idx_sort = scores.argsort(descending=True)\n", " boxes, scores = boxes[idx_sort], scores[idx_sort]\n", " to_keep, indexes = [], torch.LongTensor(range_of(scores))\n", " while len(scores) > 0:\n", " to_keep.append(idx_sort[indexes[0]])\n", " iou_vals = IoU_values(boxes, boxes[:1]).squeeze()\n", " mask_keep = iou_vals < thresh\n", " if len(mask_keep.nonzero()) == 0: break\n", " boxes, scores, indexes = boxes[mask_keep], scores[mask_keep], indexes[mask_keep]\n", " return LongTensor(to_keep)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def process_output(output, i, detect_thresh=0.25):\n", " clas_pred,bbox_pred,sizes = output[0][i], output[1][i], output[2]\n", " anchors = create_anchors(sizes, ratios, scales).to(clas_pred.device)\n", " bbox_pred = activ_to_bbox(bbox_pred, anchors)\n", " clas_pred = torch.sigmoid(clas_pred)\n", " detect_mask = clas_pred.max(1)[0] > detect_thresh\n", " bbox_pred, clas_pred = bbox_pred[detect_mask], clas_pred[detect_mask]\n", " bbox_pred = tlbr2cthw(torch.clamp(cthw2tlbr(bbox_pred), min=-1, max=1)) \n", " if clas_pred.numel() == 0: return [],[],[]\n", " scores, preds = clas_pred.max(1)\n", " return bbox_pred, scores, preds" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def show_preds(img, output, idx, detect_thresh=0.25, classes=None, ax=None):\n", " bbox_pred, scores, preds = process_output(output, idx, detect_thresh)\n", " if len(scores) != 0:\n", " to_keep = nms(bbox_pred, scores)\n", " bbox_pred, preds, scores = bbox_pred[to_keep].cpu(), preds[to_keep].cpu(), scores[to_keep].cpu()\n", " t_sz = torch.Tensor([*img.size])[None].float()\n", " bbox_pred[:,:2] = bbox_pred[:,:2] - bbox_pred[:,2:]/2\n", " bbox_pred[:,:2] = (bbox_pred[:,:2] + 1) * t_sz/2\n", " bbox_pred[:,2:] = bbox_pred[:,2:] * t_sz\n", " bbox_pred = bbox_pred.long()\n", " if ax is None: _, ax = plt.subplots(1,1)\n", " img.show(ax=ax)\n", " for bbox, c, scr in zip(bbox_pred, preds, scores):\n", " txt = str(c.item()) if classes is None else 
classes[c.item()+1]\n", " draw_rect(ax, [bbox[1],bbox[0],bbox[3],bbox[2]], text=f'{txt} {scr:.2f}')" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def show_results(learn, start=0, n=5, detect_thresh=0.35, figsize=(10,25)):\n", " x,y = learn.data.one_batch(DatasetType.Valid, cpu=False)\n", " with torch.no_grad():\n", " z = learn.model.eval()(x)\n", " _,axs = plt.subplots(n, 2, figsize=figsize)\n", " for i in range(n):\n", " img,bbox = learn.data.valid_ds[start+i]\n", " img.show(ax=axs[i,0], y=bbox)\n", " show_preds(img, z, start+i, detect_thresh=detect_thresh, classes=learn.data.classes, ax=axs[i,1])" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "learn = learn.load('stage2-256')" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "show_results(learn, start=10)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## mAP" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "A metric often used for this kind of task is the mean Average Precision (our mAP). It relies on computing the cumulated precision and recall for each class, then tries to compute the area under the precision/recall curve we can draw." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def get_predictions(output, idx, detect_thresh=0.05):\n", " bbox_pred, scores, preds = process_output(output, idx, detect_thresh)\n", " if len(scores) == 0: return [],[],[]\n", " to_keep = nms(bbox_pred, scores)\n", " return bbox_pred[to_keep], preds[to_keep], scores[to_keep]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def compute_ap(precision, recall):\n", " \"Compute the average precision for `precision` and `recall` curve.\"\n", " recall = np.concatenate(([0.], list(recall), [1.]))\n", " precision = np.concatenate(([0.], list(precision), [0.]))\n", " for i in range(len(precision) - 1, 0, -1):\n", " precision[i - 1] = np.maximum(precision[i - 1], precision[i])\n", " idx = np.where(recall[1:] != recall[:-1])[0]\n", " ap = np.sum((recall[idx + 1] - recall[idx]) * precision[idx + 1])\n", " return ap" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def compute_class_AP(model, dl, n_classes, iou_thresh=0.5, detect_thresh=0.35, num_keep=100):\n", " tps, clas, p_scores = [], [], []\n", " classes, n_gts = LongTensor(range(n_classes)),torch.zeros(n_classes).long()\n", " with torch.no_grad():\n", " for input,target in progress_bar(dl):\n", " output = model(input)\n", " for i in range(target[0].size(0)):\n", " bbox_pred, preds, scores = get_predictions(output, i, detect_thresh)\n", " tgt_bbox, tgt_clas = unpad(target[0][i], target[1][i])\n", " if len(bbox_pred) != 0 and len(tgt_bbox) != 0:\n", " ious = IoU_values(bbox_pred, tgt_bbox)\n", " max_iou, matches = ious.max(1)\n", " detected = []\n", " for i in range_of(preds):\n", " if max_iou[i] >= iou_thresh and matches[i] not in detected and tgt_clas[matches[i]] == preds[i]:\n", " detected.append(matches[i])\n", " tps.append(1)\n", " else: tps.append(0)\n", " clas.append(preds.cpu())\n", " p_scores.append(scores.cpu())\n", " n_gts += (tgt_clas.cpu()[:,None] == classes[None,:]).sum(0)\n", " tps, p_scores, clas = torch.tensor(tps), torch.cat(p_scores,0), torch.cat(clas,0)\n", " fps = 1-tps\n", " idx = p_scores.argsort(descending=True)\n", " tps, fps, clas = tps[idx], fps[idx], clas[idx]\n", " aps = 
[]\n", " #return tps, clas\n", " for cls in range(n_classes):\n", " tps_cls, fps_cls = tps[clas==cls].float().cumsum(0), fps[clas==cls].float().cumsum(0)\n", " if tps_cls.numel() != 0 and tps_cls[-1] != 0:\n", " precision = tps_cls / (tps_cls + fps_cls + 1e-8)\n", " recall = tps_cls / (n_gts[cls] + 1e-8)\n", " aps.append(compute_ap(precision, recall))\n", " else: aps.append(0.)\n", " return aps" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "L = compute_class_AP(learn.model, data.valid_dl, data.c-1)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "for ap,cl in zip(L, data.classes[1:]): print(f'{cl}: {ap:.6f}')" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" } }, "nbformat": 4, "nbformat_minor": 2 }