IOG serverless function + some fixes #2578

Merged: 33 commits, Feb 10, 2021
Commits (33). Changes shown in the diff below are from 1 commit.
- 40275cf Initial version of Inside Outside Guidance serverless function (Oct 12, 2020)
- f8bfb5f Fix function.yaml for IOG (Oct 12, 2020)
- 5e8ccd9 Add "processorMountMode: volume" to restart containers after reboot (… (Oct 13, 2020)
- 5e4c8e0 Fix crash in IOG serverless function (it doesn't work right now as well) (Oct 13, 2020)
- 3cbd3c0 Fix for points translation from crop to image. (Oct 13, 2020)
- afc7eef Add crop_box parameter to IOG function (Oct 22, 2020)
- f538f30 Merge remote-tracking branch 'origin/develop' into nm/serverlss_tutorial (Oct 22, 2020)
- d23e76b Update nuclio dashboard till 1.5.1 version (Oct 22, 2020)
- 119d30f Support neg_points in interactors (Oct 23, 2020)
- 208528d Add dummy serverless function for optimization (Oct 23, 2020)
- 3f647d0 Remove dummy serverless function and update the list of available ser… (Oct 23, 2020)
- 8f7a055 Improved deployment process of serverless functions (Oct 24, 2020)
- 1b14f38 Improve installation.md for serverless functions. (Oct 24, 2020)
- ec08d4a Minor changes in doc for serverless (Oct 24, 2020)
- bbd5dbc Merge remote-tracking branch 'origin/develop' into nm/serverlss_tutorial (Oct 24, 2020)
- b8951c6 Merge remote-tracking branch 'origin/develop' into nm/serverlss_tutorial (Nov 10, 2020)
- d3b1df3 Merge remote-tracking branch 'origin/develop' into nm/serverlss_tutorial (Dec 15, 2020)
- b7b4d79 Revert the tutorial. (Dec 15, 2020)
- 303eea5 Merge remote-tracking branch 'origin/develop' into nm/serverless_iog (Feb 3, 2021)
- 5b33aca Merged develop (Feb 5, 2021)
- 9f25e8b Merge branch 'nm/serverless_iog' of github.com:openvinotoolkit/cvat i… (Feb 9, 2021)
- 63eb39d Fix codacy issues (Feb 9, 2021)
- 2defc22 Update CHANGELOG, use NUCLIO_DASHBOARD_DEFAULT_FUNCTION_MOUNT_MODE as (Feb 9, 2021)
- d64c0ef Removed volume mode from functions (it is handled by (Feb 9, 2021)
- 16fefe7 Disable warning from markdown linter about max line length for a table. (Feb 9, 2021)
- 97e2984 Revert wrong changes (Feb 9, 2021)
- cd58eb2 Merge remote-tracking branch 'origin/develop' into nm/serverless_iog (Feb 9, 2021)
- 7f7d6f9 Reverted changes in requirements for cvat (numpy). (Feb 9, 2021)
- 049c643 Dashboard env variable doesn't work by a reason. Added back mountMode (Feb 10, 2021)
- 34177f3 Fix IOG function with conda environment (Feb 10, 2021)
- ece913b Fix tensorflow matterport/mask_rcnn (Feb 10, 2021)
- b65c03d Merge remote-tracking branch 'origin/develop' into nm/serverless_iog (Feb 10, 2021)
- e76be34 Bump version of cvat-ui. (Feb 10, 2021)
Initial version of Inside Outside Guidance serverless function
Nikita Manovich committed Oct 12, 2020
commit 40275cf4b6acad06bc6618c25a4ad64fbceb0e4b
83 changes: 83 additions & 0 deletions components/serverless/README.md
@@ -5,3 +5,86 @@
# From project root directory
docker-compose -f docker-compose.yml -f components/serverless/docker-compose.serverless.yml up -d
```

### Tutorial: how to add your own DL model for automatic annotation

Let's try to integrate the [IOG algorithm for interactive segmentation](https://github.com/shiyinzhang/Inside-Outside-Guidance).

First of all, let's run the model on a local machine. The repo doesn't have detailed instructions and
seems to use outdated versions of packages, so the build process involves some trial and error. For old
versions of PyTorch packages it is better to use conda. Below are possible instructions for running the
model on a local machine.

```bash
git clone https://github.com/shiyinzhang/Inside-Outside-Guidance
cd Inside-Outside-Guidance/
conda create --name iog python=3.6
conda activate iog
conda install pytorch=0.4 torchvision=0.2 -c pytorch
conda install -c conda-forge pycocotools
conda install -c conda-forge opencv
conda install -c conda-forge scipy
```
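
After the environment is created, a quick sanity check helps before going further. A minimal script (the file name `check_env.py` is just an assumption) that verifies all required packages import and prints their versions:

```python
# check_env.py: run inside the activated "iog" conda environment.
# All imports must succeed before continuing with the tutorial.
import cv2
import pycocotools  # noqa: F401 (import check only)
import scipy
import torch
import torchvision

print("torch:", torch.__version__)              # expected: 0.4.x
print("torchvision:", torchvision.__version__)  # expected: 0.2.x
print("opencv:", cv2.__version__)
print("scipy:", scipy.__version__)
```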

Download the pretrained weights from Google Drive: https://github.com/shiyinzhang/Inside-Outside-Guidance#pretrained-models
We will also need the VOCtrainval_11-May-2012.tar dataset for evaluation: http://host.robots.ox.ac.uk/pascal/VOC/voc2012/VOCtrainval_11-May-2012.tar
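
If you prefer to script the dataset download, a small helper using only the standard library could look like the sketch below (the extraction directory is an assumption; it must match what `db_root_dir()` returns in `mypath.py`):

```python
# Download and unpack VOC2012 trainval (URL from above); roughly a 2 GB download.
import tarfile
import urllib.request

url = "http://host.robots.ox.ac.uk/pascal/VOC/voc2012/VOCtrainval_11-May-2012.tar"
archive = "VOCtrainval_11-May-2012.tar"
urllib.request.urlretrieve(url, archive)

# The archive contains VOCdevkit/; extract into the folder referenced by mypath.py.
with tarfile.open(archive) as tar:
    tar.extractall("VOCtrainval_11-May-2012")
```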

Modify `mypath.py` in accordance with the instructions inside the repo. In my case the `git diff` is below:

```python
diff --git a/mypath.py b/mypath.py
index 0df1565..cd0fa3f 100644
--- a/mypath.py
+++ b/mypath.py
@@ -3,15 +3,15 @@ class Path(object):
@staticmethod
def db_root_dir(database):
if database == 'pascal':
- return '/path/to/PASCAL/VOC2012' # folder that contains VOCdevkit/.
+ return '/Users/nmanovic/Workspace/datasets/VOCtrainval_11-May-2012/' # folder that contains VOCdevkit/.

elif database == 'sbd':
- return '/path/to/SBD/' # folder with img/, inst/, cls/, etc.
+ return '/Users/nmanovic/Workspace/datasets/SBD/dataset/' # folder with img/, inst/, cls/, etc.
else:
print('Database {} not available.'.format(database))
raise NotImplementedError

@staticmethod
def models_dir():
- return '/path/to/models/resnet101-5d3b4d8f.pth'
+ return '/Users/nmanovic/Workspace/Inside-Outside-Guidance/IOG_PASCAL_SBD.pth'
#'resnet101-5d3b4d8f.pth' #resnet50-19c8e357.pth'
```

It looks like we need to update `test.py` to be able to run it without the `train.py` script.

```python
diff --git a/test.py b/test.py
index f85969a..8e481d0 100644
--- a/test.py
+++ b/test.py
@@ -51,9 +51,10 @@ net = Network(nInputChannels=nInputChannels,num_classes=1,
freeze_bn=False)

# load pretrain_dict
-pretrain_dict = torch.load(os.path.join(save_dir, 'models', modelName + '_epoch-' + str(resume_epoch - 1) + '.pth'))
-print("Initializing weights from: {}".format(
- os.path.join(save_dir, 'models', modelName + '_epoch-' + str(resume_epoch - 1) + '.pth')))
+#pretrain_dict = torch.load(os.path.join(save_dir, 'models', modelName + '_epoch-' + str(resume_epoch - 1) + '.pth'))
+#print("Initializing weights from: {}".format(
+# os.path.join(save_dir, 'models', modelName + '_epoch-' + str(resume_epoch - 1) + '.pth')))
+pretrain_dict = torch.load('/Users/nmanovic/Workspace/Inside-Outside-Guidance/IOG_PASCAL_SBD.pth')
net.load_state_dict(pretrain_dict)
net.to(device)
```

Now it is possible to run `test.py`, and it will generate results inside the `./run_0/Results` directory.
That is already great progress: we can run the pretrained model and get results. The next step is to
implement a simple script which accepts an image with a bounding box and generates a mask for the
object. Let's do that, starting from a copy of `test.py` (a sketch of the target interface follows the command below).

```bash
cp test.py model_handler.py
```
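
Before reading the full implementation (see `model_handler.py` in the diff below), it helps to fix the interface we are converging on: a class that loads the network once and turns an image plus inside/outside points into a polygon. A sketch of that contract:

```python
# Target interface for model_handler.py; the complete implementation is in
# the diff below. The serverless wrapper needs only these two entry points.
class ModelHandler:
    def __init__(self):
        # Load IOG_PASCAL_SBD.pth once and keep the network in memory.
        ...

    def handle(self, image, pos_points, neg_points, threshold):
        # image: PIL.Image; pos_points: inside clicks; neg_points: outside
        # points (here: the bounding box corners); threshold: mask cutoff.
        # Returns a list of [x, y] polygon points in full-image coordinates.
        ...
```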

65 changes: 65 additions & 0 deletions serverless/pytorch/shiyinzhang/iog/nuclio/function.yaml
@@ -0,0 +1,65 @@
metadata:
  name: pth.shiyinzhang.iog
  namespace: cvat
  annotations:
    name: IOG
    type: interactor
    spec:
    framework: pytorch
    min_pos_points: 1

spec:
  description: Interactive Object Segmentation with Inside-Outside Guidance
  runtime: "python:3.6"
  handler: main:handler
  eventTimeout: 30s
  env:
    - name: PYTHONPATH
      value: /opt/nuclio/iog

  build:
    image: cvat/pth.shiyinzhang.iog
    baseImage: continuumio/miniconda3

    directives:
      preCopy:
        - kind: WORKDIR
          value: /opt/nuclio
        - kind: RUN
          value: git clone https://github.com/shiyinzhang/Inside-Outside-Guidance.git iog
        - kind: WORKDIR
          value: /opt/nuclio/iog
        - kind: ENV
          value: fileid=1Lm1hhMhhjjnNwO4Pf7SC6tXLayH2iH0l
        - kind: ENV
          value: filename=IOG_PASCAL_SBD.pth
        - kind: RUN
          value: curl -c ./cookie -s -L "https://drive.google.com/uc?export=download&id=${fileid}"
        - kind: RUN
          value: curl -Lb ./cookie "https://drive.google.com/uc?export=download&confirm=`awk '/download/ {print $NF}' ./cookie`&id=${fileid}" -o ${filename}
        - kind: WORKDIR
          value: /opt/nuclio
        - kind: RUN
          value: conda create -y -n iog python=3.6
        - kind: SHELL
          value: '["conda", "run", "-n", "iog", "/bin/bash", "-c"]'
        - kind: RUN
          value: conda install pytorch=0.4 torchvision=0.2 -c pytorch
        - kind: RUN
          value: conda install -c conda-forge pycocotools opencv scipy
        - kind: ENTRYPOINT
          value: '["conda", "run", "-n", "iog"]'

  triggers:
    myHttpTrigger:
      maxWorkers: 2
      kind: "http"
      workerAvailabilityTimeoutMilliseconds: 10000
      attributes:
        maxRequestBodySize: 33554432 # 32MB

  platform:
    attributes:
      restartPolicy:
        name: always
        maximumRetryCount: 3
33 changes: 33 additions & 0 deletions serverless/pytorch/shiyinzhang/iog/nuclio/main.py
@@ -0,0 +1,33 @@
# Copyright (C) 2020 Intel Corporation
#
# SPDX-License-Identifier: MIT

import json
import base64
from PIL import Image
import io
from model_handler import ModelHandler

def init_context(context):
    context.logger.info("Init context... 0%")

    model = ModelHandler()
    setattr(context.user_data, 'model', model)

    context.logger.info("Init context...100%")

def handler(context, event):
    context.logger.info("call handler")
    data = event.body
    pos_points = data["points"][:1]
    neg_points = data["points"][1:]
    threshold = data.get("threshold", 0.9)
    buf = io.BytesIO(base64.b64decode(data["image"].encode('utf-8')))
    image = Image.open(buf)

    polygon = context.user_data.model.handle(image, pos_points,
                                             neg_points, threshold)
    return context.Response(body=json.dumps(polygon),
                            headers={},
                            content_type='application/json',
                            status_code=200)
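
For reference, a hypothetical test client for the deployed function might look like the snippet below; the URL and port are assumptions (nuclio assigns the HTTP port at deploy time), and `image.jpg` is a placeholder:

```python
# Hypothetical client for the IOG function; endpoint and image are assumptions.
import base64
import json
import urllib.request

with open("image.jpg", "rb") as f:
    payload = {
        "image": base64.b64encode(f.read()).decode("utf-8"),
        # The first point is the inside click; the rest are outside points.
        "points": [[300, 200], [100, 50], [500, 400]],
        "threshold": 0.9,
    }

request = urllib.request.Request(
    "http://localhost:8080",  # assumed local nuclio port
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(request) as response:
    polygon = json.loads(response.read())
print(polygon)  # list of [x, y] points
```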
117 changes: 117 additions & 0 deletions serverless/pytorch/shiyinzhang/iog/nuclio/model_handler.py
@@ -0,0 +1,117 @@
# Copyright (C) 2020 Intel Corporation
#
# SPDX-License-Identifier: MIT

import numpy as np
import os
import cv2
import torch
from torchvision import transforms
from dataloaders import custom_transforms as tr
from networks.mainnetwork import Network
from PIL import Image
from dataloaders import helpers

def convert_mask_to_polygon(mask):
    mask = np.array(mask, dtype=np.uint8)
    cv2.normalize(mask, mask, 0, 255, cv2.NORM_MINMAX)
    contours = None
    if int(cv2.__version__.split('.')[0]) > 3:
        contours = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_TC89_KCOS)[0]
    else:
        contours = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_TC89_KCOS)[1]

    contours = max(contours, key=lambda arr: arr.size)
    if contours.shape.count(1):
        contours = np.squeeze(contours)
    if contours.size < 3 * 2:
        raise Exception('Less than three points have been detected. Cannot build a polygon.')

    polygon = []
    for point in contours:
        polygon.append([int(point[0]), int(point[1])])

    return polygon

class ModelHandler:
    def __init__(self):
        base_dir = os.environ.get("MODEL_PATH", "/opt/nuclio/iog")
        model_path = os.path.join(base_dir, "IOG_PASCAL_SBD.pth")
        self.device = torch.device("cpu")

        # Number of input channels (RGB + heatmap of IOG points)
        self.net = Network(nInputChannels=5, num_classes=1, backbone='resnet101',
                           output_stride=16, sync_bn=None, freeze_bn=False)

        pretrain_dict = torch.load(model_path)
        self.net.load_state_dict(pretrain_dict)
        self.net.to(self.device)
        self.net.eval()

    def handle(self, image, pos_points, neg_points, threshold):
        with torch.no_grad():
            # bounding box of the outside (negative) points: x1, y1, x2, y2
            x, y, w, h = cv2.boundingRect(np.array(neg_points, dtype=np.int32))
            input_bbox = [x, y, x + w - 1, y + h - 1]
            # extract a crop from the image
            crop_padding = 30
            crop_bbox = [
                max(input_bbox[0] - crop_padding, 0),
                max(input_bbox[1] - crop_padding, 0),
                min(input_bbox[2] + crop_padding, image.width - 1),
                min(input_bbox[3] + crop_padding, image.height - 1)
            ]
            crop_shape = (
                int(crop_bbox[2] - crop_bbox[0] + 1), # width
                int(crop_bbox[3] - crop_bbox[1] + 1), # height
            )

            # try to use crop_from_bbox(img, bbox, zero_pad) here
            input_crop = np.array(image.crop(crop_bbox)).astype(np.float32)

            # resize the crop
            input_crop = cv2.resize(input_crop, (512, 512), interpolation=cv2.INTER_NEAREST)
            crop_scale = (512 / crop_shape[0], 512 / crop_shape[1])

            def translate_points(points):
                points = [
                    ((p[0] - crop_bbox[0]) * crop_scale[0], # x
                     (p[1] - crop_bbox[1]) * crop_scale[1]) # y
                    for p in points]

                return points

            pos_points = translate_points(pos_points)
            neg_points = translate_points(neg_points)

            # FIXME: need to construct correct gt (pos_points can be more than 1)
            iog_image = helpers.make_gt(input_crop, pos_points + neg_points)

            # Convert iog_image to an image (0-255 values)
            iog_image = 255. * (iog_image - iog_image.min()) / (iog_image.max() - iog_image.min() + 1e-10)

            # Concatenate input crop and IOG image
            input_blob = np.concatenate((input_crop, iog_image), axis=2)

            # numpy image: H x W x C
            # torch image: C X H X W
            input_blob = input_blob.transpose((2, 0, 1))
            # batch size is 1
            input_blob = np.array([input_blob])
            input_tensor = torch.from_numpy(input_blob)

            input_tensor = input_tensor.to(self.device)
            output_mask = self.net.forward(input_tensor)[4]
            output_mask = output_mask.to(self.device)
            pred = np.transpose(output_mask.data.numpy()[0, :, :, :], (1, 2, 0))
            pred = pred > threshold
            pred = np.squeeze(pred)

            # Convert the mask to a polygon and translate its points from the
            # resized crop back to full-image coordinates
            polygon = convert_mask_to_polygon(pred)
            polygon = [
                (int(p[0] / crop_scale[0] + crop_bbox[0]),
                 int(p[1] / crop_scale[1] + crop_bbox[1]))
                for p in polygon
            ]

            return polygon
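
As a self-contained check of `convert_mask_to_polygon`, running it on a synthetic rectangular mask (assuming the IOG repo dependencies are importable, as they are inside the container) should return roughly the rectangle's four corners:

```python
# Quick check of convert_mask_to_polygon on a synthetic mask.
import numpy as np
from model_handler import convert_mask_to_polygon  # needs PYTHONPATH as in function.yaml

mask = np.zeros((100, 100), dtype=np.uint8)
mask[20:60, 30:80] = 1  # rectangle: x in [30, 79], y in [20, 59]

polygon = convert_mask_to_polygon(mask)
# Approximately the four corners (order may vary):
# [[30, 20], [30, 59], [79, 59], [79, 20]]
print(polygon)
```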