Commit ad35ec5: format all

houjing.huang committed Jan 23, 2019
1 parent 082be6f commit ad35ec5
Showing 32 changed files with 1,615 additions and 211 deletions.
3 changes: 2 additions & 1 deletion .gitignore
@@ -15,4 +15,5 @@ encoding.egg-info/
.*
/encoding/lib/*/*.so
/encoding/lib/*/*.o
*.ninja
/tmp
24 changes: 24 additions & 0 deletions Fix_Bugs_of_Pytorch_Encoding.md
@@ -0,0 +1,24 @@
## PyTorch-Encoding

After installing the prerequisites [following these instructions](Install_Prerequisite_for_Pytorch_Encoding.md), clone PyTorch-Encoding
```bash
# Tested with commit ce461da
git clone https://github.com/zhanghang1989/PyTorch-Encoding.git
```
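To pin the exact revision tested here, you can check it out after cloning (a minimal sketch using the commit noted above):
```bash
cd PyTorch-Encoding
git checkout ce461da
```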

Then, following [this issue](https://github.com/zhanghang1989/PyTorch-Encoding/issues/161), replace `#include <torch/extension.h>` with `#include <torch/serialize/tensor.h>` in all `encoding/lib/*/*.cpp` and `encoding/lib/*/*.cu` files. Also add `#include <torch/serialize/tensor.h>` to `encoding/lib/gpu/operator.h`.
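A quick way to apply the replacement in bulk is `sed` (a sketch; it edits the files in place, so rely on git to undo mistakes):
```bash
# Swap the include in every C++/CUDA source under encoding/lib
sed -i 's|#include <torch/extension.h>|#include <torch/serialize/tensor.h>|g' \
    encoding/lib/*/*.cpp encoding/lib/*/*.cu
```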

The PyTorch-Encoding docs call for `python setup.py install`, but this is not strictly necessary; you can instead add the package path to `$PYTHONPATH` or `sys.path`.
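For example (a sketch, assuming the repository was cloned to `/path/to/PyTorch-Encoding`):
```bash
# Make the `encoding` package importable without installing it
export PYTHONPATH=/path/to/PyTorch-Encoding:${PYTHONPATH}
```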

Run the examples
```bash
export CUDA_HOME=/mnt/data-1/data/houjing.huang/Software/cuda-9.0
export LD_LIBRARY_PATH=${CUDA_HOME}/lib64:${LD_LIBRARY_PATH}

python scripts/prepare_pcontext.py
python test.py --dataset PContext --model-zoo Encnet_ResNet50_PContext --eval
```

## Misc

- The [default ResNet](https://github.com/zhanghang1989/PyTorch-Encoding/blob/ce461dae8d088253dcd9818d2999d4049bce3493/encoding/models/resnet.py) in PyTorch-Encoding differs from the PyTorch [torchvision](https://github.com/pytorch/vision/blob/98ca260bc834ec94a8143e4b5cfe9516b0b951a2/torchvision/models/resnet.py) one. The former provides `deep_base`, `dilation`, and `multi_grid` options.
122 changes: 122 additions & 0 deletions Install_Prerequisite_for_Pytorch_Encoding.md
@@ -0,0 +1,122 @@
We need
- PyTorch 1.0
- CUDA 9.0
- ninja

# CUDA 9.0

You can install [CUDA 9.0](https://developer.nvidia.com/cuda-90-download-archive), [cuDNN](https://developer.nvidia.com/cudnn), and [NCCL](https://developer.nvidia.com/nccl) into a local directory.

First install CUDA by running the downloaded executable.
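For example (a sketch; the runfile name depends on the exact version you download, and `--toolkitpath` sets the local install directory):
```bash
# Non-interactive, user-local toolkit install (no driver)
sh cuda_9.0.176_384.81_linux-run --silent --toolkit \
    --toolkitpath=${HOME}/cuda-9.0
```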

Then download and extract cuDNN.

```
cudnn-9.0-linux-x64-v7.4.2.24
|-- include
| `-- cudnn.h
|-- lib64
| |-- libcudnn.so -> libcudnn.so.7
| |-- libcudnn.so.7 -> libcudnn.so.7.4.2
| |-- libcudnn.so.7.4.2
| `-- libcudnn_static.a
`-- NVIDIA_SLA_cuDNN_Support.txt
```

Move the cuDNN files into the CUDA directory:
```bash
cp cudnn-9.0-linux-x64-v7.4.2.24/include/* cuda-9.0/include/
cp cudnn-9.0-linux-x64-v7.4.2.24/lib64/* cuda-9.0/lib64/
```

Then download and extract NCCL.

```
nccl_2.3.7-1+cuda9.0_x86_64
|-- include
| `-- nccl.h
|-- lib
| |-- libnccl.so -> libnccl.so.2
| |-- libnccl.so.2 -> libnccl.so.2.3.7
| |-- libnccl.so.2.3.7
| `-- libnccl_static.a
`-- LICENSE.txt
```

Move the NCCL files into the CUDA directory:
```bash
cp nccl_2.3.7-1+cuda9.0_x86_64/include/* cuda-9.0/include/
cp nccl_2.3.7-1+cuda9.0_x86_64/lib/* cuda-9.0/lib64/
```
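You can quickly confirm that the headers and libraries landed in the right place (a sketch, assuming `cuda-9.0` is the directory used above):
```bash
ls cuda-9.0/include/cudnn.h cuda-9.0/include/nccl.h
ls cuda-9.0/lib64/libcudnn.so* cuda-9.0/lib64/libnccl.so*
```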

# PyTorch 1.0

PyTorch must be installed from source.

Set up your Python environment before installing PyTorch. Anaconda is required in the following example. Change `WORKING_DIR` and `CUDA_HOME` to your own paths, then run the following commands.

```bash
WORKING_DIR=<your-directory-to-save-intermediate-results-of-installing-pytorch>
TORCH_DIR_NAME=pytorch_v1.0.0
mkdir -p ${WORKING_DIR}

pip uninstall -y torch

# Install basic dependencies
conda install --yes numpy pyyaml mkl mkl-include setuptools cmake cffi typing
conda install --yes -c mingfeima mkldnn
# Add LAPACK support for the GPU
conda install --yes -c pytorch magma-cuda90

cd ${WORKING_DIR}
rm -rf ${TORCH_DIR_NAME}
# Tested with commit db5d313
git clone --recursive --single-branch --branch v1.0.0 https://github.com/pytorch/pytorch.git ${TORCH_DIR_NAME}
cd ${TORCH_DIR_NAME}

rm -rf build
rm -rf torch.egg-info
export CUDA_HOME=<your-cuda-directory>
export USE_SYSTEM_NCCL=1
export NCCL_LIB_DIR=${CUDA_HOME}/lib64 # For CUDA version > 8.0, you have to download NCCL lib independently
export NCCL_INCLUDE_DIR=${CUDA_HOME}/include
export CMAKE_PREFIX_PATH="$(dirname $(which conda))/../" # [anaconda root directory]

python setup.py install 2>&1 | tee ${WORKING_DIR}/install_pytorch.log
cd ${WORKING_DIR}
python test_data_parallel.py 2>&1 | tee test_pytorch_data_parallel.log
```

The content of `test_data_parallel.py` used above is

```python
import torch
import torch.nn as nn
from torch.nn.parallel import DataParallel

model = nn.Linear(10, 20).cuda()
x = torch.ones(100, 10).float().cuda()
model_w = DataParallel(model, device_ids=[0,1,2,3])
x = model_w(x)
# x = model(x)
print(x.size()) # It should be (100, 20)
```

# ninja

Download and extract ninja 1.8.2 from <https://github.com/ninja-build/ninja/releases/download/v1.8.2/ninja-linux.zip>, then add it to your `PATH`.
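For example (a sketch, installing into `${HOME}/software/ninja`):
```bash
mkdir -p ${HOME}/software/ninja && cd ${HOME}/software/ninja
wget https://github.com/ninja-build/ninja/releases/download/v1.8.2/ninja-linux.zip
unzip ninja-linux.zip   # yields a single `ninja` binary
chmod +x ninja
export PATH=${HOME}/software/ninja:${PATH}
```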

# Environment Variables

After installing PyTorch, CUDA, and ninja, adapt the following lines and add them to your `.bashrc`.

```bash
export anaconda_home=<your-anaconda-directory>
export PATH=${anaconda_home}/bin:${PATH}
export LD_LIBRARY_PATH=${anaconda_home}/lib:${LD_LIBRARY_PATH}

export CUDA_HOME=<your-cuda-directory>
export LD_LIBRARY_PATH=${CUDA_HOME}/lib64:${LD_LIBRARY_PATH}

export PATH=<your-ninja-directory>:${PATH}
```
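To sanity-check the setup in a fresh shell (a sketch; the data-parallel test above additionally needs the GPUs listed in its `device_ids`):
```bash
${CUDA_HOME}/bin/nvcc --version   # CUDA toolkit found?
ninja --version                   # should print 1.8.2
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
```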
127 changes: 103 additions & 24 deletions README.md
@@ -1,35 +1,114 @@
# About

This is a fork of [PyTorch-Encoding](https://github.com/zhanghang1989/PyTorch-Encoding) (refer to it for the original README). In this repository, we use [DANet](https://github.com/junfu1115/DANet) to train a part segmentation model on the [COCO DensePose dataset](http://densepose.org/), for use in our person re-identification paper [EANet](https://github.com/huanghoujing/EANet).

# Installation

First install the prerequisites [following these instructions](Install_Prerequisite_for_Pytorch_Encoding.md).

Then clone this project
```bash
git clone https://github.com/huanghoujing/PyTorch-Encoding.git
```

# Dataset

We transform the COCO DensePose dataset as described in the [EANet](https://github.com/huanghoujing/EANet) paper. For details, please refer to `scripts/coco_part`. **Note** that the imports and data paths in these scripts have not been re-arranged and will cause errors when executed.

The resulting segmentation dataset can be downloaded from [Baidu Cloud](https://pan.baidu.com/s/1Mm2gWO-Xg3wiyCd6SEAWaA#list/path=%2Fsharelink1629242940-281396814376268%2FEANet%2Fdataset%2Fcoco&parentPath=%2Fsharelink1629242940-281396814376268) or [Google Drive](https://drive.google.com/drive/folders/1gITlG2MfhJXUpfEPt6ohJCBgqigchabW).

Prepare the dataset to have the following structure
```
${project_dir}/dataset/coco_part
    images
    masks_7_parts
    masks -> masks_7_parts  # a link created by `ln -s masks_7_parts masks`
    train.txt
    train_market1501_cuhk03_duke_style.txt
    val.txt
```

# Examples

Download our trained model ([Baidu Cloud](https://pan.baidu.com/s/1Mm2gWO-Xg3wiyCd6SEAWaA#list/path=%2Fsharelink1629242940-281396814376268%2FEANet%2Fpart_segmentation_model&parentPath=%2Fsharelink1629242940-281396814376268) or [Google Drive](https://drive.google.com/drive/folders/1suBZk1WhpiS5PdB3GFySEEkC2FjzPQpL)) to `${project_dir}/exp/EANet_paper_ps_model/model_best.pth.tar`.

## Infer and Visualize Your Images

Specify your image directory `dir_of_im_to_vis` and an output directory `vis_save_dir` in the following command. It will select at most `max_num_vis` images to infer and will save the visualizations.

```bash
CUDA_VISIBLE_DEVICES=0 \
python experiments/coco_part/train.py \
--dir_of_im_to_vis YOUR_IMAGE_DIRECTORY \
--vis_save_dir OUTPUT_DIRECTORY \
--resume exp/EANet_paper_ps_model/model_best.pth.tar \
--only-vis \
--max_num_vis 128
```

`misc/example_visualization.png` is an example result.

## Infer and Save Prediction

Specify your image directory `dir_of_im_to_infer` and an output directory `infer_save_dir` in the following command. Predictions will be saved to `infer_save_dir`, mirroring the sub-paths of the original images.

```bash
CUDA_VISIBLE_DEVICES=0 \
python experiments/coco_part/train.py \
--resume exp/EANet_paper_ps_model/model_best.pth.tar \
--only-infer \
--dir_of_im_to_infer YOUR_IMAGE_DIRECTORY \
--infer_save_dir OUTPUT_DIRECTORY
```

For each image, the prediction is saved as a single-channel PNG image with the same resolution as the input image. Each pixel value of the output image is the part label of that pixel. Refer to [this link](https://github.com/huanghoujing/EANet#part-segmentation-label-format) for the part index mapping. Optionally, you can use the script `experiments/coco_part/colorize_pred_mask.py` to colorize the predicted masks for visualization.

## Validate on COCO Part Val Set

The following command validates on the val set, with a single scale and flipping but without cropping. You should get `pixAcc: 0.9034, mIoU: 0.6670`.

```bash
CUDA_VISIBLE_DEVICES=0 \
python experiments/coco_part/train.py \
--resume exp/EANet_paper_ps_model/model_best.pth.tar \
--only-val
```


## Training

Since person images are much smaller than those in other segmentation tasks, we can train on a single GPU while maintaining a large batch size. For example, with a batch size of 16, GPU usage is about 5600MB.

```bash
CUDA_VISIBLE_DEVICES=0 \
python experiments/coco_part/train.py \
--norm_layer bn \
--train-split train \
--batch-size 16 \
--test-batch-size 16 \
--exp_dir exp/train
```

You can also use multiple GPUs with synchronized BN by setting `norm_layer` to `sync_bn`

```bash
CUDA_VISIBLE_DEVICES=0,1 \
python experiments/coco_part/train.py \
--norm_layer sync_bn \
--train-split train \
--batch-size 16 \
--test-batch-size 16 \
--exp_dir exp/train
```

# Citation

If you find our work useful, please kindly cite our paper:
```
@article{huang2018eanet,
title={EANet: Enhancing Alignment for Cross-Domain Person Re-identification},
author={Huang, Houjing and Yang, Wenjie and Chen, Xiaotang and Zhao, Xin and Huang, Kaiqi and Lin, Jinbin and Huang, Guan and Du, Dalong},
journal={arXiv preprint arXiv:1812.11369},
year={2018}
}
```
2 changes: 1 addition & 1 deletion encoding/datasets/coco_part.py
@@ -1,6 +1,6 @@
###########################################################################
# Created by: Houjing Huang
# Copyright (c) 2019
###########################################################################

import os