faster CPU inference, minor fixes
- in pointnet: decreased the batch size for CPU inference, now much faster
- in visualize: automatically fall back to features_supervision if features is not found
- in graph_processing: allow computing the spg on both train and test sets
- in README: incorporated @atineoSE's fix for the recurring make error
loicland committed Jul 29, 2019
1 parent 3cc9e38 commit 344e1bf
Showing 4 changed files with 11 additions and 9 deletions.
8 changes: 5 additions & 3 deletions README.md
@@ -70,15 +70,17 @@ cd build
 cmake .. -DPYTHON_LIBRARY=$CONDAENV/lib/libpython3.6m.so -DPYTHON_INCLUDE_DIR=$CONDAENV/include/python3.6m -DBOOST_INCLUDEDIR=$CONDAENV/include -DEIGEN3_INCLUDE_DIR=$CONDAENV/include/eigen3
 make
 ```
-The code was tested on Ubuntu 14.04 with Python 3.6 and PyTorch 0.2 to 1.0.
+The code was tested on Ubuntu 14 and 16 with Python 3.5 to 3.8 and PyTorch 0.2 to 1.1.
 
 ### Troubleshooting
 
-Common sources of error and how to fix them:
-- $CONDA_ENV is not well defined : define it or replace $CONDA_ENV by the absolute path of your environment (find it with ```locate anaconda```)
+Common sources of errors and how to fix them:
+- $CONDAENV is not well defined : define it or replace $CONDAENV by the absolute path of your conda environment (find it with ```locate anaconda```)
 - anaconda uses a different version of python than 3.6m : adapt it in the command. Find which version of python conda is using with ```locate anaconda3/lib/libpython```
 - you are using boost 1.62 or older: update it
 - cut pursuit did not download: manually clone it in the ```partition``` folder or add it as a submodule as proposed in the requirements, point 4.
+- error in make: `'numpy/ndarrayobject.h' file not found`: set symbolic link to python site-package with `sudo ln -s $CONDAENV/lib/python3.7/site-packages/numpy/core/include/numpy $CONDAENV/include/numpy`
+
 
 ## Running the code
 
2 changes: 1 addition & 1 deletion learning/pointnet.py
@@ -209,7 +209,7 @@ def run_batch(self, model, clouds, clouds_global, *excess):
     def run_batch_cpu(self, model, clouds, clouds_global, *excess):
         """ Evaluates the cloud on CPU, but put the values in the CPU as soon as they are computed"""
         #cudnn cannot handle arrays larger than 2**16 in one go, uses batch
-        batch_size = 2**16-1
+        batch_size = 2**10-1
         n_batches = int(clouds.shape[0]/batch_size)
         emb_total = self.run_batch(model, clouds[:batch_size,:,:], clouds_global[:batch_size,:]).cpu()
         for i in range(1,n_batches+1):
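The chunked evaluation that `run_batch_cpu` performs can be sketched as follows. This is a minimal illustration, not the repository's code: `run_in_batches` and the toy model are made-up stand-ins for `run_batch`, and the point of the smaller `batch_size` is simply to keep each forward pass well under the 2**16-element limit while lowering peak memory per call.

```python
import numpy as np

def run_in_batches(model_fn, clouds, batch_size=2**10 - 1):
    """Evaluate `clouds` chunk by chunk and concatenate the results.

    Illustrative stand-in for run_batch_cpu: each chunk is processed
    independently, so memory use is bounded by batch_size, not by the
    total number of clouds.
    """
    outputs = []
    for start in range(0, clouds.shape[0], batch_size):
        outputs.append(model_fn(clouds[start:start + batch_size]))
    return np.concatenate(outputs, axis=0)

# toy "model": embed each cloud as the mean of its points
clouds = np.random.rand(3000, 128, 3)
emb = run_in_batches(lambda c: c.mean(axis=1), clouds)
```

The result is identical to a single full-batch call; only the per-call working-set size changes, which is why shrinking the batch can speed up CPU inference without affecting outputs.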
5 changes: 2 additions & 3 deletions partition/visualize.py
@@ -44,10 +44,9 @@
 if args.dataset == 'custom_dataset':
     n_labels = 10
 #---load the values------------------------------------------------------------
-if args.supervized_partition:
+fea_file = root + "features/" + folder + file_name + '.h5'
+if not os.path.isfile(fea_file):
     fea_file = root + "features_supervision/" + folder + file_name + '.h5'
-else:
-    fea_file = root + "features/" + folder + file_name + '.h5'
 spg_file = root + "superpoint_graphs/" + folder + file_name + '.h5'
 ply_folder = root + "clouds/" + folder
 ply_file = ply_folder + file_name
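The pattern this hunk introduces, preferring one path and silently falling back to another when the file is absent, can be sketched in isolation. The helper name and directory layout below are made-up for the demo; only the `os.path.isfile` fallback mirrors the commit:

```python
import os
import tempfile

def find_feature_file(root, name):
    """Prefer root/features/<name>.h5, else root/features_supervision/<name>.h5.

    Illustrative sketch of the fallback added to visualize.py.
    """
    fea_file = os.path.join(root, "features", name + ".h5")
    if not os.path.isfile(fea_file):
        fea_file = os.path.join(root, "features_supervision", name + ".h5")
    return fea_file

# demo: the file only exists in the supervised-features folder
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "features_supervision"))
open(os.path.join(root, "features_supervision", "scene.h5"), "w").close()
chosen = find_feature_file(root, "scene")
```

This removes the need for the old `--supervized_partition` flag when visualizing: whichever pipeline produced the features, the right file is picked up automatically.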
5 changes: 3 additions & 2 deletions supervized_partition/graph_processing.py
@@ -329,11 +329,12 @@ def create_sema3d_datasets(args, test_seed_offset=0):
         testlist += [path + 'train/' + f + '.h5' for f in train_names]
     if 'val' in args.db_test_name:
         testlist += [path + 'train/' + f + '.h5' for f in valid_names]
-    elif 'testred' in args.db_test_name:
+    if 'testred' in args.db_test_name:
         testlist += [f for f in glob.glob(path + 'test_reduced/*.h5')]
-    elif 'testfull' in args.db_test_name:
+    if 'testfull' in args.db_test_name:
         testlist += [f for f in glob.glob(path + 'test_full/*.h5')]
+
 
     return tnt.dataset.ListDataset(trainlist,
                functools.partial(graph_loader, train=True, args=args, db_path=args.ROOT_PATH)), \
            tnt.dataset.ListDataset(testlist,
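Why the `elif` to `if` change matters: with a chain of `elif`s, only the first matching tag in `db_test_name` contributes to the test list, so a name combining several tags could never select more than one split. Independent `if` blocks let the matches accumulate. A minimal sketch (the function name, tag spelling `'val-testred'`, and split paths are illustrative, not the repository's exact values):

```python
def select_test_splits(db_test_name):
    """Collect every split whose tag appears in db_test_name.

    Independent `if` blocks, as in the commit, let one name select
    several splits; an `elif` chain would stop at the first match.
    """
    testlist = []
    if 'val' in db_test_name:
        testlist.append('train/val_scenes')
    if 'testred' in db_test_name:
        testlist.append('test_reduced/*.h5')
    if 'testfull' in db_test_name:
        testlist.append('test_full/*.h5')
    return testlist

combined = select_test_splits('val-testred')
```

Here `combined` contains both the validation and reduced-test entries, which is what makes it possible to compute the spg on train and test sets in one run.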
