
Commit

readme update
chrischoy committed Jan 22, 2020
1 parent 4197883 commit c52ba2f
Showing 4 changed files with 19 additions and 20 deletions.
2 changes: 1 addition & 1 deletion CHANGELOG.md
@@ -19,7 +19,7 @@
### Changed

- `SparseTensor` additional coords.device guard
- `MinkowskiConvolution`, `Minkowski*Pooling` output coordinates will be equal to the input coordinates if stride == 1. Before this change, they generated output coordinates defined first for a speicific tensor stride.
- `MinkowskiConvolution`, `Minkowski*Pooling` output coordinates will be equal to the input coordinates if stride == 1. Before this change, they reused output coordinates that were previously defined for a specific tensor stride.
- `MinkowskiUnion` and `Ops.cat` will take a variable number of sparse tensors, not a list of sparse tensors
- Namespace cleanup
- Fix global in out map with uninitialized global map
11 changes: 7 additions & 4 deletions README.md
@@ -10,7 +10,7 @@ The Minkowski Engine is an auto-differentiation library for sparse tensors. It s
## Building a Neural Network on a Sparse Tensor

The Minkowski Engine provides APIs that allow users to build a neural network on a sparse tensor. Then, how do we define convolution/pooling/transposed operations on a sparse tensor?
Visually, a convolution on a sparse tensor is similar to that on a dense tensor. However, on a sparse tensor, we compute convolution output on a few specified points. For more information, please visit [convolution on a sparse tensor](https://stanfordvl.github.io/MinkowskiEngine/convolution_on_sparse.html)
Visually, a convolution on a sparse tensor is similar to that on a dense tensor. However, on a sparse tensor, we compute convolution outputs on a few specified points. For more information, please visit [convolution on a sparse tensor](https://stanfordvl.github.io/MinkowskiEngine/convolution_on_sparse.html)
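
For a concrete feel of the API, here is a minimal sketch of a strided convolution on a sparse tensor. It assumes the v0.4-era `ME.SparseTensor(feats, coords=...)` constructor and the `ME.utils.sparse_collate` helper; keyword names differ slightly across Minkowski Engine versions.

```python
import torch
import MinkowskiEngine as ME

# A handful of integer voxel coordinates (D = 3) and 3-channel features.
coords = torch.IntTensor([[0, 0, 0], [0, 1, 2], [1, 2, 3], [3, 3, 3]])
feats = torch.rand(4, 3)

# sparse_collate batches the samples and adds the batch-index column,
# so the batch-index convention is handled for us.
coords, feats = ME.utils.sparse_collate([coords], [feats])
x = ME.SparseTensor(feats, coords=coords)

# A strided convolution: outputs are computed only on the (downsampled)
# occupied sites, never on a dense grid.
conv = ME.MinkowskiConvolution(
    in_channels=3, out_channels=16, kernel_size=3, stride=2, dimension=3)
y = conv(x)
print(y.tensor_stride)  # the tensor stride grows from [1, 1, 1] to [2, 2, 2]
```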

| Dense Tensor | Sparse Tensor |
|:---------------------------------:|:---------------------------------:|
@@ -187,10 +187,13 @@ class ExampleNetwork(ME.MinkowskiNetwork):

### Running the Examples

After installing the package, run `python -m examples.example` in the package root directory.
For indoor semantic segmentation. run `python -m examples.indoor` in the package root directory.
After installing the package, run `python -m examples.example` in the package root directory. There are many more examples; here are a few of the more exciting ones. To run one, type the command shown below its example image in a terminal.

![](https://stanfordvl.github.io/MinkowskiEngine/_images/segmentation.png)
| Example | Figures and Commands |
|:---------------------:|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|
| Semantic Segmentation | <p align="center"> <img src="https://stanfordvl.github.io/MinkowskiEngine/_images/segmentation.png" width="256"> </p> <br /> `python -m examples.indoor` |
| Classification | ![](https://stanfordvl.github.io/MinkowskiEngine/_images/classification_3d_net.png) <br /> `python -m examples.modelnet40` |
| Reconstruction | <p align="center"> <img src="https://stanfordvl.github.io/MinkowskiEngine/_images/generative_3d_net.png"> <br /> <img src="https://stanfordvl.github.io/MinkowskiEngine/_images/generative_3d_results.gif" width="256"> </p> <br /> `python -m examples.reconstruction` |


## Discussion and Documentation
17 changes: 5 additions & 12 deletions docs/demo/sparse_tensor_reconstruction.rst
@@ -5,8 +5,7 @@ In this page, we will go over a simple demo example that trains a 3D
convolutional neural network that reconstructs a 3D sparsity pattern from an
one-hot vector. This is similar to the `Octree Generating Networks, ICCV'17
<https://arxiv.org/abs/1703.09438>`_. The input one-hot vector indicates a 3D
Computer Aided Design (CAD) chairs from the ModelNet40 dataset. Here, we use a
small subset.
Computer Aided Design (CAD) chair index from the ModelNet40 dataset.

We use :attr:`MinkowskiEngine.MinkowskiConvolutionTranspose` along with
:attr:`MinkowskiEngine.MinkowskiPruning` to sequentially upsample a voxel by a
@@ -35,23 +34,23 @@ During a forward pass, we create two paths for 1) the main features and 2) a spa
out = pruning(out, out_cls > 0)
After multiple steps of upsampling and carving out unnecessary voxels, we have a target sparse tensor. The final reconstruction captures the geometry very accurately. Here, we visualized the hierarchical reconstruction result at each step: upsampling, pruning.
Until the input sparse tensor reaches the target resolution, the network repeats a series of upsampling and pruning steps that remove unnecessary voxels. We visualize the results in the following figure. Note that the final reconstruction captures the target geometry very accurately. We also visualize the hierarchical reconstruction process of upsampling and pruning.

.. image:: ../images/generative_3d_results.gif
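
For reference, the following is a minimal sketch of one such upsample-and-prune stage. The layer sizes and names are illustrative rather than the exact ones used in ``examples/reconstruction.py``, and depending on the Minkowski Engine version the pruning mask may need to be a CPU boolean tensor.

.. code-block:: python

   import torch.nn as nn
   import MinkowskiEngine as ME

   D = 3  # spatial dimension

   # One upsample-and-prune stage (illustrative channel sizes).
   upsample = nn.Sequential(
       ME.MinkowskiConvolutionTranspose(32, 16, kernel_size=2, stride=2, dimension=D),
       ME.MinkowskiBatchNorm(16),
       ME.MinkowskiELU())
   classify = ME.MinkowskiConvolution(16, 1, kernel_size=1, dimension=D)
   pruning = ME.MinkowskiPruning()

   def upsample_and_prune(x):
       out = upsample(x)                 # double the resolution (halve the tensor stride)
       out_cls = classify(out)           # per-voxel keep/discard logit
       keep = (out_cls.F > 0).squeeze()  # threshold the logits into a boolean mask
       return pruning(out, keep), out_cls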


Running the Example
-------------------

To train a network, go to the Minkowski Engine root directory, and type
To train a network, go to the Minkowski Engine root directory, and type:


.. code-block::

   python -m examples.reconstruction --train
To visualize the network prediction after you finished training, type
To visualize network predictions, or to try out a pretrained model, type:

.. code-block::
@@ -60,14 +59,8 @@ To visualize the network prediction after you finished training, type
.. image:: ../images/demo_reconstruction.png

The program will visualize two 3D shapes. One one the left is the target 3D
The program will visualize two 3D shapes. One on the left is the target 3D
shape, one on the right is the reconstructed network prediction.


Using Pretrained Weights
------------------------

You can also download a pretrained model from here: `modelnet_reconstruction.pth <>`_.

The entire code can be found at `example/reconstruction.py
<https://github.com/StanfordVL/MinkowskiEngine/blob/master/examples/reconstruction.py>`_.
9 changes: 6 additions & 3 deletions docs/overview.md
@@ -187,10 +187,13 @@ class ExampleNetwork(ME.MinkowskiNetwork):

### Running the Examples

After installing the package, run `python -m examples.example` in the package root directory.
For indoor semantic segmentation. run `python -m examples.indoor` in the package root directory.
After installing the package, run `python -m examples.example` in the package root directory. There are many more examples; here are a few of the more exciting ones. To run one, type the command shown below its example image in a terminal.

![](https://stanfordvl.github.io/MinkowskiEngine/_images/segmentation.png)
| Example | Figures and Code |
|:---------------------:|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|
| Semantic Segmentation | <p align="center"> <img src="https://stanfordvl.github.io/MinkowskiEngine/_images/segmentation.png" width="256"> </p> <br /> `python -m examples.indoor` |
| Classification | ![](https://stanfordvl.github.io/MinkowskiEngine/_images/classification_3d_net.png) <br /> `python -m examples.modelnet40` |
| Reconstruction | <p align="center"> <img src="https://stanfordvl.github.io/MinkowskiEngine/_images/generative_3d_net.png"> <br /> <img src="https://stanfordvl.github.io/MinkowskiEngine/_images/generative_3d_results.gif" width="256"> </p> <br /> `python -m examples.reconstruction` |


## Discussion and Documentation
