
Commit

update README file
luigifreda committed Feb 24, 2019
1 parent 1a27e01 commit 9b25f34
Showing 1 changed file with 1 addition and 1 deletion.
README.md (2 changes: 1 addition & 1 deletion)
@@ -2,7 +2,7 @@

Author: [Luigi Freda](https://www.luigifreda.com)

- **pySLAM** is a *'toy'* implementation of a *Visual Odometry (VO)* pipeline in Python. It has been developed for **educational purposes** for a [computer vision class](https://as-ai.org/visual-perception-and-spatial-computing/) I taught. I started developing it for fun, during my free-time, taking inspiration from some repos available on the web.
+ **pySLAM** is a *'toy'* implementation of a *Visual Odometry (VO)* pipeline in Python. I released it for **educational purposes**, for a [computer vision class](https://as-ai.org/visual-perception-and-spatial-computing/) I taught. I started developing it for fun, during my free-time, taking inspiration from some repos available on the web.

Main Scripts:
* `main_vo.py` combines the simplest VO ingredients without performing any image point triangulation or windowed bundle adjustment. At each step $k$, `main_vo.py` estimates the current camera pose $C_k$ with respect to the previous one $C_{k-1}$. The inter-frame pose estimation returns $[R_{k-1,k},t_{k-1,k}]$ with $||t_{k-1,k}||=1$. With this very basic computation, you need ground truth in order to recover a correct inter-frame scale $s$ and estimate a meaningful trajectory by composing $C_k = C_{k-1} * [R_{k-1,k}, s t_{k-1,k}]$ (a minimal sketch of this composition follows below). This script is a good starting point for understanding the basics of inter-frame feature tracking and camera pose estimation.
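For illustration only (this sketch is not taken from the repo; `compose_pose` and `absolute_scale` are hypothetical helper names), the composition above can be written as a few lines of NumPy that rescale the unit-norm estimated translation with the ground-truth inter-frame distance before chaining it onto the previous pose:

```python
import numpy as np

def compose_pose(C_prev, R_rel, t_rel, scale):
    """Compose C_k = C_{k-1} * [R_{k-1,k}, s * t_{k-1,k}] using 4x4 homogeneous matrices."""
    T_rel = np.eye(4)
    T_rel[:3, :3] = R_rel
    T_rel[:3, 3] = scale * np.asarray(t_rel, dtype=float).ravel()
    return C_prev @ T_rel

def absolute_scale(gt_prev, gt_curr):
    """Recover the inter-frame scale s from two consecutive ground-truth positions."""
    return np.linalg.norm(np.asarray(gt_curr, dtype=float) - np.asarray(gt_prev, dtype=float))

# Toy example: the estimator returns a unit-norm translation, ground truth fixes the scale.
C = np.eye(4)                                 # C_0: world frame
R_rel = np.eye(3)                             # no rotation between frames k-1 and k
t_rel = np.array([0.0, 0.0, 1.0])             # ||t_{k-1,k}|| = 1 (scale is unobservable)
s = absolute_scale([0, 0, 0], [0, 0, 0.5])    # true inter-frame displacement: 0.5 m
C = compose_pose(C, R_rel, t_rel, s)
print(C[:3, 3])                               # -> [0.  0.  0.5]
```

Since a monocular pipeline cannot observe absolute scale, taking $s$ from the distance between consecutive ground-truth positions is the simplest way to plot a metric trajectory.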
