MIVisionX toolkit is a set of comprehensive computer vision and machine intelligence libraries, utilities, and applications bundled into a single toolkit.
- AMD OpenVX
- AMD OpenVX Extensions
- Applications
- Neural Net Model Compiler
- Samples
- Toolkit
- Utilities
- Pre-requisites
- Build MIVisionX
- Docker
- Release Notes
AMD OpenVX (amd_openvx) is a highly optimized open source implementation of the Khronos OpenVX computer vision specification. It allows for rapid prototyping as well as fast execution on a wide range of computer hardware, including small embedded x86 CPUs and large workstation discrete GPUs.
The OpenVX framework provides a mechanism for 3rd-party vendors to add new vision functions to OpenVX. This project has the following OpenVX modules and utilities that extend the amd_openvx project, which contains the AMD OpenVX Core Engine.
- amd_loomsl: AMD Radeon LOOM stitching library for live 360 degree video applications
- amd_nn: OpenVX neural network module
- amd_opencv: OpenVX module that implements a mechanism to access OpenCV functionality as OpenVX kernels
MIVisionX has a number of applications (apps) built on top of the OpenVX modules. These applications use AMD-optimized libraries and can be used to prototype, or serve as models for developing, a product.
- Cloud Inference Application
- External Applications
Model compiler generates efficient inference libraries from pre-trained neural net models.
The MIVisionX Toolkit is a comprehensive set of helper tools for neural net creation, development, training, and deployment. The Toolkit provides tools to design, develop, quantize, prune, retrain, and run inference on your neural network in any framework, and is designed to help you deploy your work to any AMD or 3rd-party hardware, from embedded devices to servers.
MIVisionX provides you with tools for accomplishing your tasks throughout the whole neural net life cycle, from creating a model to deploying it on your target platforms.
- inference_generator: generate an inference library from pre-trained Caffe models
- loom_shell: an interpreter to prototype 360 degree video stitching applications using a script
- RunVX: command-line utility to execute an OpenVX graph described in a GDF text file
- RunCL: command-line utility to build, execute, and debug OpenCL programs
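As an illustration, RunVX reads a graph description file (GDF) that declares data objects and the nodes that connect them. The sketch below is hypothetical — the file names and image sizes are placeholders, and the exact kernel and format names should be verified against the RunVX documentation:

```
# declare two 640x480 8-bit images and run a 3x3 box filter between them
data input  = image:640,480,U008
data output = image:640,480,U008
read input  input.yuv
node org.khronos.openvx.box_3x3 input output
write output output.yuv
```

A file like this would then be executed with `runvx graph.gdf`.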
If you're interested in Neural Network Inference, start with the sample cloud inference application in apps folder.
(Figures omitted: Inference Application Development Workflow; Sample Inference Application.)
Refer to Wiki page for further details.
- CPU: SSE4.1 or above CPU, 64-bit
- GPU: Radeon Instinct or Vega Family of Products (16GB recommended)
- CMake 2.8 or newer
- Qt Creator for annInferenceApp
- protobuf for inference_generator and vx_nn
  - install `libprotobuf-dev` and `protobuf-compiler`
- OpenCV 3 (optional) for vx_opencv
  - Set the `OpenCV_DIR` environment variable to the OpenCV/build folder
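For example, the variable can be set from the shell before running cmake (the path below is an assumed checkout location; adjust it to wherever OpenCV was built):

```shell
# point CMake's find_package(OpenCV) at the OpenCV build tree
export OpenCV_DIR=$HOME/OpenCV/build
```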
For developer convenience, we provide a setup script that installs all of the dependencies required by this project.
- Ubuntu `16.04`/`18.04` or CentOS `7.5`/`7.6`
- ROCm supported hardware
- ROCm
MIVisionX-setup.py - This script builds all the prerequisites required by MIVisionX. It creates a deps folder and installs all the prerequisites; the script only needs to be executed once. If the -d directory option is not given, the script installs the deps folder in the '~/' directory by default; otherwise it uses the user-specified folder.
usage:
python MIVisionX-setup.py -s [sudo password - required] -d [setup directory - optional (default:~/)] -m [MIOpen Version - optional (default:1.6.0)]
Refer to Wiki page for developer instructions.
- Install ROCm
- git clone, build, and install other ROCm projects (using `cmake` and `% make install`) in the below order for vx_nn:
  - rocm-cmake
  - MIOpenGEMM
  - MIOpen -- make sure to use the `-DMIOPEN_BACKEND=OpenCL` option with cmake
- install protobuf
- install OpenCV
- git clone this project using the `--recursive` option so that the correct branch of the deps project is cloned automatically.
- build and install (using `cmake` and `% make install`)
  - executables will be placed in the `bin` folder
  - libraries will be placed in the `lib` folder
  - the installer will copy all executables into `/opt/rocm/mivisionx/bin` and libraries into `/opt/rocm/mivisionx/lib`
  - the installer also copies all the OpenVX and module header files into the `/opt/rocm/mivisionx/include` folder
- add the installed library path to the `LD_LIBRARY_PATH` environment variable (default `/opt/rocm/mivisionx/lib`)
- add the installed executable path to the `PATH` environment variable (default `/opt/rocm/mivisionx/bin`)
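The two environment-variable steps above can be done in the shell (or added to `~/.bashrc`); the paths shown are the default install locations:

```shell
# prepend the default MIVisionX install locations under /opt/rocm/mivisionx
export LD_LIBRARY_PATH=/opt/rocm/mivisionx/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}
export PATH=/opt/rocm/mivisionx/bin:$PATH
```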
- Install ROCm
- Use the commands below to set up and build MIVisionX
git clone --recursive https://github.com/GPUOpen-ProfessionalCompute-Libraries/MIVisionX.git
cd MIVisionX
python MIVisionX-setup.py -s [sudo password - required] -d [setup directory - optional (default:~/)] -m [MIOpen Version - optional (default:1.6.0)]
mkdir build
cd build
cmake ../
make -j8
- build annInferenceApp.pro using Qt Creator
- or use annInferenceApp.py for simple tests
- Use loom.sln to build x64 platform
MIVisionX provides developers with docker images for Ubuntu 16.04, Ubuntu 18.04, CentOS 7.5, and CentOS 7.6. Using these docker images, developers can quickly prototype and build applications without being locked into a single system setup or losing valuable time figuring out the dependencies of the underlying software.
- Ubuntu `16.04`
- ROCm-supported hardware
- Step 1 - Install rocm-dkms
sudo apt update
sudo apt dist-upgrade
sudo apt install libnuma-dev
sudo reboot
wget -qO - http://repo.radeon.com/rocm/apt/debian/rocm.gpg.key | sudo apt-key add -
echo 'deb [arch=amd64] http://repo.radeon.com/rocm/apt/debian/ xenial main' | sudo tee /etc/apt/sources.list.d/rocm.list
sudo apt update
sudo apt install rocm-dkms
sudo reboot
- Step 2 - Setup Docker
sudo apt-get install curl
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get update
apt-cache policy docker-ce
sudo apt-get install -y docker-ce
sudo systemctl status docker
- Step 3 - Get Docker Image
sudo docker pull kiritigowda/mivisionx-ubuntu-16.04
- Step 4 - Run the docker image
sudo docker run -it --device=/dev/kfd --device=/dev/dri --cap-add=SYS_RAWIO --device=/dev/mem --group-add video --network host kiritigowda/mivisionx-ubuntu-16.04
- Optional: Map localhost directory on the docker image
- option to map a localhost directory containing trained Caffe models so that they can be accessed inside the docker image.
- usage: -v {LOCAL_HOST_DIRECTORY_PATH}:{DOCKER_DIRECTORY_PATH}
sudo docker run -it -v /home/:/root/hostDrive/ --device=/dev/kfd --device=/dev/dri --cap-add=SYS_RAWIO --device=/dev/mem --group-add video --network host kiritigowda/mivisionx-ubuntu-16.04
Layer name |
---|
Activation |
Argmax |
Batch Normalization |
Concat |
Convolution |
Deconvolution |
Fully Connected |
Local Response Normalization (LRN) |
Pooling |
Scale |
Slice |
Softmax |
Tensor Add |
Tensor Convert Depth |
Tensor Convert from Image |
Tensor Convert to Image |
Tensor Multiply |
Tensor Subtract |
Upsample Nearest Neighborhood |
- ROCm - 1.8.151 performance degradation
- Linux: Ubuntu - `16.04`/`18.04` & CentOS - `7.5`/`7.6`
- ROCm: rocm-dkms - `1.9.307`
- rocm-cmake - github master:ac45c6e
- MIOpenGEMM - 1.1.5
- MIOpen - 1.6.0
- Protobuf - V3.5.2
- OpenCV - 3.3.0
- Dependencies for all the above packages