
CVU: Computer Vision Utils



Computer Vision pipeline framework with SOTA components for dummies and experts.

Whether you are developing an end-to-end computer vision pipeline or just looking to add some quick computer vision to your project, CVU can help! Designed to be used by both the expert and the novice, CVU aims at making CV pipelines easier to build and consistent across platforms, devices, and models.

Code Example

pip install cvu-python



CVU lets you create end-to-end pipelines from various SOTA, customizable components. With a focus on a common component interface, you naturally get a loosely coupled pipeline with most of the implementation details hidden. Because of this, you can combine any number of CVU components, in any order, to build a pipeline that fits your needs. You can set and switch between one or more pipeline input sources (e.g. an image, folder, video, or live stream) and output sinks (e.g. a video file, images with results drawn, TXT/JSON dumps, etc.).
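For instance, here is a minimal sketch of feeding the same detector from two different sources (the file names image.jpg, image_output.jpg, and example.mp4 are placeholders, and the backend/device choice is just one option):

import cv2
from cvu.detector import Detector
from vidsz.opencv import Reader

# one detector, reused across sources (backend/device are illustrative)
detector = Detector(classes="coco", backend="onnx", device="cpu")

# image source: read a single frame, detect, draw, and save
frame = cv2.imread("image.jpg")
detector(frame).draw(frame)
cv2.imwrite("image_output.jpg", frame)

# video source: the same detector call works frame by frame
with Reader("example.mp4") as reader:
    for frame in reader:
        detector(frame).draw(frame)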

It also comes with optional, customizable default settings that can run a benchmark on your platform/machine to choose dependencies optimally based on your accuracy and latency preferences. CVU can also automatically switch/select target devices (CPU, GPU, TPU), computation backends (TensorFlow, PyTorch, ONNX, TensorRT, TFLite), and models (small, big, etc.) based on where the pipeline is running.
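As a rough illustration of that idea in user code (the helper below is hypothetical and not part of CVU's API; the device string "gpu" is assumed by analogy with "cpu" and "tpu"):

from cvu.detector import Detector

def pick_backend_and_device():
    # hypothetical helper: prefer TensorRT on CUDA machines,
    # fall back to ONNX on CPU-only machines
    try:
        import torch
        if torch.cuda.is_available():
            return "tensorrt", "gpu"
    except ImportError:
        pass
    return "onnx", "cpu"

backend, device = pick_backend_and_device()
detector = Detector(classes="coco", backend=backend, device=device)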

Currently, CVU only provides Object Detection, but we are in the process of adding out-of-the-box support for Segmentation, Background Removal, Tracking, and Image-Text Matching.



CVU Says Hi!


How many installation steps and lines of code will you need to run object detection on a video with a TensorRT backend? How complicated is it to test that pipeline in Colab?

With CVU, you just need the following! No extra installation steps are needed to run on Colab: just pip install our tool, and you're all set to go!

from vidsz.opencv import Reader, Writer
from cvu.detector import Detector

# set video reader and writer, you can also use normal OpenCV
reader = Reader("example.mp4")
writer = Writer(reader, name="output.mp4")


# create detector with tensorrt backend having fp16 precision by default
detector = Detector(classes="coco", backend="tensorrt")

# process frames
for frame in reader:

    # make predictions.
    preds = detector(frame)

    # draw it on frame
    preds.draw(frame)

    # write it to output
    writer.write(frame)

writer.release()
reader.release()

Want to use fewer lines of code? How about this!

from cvu.detector import Detector
from vidsz.opencv import Reader, Writer

detector = Detector(classes="coco", backend="tensorrt")


with Reader("example.mp4") as reader:
    with Writer(reader, name="output.mp4") as writer:
        writer.write_all(map(lambda frame: detector(frame).draw(frame), reader))

Want to switch to a non-CUDA device? Just set device="cpu" and the backend to "onnx", "tflite", "torch", or "tensorflow".


detector = Detector(classes="coco", backend="onnx", device="cpu")

Want to use a TPU? Just set device="tpu" and choose a supported backend (only "tensorflow" is supported as of the latest release).


detector = Detector(classes="coco", backend="tensorflow", device="tpu")

You can change devices, platforms, and backends as much as you want, without having to change the rest of your pipeline.
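For example, a minimal sketch where the pipeline body stays fixed and only the detector construction changes (output file names are placeholders):

from cvu.detector import Detector
from vidsz.opencv import Reader, Writer

def run(detector, src="example.mp4", dst="output.mp4"):
    # the pipeline body never changes
    with Reader(src) as reader:
        with Writer(reader, name=dst) as writer:
            for frame in reader:
                detector(frame).draw(frame)
                writer.write(frame)

# only the constructor arguments differ between targets
run(Detector(classes="coco", backend="tensorrt"), dst="out_trt.mp4")
run(Detector(classes="coco", backend="onnx", device="cpu"), dst="out_onnx.mp4")
run(Detector(classes="coco", backend="tensorflow", device="tpu"), dst="out_tpu.mp4")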


Devices


Support Info

The following is the latest support matrix:

Device   TensorFlow   Torch   TFLite   ONNX   TensorRT
GPU      ✔            ✔       ✖        ✔      ✔
CPU      ✔            ✔       ✔        ✔      ✖
TPU      ✔            ✖       ✖        ✖      ✖

Recommended Backends (in order)

Based on FPS performance and various benchmarks (a small lookup sketch follows this list):

  • GPU: TensorRT > Torch > ONNX > TensorFlow
  • CPU: ONNX > TFLite > TensorFlow > Torch
  • TPU: TensorFlow
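
A minimal sketch of encoding these preferences in user code (the ordering is copied from the list above; the helper itself is not part of CVU):

# recommended backend order per device, taken from the list above
RECOMMENDED = {
    "gpu": ["tensorrt", "torch", "onnx", "tensorflow"],
    "cpu": ["onnx", "tflite", "tensorflow", "torch"],
    "tpu": ["tensorflow"],
}

def best_backend(device: str) -> str:
    # return the top-ranked backend for the given device
    return RECOMMENDED[device][0]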


