ncnn is a high-performance neural network inference framework optimized for the mobile platform
ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator
Open source real-time translation app for Android that runs locally
Tengine is a lightweight, high-performance, modular inference engine for embedded devices
🛠 A lightweight C++ toolkit of awesome AI models, supporting the ONNXRuntime and MNN engines. Contains YOLOv5, YOLOv6, YOLOX, YOLOR, FaceDet, HeadSeg, HeadPose, Matting, etc.
⚡️ An easy-to-use and fast deep learning model deployment toolkit for ☁️Cloud, 📱Mobile, and 📹Edge, covering 20+ mainstream scenarios across image, video, text, and audio with 150+ SOTA models, end-to-end optimization, and multi-platform, multi-framework support.
An OBS plugin for removing the background from portrait images and video, making it easy to replace the background when recording or streaming.
Speech-to-text, text-to-speech, and speaker recognition using next-gen Kaldi with onnxruntime, without an Internet connection. Supports embedded systems, Android, iOS, Raspberry Pi, RISC-V, x86_64 servers, WebSocket server/client, C/C++, Python, Kotlin, C#, Go, Node.js, Java, Swift, Dart, JavaScript, Flutter
Lightweight inference library for ONNX files, written in C++. It can run SDXL on a Raspberry Pi Zero 2, and also Mistral 7B on desktops and servers.
nGraph has moved to OpenVINO
Machine learning on FPGAs using HLS
Samples and Tools for Windows ML.
Machine learning framework for both deep learning and traditional algorithms