Toolkit for efficient experimentation with Speech Recognition, Text2Speech and NLP
Up to 200x Faster Inner Products and Vector Similarity — for Python, JavaScript, Rust, C, and Swift, supporting f64, f32, f16 real & complex, i8, and binary vectors using SIMD for both x86 AVX2 & AVX-512 and Arm NEON & SVE 📐
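A hedged sketch of the kind of kernel such a library accelerates (plain NumPy, not the library's actual API): cosine similarity over f16 vectors, upcasting to f32 for accumulation to avoid half-precision overflow and rounding.

```python
import numpy as np

def cosine_similarity_f16(a, b):
    # store vectors in half precision, as a similarity library would
    a = np.asarray(a, dtype=np.float16)
    b = np.asarray(b, dtype=np.float16)
    # upcast to float32 before reducing: summing many f16 products
    # quickly loses precision and can overflow (max f16 is 65504)
    a32, b32 = a.astype(np.float32), b.astype(np.float32)
    dot = np.dot(a32, b32)
    return dot / (np.linalg.norm(a32) * np.linalg.norm(b32))
```

SIMD libraries implement the same pattern with vector instructions: load packed f16 lanes, widen to f32, multiply-accumulate.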
Half-precision floating point types f16 and bf16 for Rust.
Stage 3 IEEE 754 half-precision floating-point ponyfill
float16 provides IEEE 754 half-precision format (binary16) with correct conversions to/from float32
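CPython's `struct` module supports the IEEE 754 binary16 format natively via the `'e'` format code, so the conversion to and from float32/float64 (and the precision loss it introduces) is easy to observe without any third-party library:

```python
import struct

def to_f16_and_back(x):
    # pack as IEEE 754 binary16 ('e'), then unpack: the round trip
    # shows the rounding that conversion to half precision introduces
    return struct.unpack('<e', struct.pack('<e', x))[0]

# 0.1 is not exactly representable in binary16; the nearest
# half-precision value is 0.0999755859375 (bit pattern 0x2E66)
```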
🎯 Accumulated Gradients for TensorFlow 2
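A minimal pure-Python sketch of the gradient-accumulation idea (not the repo's TF2 API): instead of updating weights after every micro-batch, sum gradients over `accum_steps` micro-batches and apply one averaged update, emulating a larger effective batch size.

```python
def grad(w, x, y):
    # gradient of squared error 0.5 * (w*x - y)**2 w.r.t. w
    return (w * x - y) * x

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0), (4.0, 8.0)]  # y = 2x
w, lr, accum_steps = 0.0, 0.1, 2
acc, n = 0.0, 0
for x, y in data:
    acc += grad(w, x, y)   # accumulate instead of updating immediately
    n += 1
    if n == accum_steps:
        w -= lr * (acc / accum_steps)  # one averaged update per group
        acc, n = 0.0, 0
```

This matters for float16 training in particular: larger effective batches can be simulated on memory-constrained hardware.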
half float library for C and for z80
C++20 implementation of a 16-bit floating-point type mimicking most of the IEEE 754 behavior. Single-file and header-only.
TFLite applications: optimized .tflite models (lightweight and low-latency) and code to run them directly on your microcontroller!
Difference between one and the smallest value greater than one that can be represented as a half-precision floating-point number.
Square root of half-precision floating-point epsilon.
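For binary16, with its 10 explicit significand bits, the machine epsilon is 2^-10 = 0.0009765625, and its square root is exactly 2^-5 = 0.03125. A small sketch using CPython's built-in `'e'` half-precision struct format:

```python
import math
import struct

eps = 2.0 ** -10           # binary16 machine epsilon: 0.0009765625
sqrt_eps = math.sqrt(eps)  # exactly 2**-5 = 0.03125

def f16(x):
    # round-trip a value through IEEE 754 binary16
    return struct.unpack('<e', struct.pack('<e', x))[0]

# 1 + eps is the smallest representable half strictly greater than 1,
# while 1 + eps/2 sits exactly halfway and rounds back down to 1
# under round-to-nearest, ties-to-even
assert f16(1.0 + eps) > 1.0
assert f16(1.0 + eps / 2) == 1.0
```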
Utility for converting 16-bit floats
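A minimal Python sketch of such a conversion utility (an illustration, not any particular repo's code), exposing the raw bit layout of a half float via `struct`:

```python
import struct

def f16_bits(x):
    # float -> IEEE 754 binary16 bit pattern as an int
    return struct.unpack('<H', struct.pack('<e', x))[0]

def bits_to_f16(bits):
    # 16-bit pattern -> Python float
    return struct.unpack('<e', struct.pack('<H', bits))[0]

# 1.0 is sign=0, exponent=01111 (bias 15), mantissa=0 -> 0x3C00
```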