EDGE DEVICE OPTIMIZATIONS

Quantization for low-power architectures

We’ve developed a custom quantization algorithm for FPGAs that enables inference with just 5 bits; the resulting compact model is only 600 KB. For safety solutions, we trained YOLOX-S and quantized it to int8, reaching 10 FPS on the RV1126 chip at an input resolution of 480×832, with an impressive mAP of 80.6%. For the GAP8 platform by GreenWaves Technologies, we trained a people-detection model and quantized it to int8: the full model is a mere 700 KB, with an outstanding mAP of 96%.
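For the int8 models, the general recipe is post-training static quantization: insert observers, calibrate on representative images, then convert weights and activations to 8-bit. The sketch below illustrates that flow with PyTorch's FX quantization API on a stand-in CNN; it is only an assumption-laden illustration, since the actual RV1126 and GAP8 deployments go through their vendor toolchains, and the model, calibration data, and backend name here are placeholders. The 5-bit FPGA scheme uses a custom bit width that standard frameworks do not cover and is outside this sketch.

```python
import torch
from torch.ao.quantization import get_default_qconfig_mapping
from torch.ao.quantization.quantize_fx import prepare_fx, convert_fx

# Stand-in float model; a real deployment would load the trained detector
# (e.g. YOLOX-S) instead of this toy CNN.
model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 16, 3, padding=1),
    torch.nn.ReLU(),
    torch.nn.AdaptiveAvgPool2d(1),
    torch.nn.Flatten(),
    torch.nn.Linear(16, 2),
).eval()

# Input size chosen to match the 480x832 resolution mentioned above.
example_inputs = (torch.randn(1, 3, 480, 832),)

# Insert observers that record activation ranges ("fbgemm" is a generic
# x86 backend choice, not the target chip's toolchain).
prepared = prepare_fx(model, get_default_qconfig_mapping("fbgemm"), example_inputs)

# Calibration: run a handful of representative images through the model.
# Random tensors are used here purely as placeholders.
with torch.no_grad():
    for _ in range(16):
        prepared(torch.randn(1, 3, 480, 832))

# Convert to an int8 model with quantized weights and activations.
quantized = convert_fx(prepared)
print(quantized(torch.randn(1, 3, 480, 832)).shape)
```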
