Paper ID: 2409.19432 • Published Sep 28, 2024
MicroFlow: An Efficient Rust-Based Inference Engine for TinyML
Matteo Carnelos, Francesco Pasti, Nicola Bellotto
In recent years, there has been significant interest in running machine
learning algorithms on embedded systems. This is particularly relevant for bare
metal devices in Internet of Things, Robotics, and Industrial applications that
face limited memory, processing power, and storage, and which require extreme
robustness. To address these constraints, we present MicroFlow, an open-source
TinyML framework for the deployment of Neural Networks (NNs) on embedded
systems using the Rust programming language. The compiler-based inference
engine of MicroFlow, coupled with Rust's memory safety, makes it suitable for
TinyML applications in critical environments. The proposed framework enables
the successful deployment of NNs on highly resource-constrained devices,
including bare-metal 8-bit microcontrollers with only 2 kB of RAM. Furthermore,
MicroFlow uses less Flash and RAM than other state-of-the-art solutions when
deploying NN reference models (i.e., wake-word and person detection), achieving
equally accurate but faster inference than existing engines on medium-size NNs,
and comparable performance on larger ones.
The experimental results prove the efficiency and suitability of MicroFlow for
the deployment of TinyML models in critical environments where resources are
particularly limited.
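To make the compiler-based approach concrete, the sketch below shows the *kind* of Rust code such an engine could emit for a single quantized layer: weights and biases baked in as compile-time constants, all buffers statically sized on the stack, and no heap allocation, which suits `no_std` bare-metal targets. This is a hypothetical illustration, not MicroFlow's actual generated code or API; the weights, shapes, and function name `dense_relu` are invented for the example.

```rust
// Hypothetical output of a compiler-based TinyML engine: a quantized (i8)
// fully-connected layer with ReLU. Everything is statically allocated, so
// memory usage is known at compile time and no allocator is required.

// Weights and biases fixed at compile time (values are illustrative).
const WEIGHTS: [[i8; 4]; 2] = [[1, -2, 3, -4], [-1, 2, -3, 4]];
const BIASES: [i32; 2] = [5, -3];

/// Fully-connected layer with ReLU, operating entirely on the stack.
fn dense_relu(input: &[i8; 4]) -> [i32; 2] {
    let mut out = [0i32; 2];
    for (o, (row, b)) in out.iter_mut().zip(WEIGHTS.iter().zip(BIASES.iter())) {
        let mut acc = *b;
        for (w, x) in row.iter().zip(input.iter()) {
            // Widen to i32 before multiplying to avoid i8 overflow.
            acc += (*w as i32) * (*x as i32);
        }
        *o = acc.max(0); // ReLU
    }
    out
}

fn main() {
    let input: [i8; 4] = [1, 2, 3, 4];
    println!("{:?}", dense_relu(&input)); // prints "[0, 7]"
}
```

Because the layer shapes are compile-time constants, the optimizer can unroll and inline freely, and the peak RAM footprint is visible in the binary itself, which is what makes this style viable on devices with only a few kB of RAM.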