The plots above report energy and latency measurements for the MobileNet V1 and V2 families running on GAP8. GAP8's processing power scales to cover a broad range of problem sizes (expressed in terms of MAC count and number of parameters). The energy cost per inference rises to a few tens of mJ as accuracy (measured on ImageNet) increases, while frame rates stay above 1 FPS. Application developers can pick the operating point that best matches their particular requirements.
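Choosing an operating point can be automated once a few candidate models have been profiled. The sketch below is purely illustrative: the model names, accuracy, energy, and latency figures are placeholder values, not measured GAP8 results. It picks the lowest-energy model that satisfies accuracy and latency constraints:

```python
# Pick an operating point from profiled candidates.
# NOTE: the numbers below are illustrative placeholders,
# NOT measured GAP8 results.
candidates = [
    # (name, top-1 accuracy %, energy per inference mJ, latency ms)
    ("mobilenet_v1_0.25_128", 41.5, 2.0, 15.0),
    ("mobilenet_v1_0.5_160", 59.1, 8.0, 60.0),
    ("mobilenet_v2_1.0_224", 71.8, 40.0, 300.0),
]

def best_operating_point(candidates, min_accuracy, max_latency_ms):
    """Return the lowest-energy candidate meeting both constraints, or None."""
    feasible = [c for c in candidates
                if c[1] >= min_accuracy and c[3] <= max_latency_ms]
    return min(feasible, key=lambda c: c[2]) if feasible else None

# Example: need at least 55% top-1 accuracy within a 100 ms budget.
print(best_operating_point(candidates, min_accuracy=55.0, max_latency_ms=100.0))
# -> ('mobilenet_v1_0.5_160', 59.1, 8.0, 60.0)
```

The same table-driven approach extends naturally to other constraints, such as a maximum energy budget per inference.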
Try it on your own
Are you interested in building a DL-based, battery-operated application? To make porting highly efficient DL models to GAP8 easy, we have developed an easy-to-use toolset that is part of our GAP SDK. It converts DL models from TFLite format into C code implementing them on GAP8. We even include the GAP platform simulator, which lets you run the examples on your PC without one of our development boards.
A good way to start is to take a look at our image classification examples. After installing the GAP SDK, which includes all the needed tools (NNtool, GAP AutoTiler), you can run several benchmark networks on the GAP platform simulator, simply by:
$ git clone git@github.com:GreenWaves-Technologies/image_classification_networks.git
$ cd image_classification_networks
$ make clean all run platform=gvsoc
The commands above feed a TFLite MobileNet V1 model to our NN toolset, which we call GAPflow, to generate and run C code implementing the inference network. Multiple models have already been ported to the flow, as listed in the repository.
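To give a feel for what such a model-to-C flow does internally, here is a toy sketch of one of its steps: quantizing a float weight tensor to int8 and emitting it as a C array. This is a simplified illustration only, not GAPflow's actual implementation or output format; the function names and the symmetric per-tensor scheme are assumptions made for the example:

```python
# Toy illustration of one step a model-to-C generator performs:
# symmetric per-tensor int8 quantization plus C source emission.
# This is NOT GAPflow's actual code or output format.

def quantize_int8(weights):
    """Quantize a float weight list to (int8 values, scale)."""
    scale = max(abs(w) for w in weights) / 127.0  # assumes not all-zero
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def emit_c_array(name, weights):
    """Render quantized weights as a C int8_t array definition."""
    q, scale = quantize_int8(weights)
    body = ", ".join(str(v) for v in q)
    return (f"/* scale = {scale:.6f} */\n"
            f"static const int8_t {name}[{len(q)}] = {{{body}}};")

print(emit_c_array("conv1_weights", [0.5, -1.27, 0.0, 1.27]))
```

The real flow does far more (graph parsing, operator fusion, memory tiling via the AutoTiler), but every stage follows this pattern: transform the model representation, then emit GAP-ready C.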
GAPflow accelerates the deployment of NNs on GAP processors while ensuring high performance and low energy consumption.
In fact, extremely energy-efficient hardware is only part of the edge intelligence story. The GAPflow toolset helps programmers achieve a short time-to-prototype for DL-based applications by generating GAP-optimized C code from the provided DL model.
You can get more in-depth information on GAPflow from this video tutorial.