Our second-generation GAP9 processor revolutionises embedded machine learning in wearable devices with ultra-low-latency, low-energy inference on images and sounds.

The GAP9 platform gives headroom in both energy and processing power that can be used to develop innovative new features for wearables, with no compromise on area, cost or energy.

Our sophisticated toolset, together with GAP9’s homogeneous architecture and scalable performance, makes development significantly easier.

GAP9 – best-in-class for the new generation of wearables

Best-in-class CNN engine (NE16) for AI-based or AI-driven algorithms

Low-latency, low-energy inference on image or sound

  • Person detection
  • Face detection / identification
  • Speaker detection / identification
  • Voice driven user interface
  • Abnormal sound detection
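The use cases above all reduce to running convolutional layers efficiently, which is what the NE16 engine accelerates in hardware. As a hedged illustration only (plain Python for clarity, not GAP9 SDK or NE16 code), the core operation is a 2D convolution over an input feature map:

```python
# Illustrative sketch: the naive 2D convolution (cross-correlation, as used
# in CNNs) that a hardware CNN engine such as NE16 computes far more
# efficiently. Not GAP9 SDK code.

def conv2d(image, kernel):
    """Valid-mode 2D convolution of a 2D image with a 2D kernel."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    oh, ow = ih - kh + 1, iw - kw + 1
    out = [[0] * ow for _ in range(oh)]
    for y in range(oh):
        for x in range(ow):
            acc = 0
            for ky in range(kh):
                for kx in range(kw):
                    # Multiply-accumulate: the operation CNN accelerators
                    # parallelise across many hardware units.
                    acc += image[y + ky][x + kx] * kernel[ky][kx]
            out[y][x] = acc
    return out
```

A 3×3 kernel slid over a 240×240 image already requires hundreds of thousands of multiply-accumulates per output channel, which is why dedicated hardware matters for latency and energy.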

Ultra-fast time to first image – 100 ms

Massive compute headroom for new features such as neural-network-based audio scene detection and real-time deep noise reduction
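To put "deep noise reduction" in context: neural approaches replace classic DSP baselines such as a simple energy-based noise gate. The sketch below is that classic baseline, in plain Python purely for illustration (not GAP9 code, and not a neural method); it shows the kind of crude frame-level decision that a learned model improves on:

```python
# Illustrative baseline: an energy-based noise gate. Frames whose mean
# energy falls below a threshold are muted. Deep-learning noise reduction
# replaces this hard gate with a learned, per-frequency suppression.

def noise_gate(samples, frame_len=4, threshold=0.01):
    """Zero out frames of `samples` whose mean energy is below `threshold`."""
    out = []
    for i in range(0, len(samples), frame_len):
        frame = samples[i:i + frame_len]
        energy = sum(s * s for s in frame) / len(frame)
        # Keep the frame if it looks like signal, silence it otherwise.
        out.extend(frame if energy >= threshold else [0.0] * len(frame))
    return out
```

Unlike this gate, a neural denoiser can attenuate noise while speech is present, which is why it needs the compute headroom described above.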

State-of-the-art toolchain integrated with widely used development tools

Usable as a co-processor or as the main controller

Tiny WL-CSP package (3.7 × 3.7 mm) to fit small devices