Beyond Ultra-Low Power Claims: Measuring How Many ML Features Always-On Systems Can Truly Support

Session details:

As devices become smarter and more autonomous, always-on intelligence at ultra-low power has emerged as one of the toughest challenges in edge AI. Wearables, hearables, remotes, and sensors must continuously run multiple ML features (not just one model at a time) while staying within sub-milliwatt budgets.

Yet today’s “ultra-low power” claims often fail to reflect real-world conditions. Many measurements highlight only inference cores or idle states, leaving system designers and OEMs without a clear way to evaluate what can actually be supported continuously.

This talk introduces a new perspective on benchmarking, shifting the question from raw performance to system-level capability:

“How many ML features can an always-on system support continuously under a fixed ultra-low power budget?”

We explore this idea through feature bundles, for example:

Audio-2: Wake Word + Glass Break
Audio-5: Wake Word + Voice Command + Acoustic Event Detection + Acoustic Scene Detection + Speaker ID
Multimodal-3: Audio Event + Gesture Recognition + Motion Detection
By framing efficiency in terms of features-per-budget, this approach offers OEMs a practical, transparent way to compare solutions, beyond isolated kernel benchmarks. The goal is to spark discussion toward standardized bundles and metrics that reflect what always-on AI systems truly need to deliver.
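The features-per-budget framing can be sketched as a simple feasibility check. In the sketch below, the bundle names come from the abstract, but the per-feature power figures, the 1 mW budget, and the `bundle_fits` helper are hypothetical placeholders for illustration, not measured values or a proposed standard.

```python
# Hypothetical sketch of the features-per-budget framing.
# Per-feature average power figures (milliwatts) are placeholder
# assumptions for illustration, NOT measured values.
FEATURE_POWER_MW = {
    "wake_word": 0.15,
    "glass_break": 0.10,
    "voice_command": 0.25,
    "acoustic_event_detection": 0.20,
    "acoustic_scene_detection": 0.20,
    "speaker_id": 0.30,
    "gesture_recognition": 0.25,
    "motion_detection": 0.05,
}

# Feature bundles named in the abstract.
BUNDLES = {
    "Audio-2": ["wake_word", "glass_break"],
    "Audio-5": ["wake_word", "voice_command", "acoustic_event_detection",
                "acoustic_scene_detection", "speaker_id"],
    "Multimodal-3": ["acoustic_event_detection", "gesture_recognition",
                     "motion_detection"],
}

def bundle_fits(bundle_name: str, budget_mw: float) -> bool:
    """Return True if the bundle's summed average power stays within budget.

    A real benchmark would measure the whole system running all features
    concurrently (shared memory, DMA, sensor front ends, wake/sleep
    overheads), not just sum per-feature figures; this naive sum is only
    a first-order sketch of the feasibility question.
    """
    total_mw = sum(FEATURE_POWER_MW[f] for f in BUNDLES[bundle_name])
    return total_mw <= budget_mw

for name in BUNDLES:
    print(name, "fits in 1 mW:", bundle_fits(name, budget_mw=1.0))
```

The naive per-feature summation is precisely what system-level bundle benchmarks would replace: concurrent execution shares (or contends for) memory, sensor front ends, and always-on clocks, so measured bundle power rarely equals the sum of isolated measurements.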

Format:
Keynote
Tags:
Wearables, Smart Homes
Track:
Edge, AI, and Data Analytics
Level:
Advanced