SBO project on Resource-Efficient Deep Learning

While deep learning is quickly gaining traction on server platforms, we still struggle to deploy this machine learning method on embedded devices. Researchers reach out to you to join the advisory committee of their project and tackle these challenges.

by KU Leuven

Deep learning has revolutionized many disciplines of signal processing, ranging from advanced image processing for AR/VR, through natural language processing, to visuomotor robot control policies. While the fundamental concepts were already articulated in the 1940s, the major breakthroughs were only realized in the past decade. Besides algorithmic innovations, the recency of these breakthroughs can be attributed to the fact that only now have the necessary computational hardware acceleration and large corpora of labelled data become available. Smartphones and embedded devices, too, are increasingly equipped with dedicated DSPs for embedded AI, often named “tensor processing units”.

Despite the success of deep learning, executing these workloads on resource-constrained embedded devices still poses severe challenges. Such devices simultaneously suffer from extremely tight energy budgets, small memories, and a lack of large (labelled) data sets. Offloading to the cloud, however, brings its own drawbacks, such as the need for constant, high-bandwidth connectivity, increased latency, and privacy and security concerns when processing sensitive data (e.g. health-related data). As a result, innovation is needed towards efficient on-chip learning from small, poorly labelled data sets. These workloads can moreover be tackled either by devices individually, or by devices learning collaboratively, organized in a distributed and possibly hierarchical setting.

Research teams from Ghent University, the University of Antwerp and KU Leuven are now teaming up in an FWO SBO project proposal to tackle exactly these challenges. The project will explore novel ways to relieve the resource bottlenecks of deep learning on embedded devices. We will design efficient training routines and appropriate hardware platforms for continuous on-device learning with little labelled data, and we will look at ways to optimize the distribution of intelligence in connected multi-device environments across resource-rich and resource-constrained devices.

The researchers now reach out to the members of DSP Valley to join the advisory committee of the SBO project.

Committee members will be regularly updated on the research progress, can propose research directions (e.g. via specific use cases or needs), and will be invited to workshops and hands-on tutorials, providing not only employee training but also unique opportunities for networking with other Flemish and Brussels companies. For a company to join the committee, the FWO mandates a limited financial contribution that depends on the size of the company: for an SME, the fee is € 250 per year; for large companies, it is € 1000 per year.

For more information or to indicate your interest, please contact us, preferably before April 15th.