artificial intelligence; deep learning; digital; image sensor; IoT; integrated circuit; neuromorphic engineering; bio-inspired computing; convolutional neural network; smart camera; event camera; sparse computing; power management
Lungu Iulia Alexandra, Aimar Alessandro, Hu Yuhuang, Delbruck Tobi, Liu Shih-Chii (2020), Siamese Networks for Few-Shot Learning on Edge Embedded Devices, in IEEE Journal on Emerging and Selected Topics in Circuits and Systems, 10(4), 488-497.
Pinero-Fuentes Enrique, Rios-Navarro Antonio, Tapiador-Morales Ricardo, Delbruck Tobi, Linares-Barranco Alejandro (2020), Live Demonstration: CNN Edge Computing for Mobile Robot Navigation, in 2020 IEEE International Symposium on Circuits and Systems (ISCAS), Sevilla, Spain. IEEE, New Jersey, USA.
Lungu Iulia Alexandra, Hu Yuhuang, Liu Shih-Chii (2020), Multi-Resolution Siamese Networks for One-Shot Learning, in 2020 2nd IEEE International Conference on Artificial Intelligence Circuits and Systems (AICAS), Genova, Italy. IEEE, New Jersey, USA.
There is increasing opportunity for pervasive sensing in internet of things (IoT) applications. Nearly all modern advances in visual perception are based on deep convolutional neural networks (CNNs), which approach or even surpass human accuracy on specific tasks, but at the cost of many orders of magnitude more energy than biological vision. Current state-of-the-art (SOA) real-time vision systems mainly target high-throughput, high-sample-rate applications in autonomous driving, manufacturing, quality control, and similar domains. These systems combine conventional image sensors with processors that are mainly high-performance graphics processing units (GPUs) adapted from gaming. Because of the large power consumption of such systems, there are currently no integrated solutions on the market for ultra-low power visual perception using deep CNNs. If these were available, they would open up a large number of application areas that are currently out of reach. Pairing a sub-mW vision sensor with a mW-range deep CNN perception processor would enable always-on object detection and localization in small battery-powered devices, allowing intelligent systems to be easily set up and run for extended periods in environments without access to a power supply or a dedicated network infrastructure.

The goal of the VIPS project is to develop an ultra-low power visual perception system for battery-powered or fully self-sustainable applications with visual scene analysis and decision-making ability; its intelligent sensing will be extremely parsimonious in waking up expensive post-processing or communication.

The VIPS system will allow applications that were previously impossible without setting up costly power supply and communication infrastructures. It will enable reliable people counting for building automation, allow active advertisement interaction and eye-contact analysis, and help increase security and safety on public transport platforms in train stations and at pedestrian crossings, without infringing privacy by sending sensitive data to the cloud (everything is processed on board the device). In homes, it could detect falling and fallen elderly people, and could enable smarter robot vacuum cleaners that avoid rooms with people in them, as well as cables, paper, and clothing on the floor. In context-aware surveillance systems such as automatic doors, it could detect pedestrians' intent more intelligently and thus save valuable heating energy. On mobile devices for assisting blind people, it would enable aids such as smart canes.

To successfully accomplish this challenging project, a highly qualified consortium has been established between the Institute of Neuroinformatics (INI) at UZH/ETH, a leader in event sensors and in efficiently accelerating deep neural network inference, and CSEM, which contributes its expertise in the design of highly integrated chips and ultra-low power vision sensors for battery-powered and autonomous applications.
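As a rough illustration of the "parsimonious wake-up" principle described above, the Python sketch below shows a two-stage pipeline in which a cheap, always-on activity detector gates a much more expensive CNN stage. This is a minimal example under assumed parameters, not the VIPS implementation: the names EVENT_THRESHOLD, activity_detector, run_cnn_inference, and process_window are hypothetical, and the CNN stage here is a trivial stand-in for a real mW-range accelerator.

```python
# Illustrative sketch of a gated "wake-up" perception pipeline.
# All names and thresholds are hypothetical, not from the VIPS project.

from typing import Optional

import numpy as np

EVENT_THRESHOLD = 500  # hypothetical: minimum events per window to wake the CNN


def activity_detector(event_counts: np.ndarray) -> bool:
    """Cheap always-on stage: fire only if enough pixel events arrived."""
    return int(event_counts.sum()) >= EVENT_THRESHOLD


def run_cnn_inference(frame: np.ndarray) -> str:
    """Stand-in for the expensive stage (a mW-range CNN accelerator)."""
    # A real system would dispatch `frame` to a hardware CNN accelerator here.
    return "person" if frame.mean() > 0.5 else "background"


def process_window(event_counts: np.ndarray, frame: np.ndarray) -> Optional[str]:
    """Run the expensive CNN only when the cheap detector fires."""
    if not activity_detector(event_counts):
        return None  # system stays in its low-power idle mode
    return run_cnn_inference(frame)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    quiet = rng.poisson(0.001, size=(64, 64))  # almost no events: stay asleep
    busy = rng.poisson(0.5, size=(64, 64))     # scene activity: wake the CNN
    frame = rng.random((64, 64))
    print(process_window(quiet, frame))  # None: the CNN was never woken
    print(process_window(busy, frame))   # label produced by the CNN stage
```

The design point this sketch illustrates is that idle power is bounded by the cheap first stage alone, while the wake-up threshold trades missed detections against unnecessary activations of the expensive stage.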