VLSI architectures for Artificial Intelligence and Machine Learning

Description

Machine Learning (ML) and Artificial Intelligence (AI) applications require increasingly advanced algorithms to extract meaningful information from ever-larger sets of data. While the algorithmic side has recently seen significant advances (for instance through the adoption of deep and convolutional neural networks), these advances often come at the cost of high computational complexity, which hinders straightforward hardware implementation. This activity addresses these challenges by:

  • Proposing architectures that reduce the computational cost of ML and AI algorithms, either through hardware design alone or through joint hardware-algorithm co-design. Examples are the use of heavily reduced-precision weights in DNNs and the minimization of data transfer by moving computation to the edge of the cloud;
  • Modifying big-data processing algorithms to substantially reduce memory requirements and hardware complexity. An example in this direction is the use of suitably modified streaming principal component analysis (PCA) algorithms;
  • Designing methodologies for neural architecture search (NAS) and for co-optimized training and hardware implementation of mixed-precision ML algorithms;
  • Exploiting ML and AI techniques to design efficient, high-performance hardware architectures for image and video coding.
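
The reduced-precision direction mentioned above can be illustrated with a minimal sketch of symmetric post-training weight quantization; this is a generic, common scheme, not necessarily the one adopted in this activity:

```python
import numpy as np

def quantize_weights(w, bits=8):
    # Symmetric uniform quantization: map float weights to signed integers.
    # With bits=8 the result fits in int8, cutting storage by 4x vs float32.
    qmax = 2 ** (bits - 1) - 1
    scale = np.max(np.abs(w)) / qmax          # one scale factor per tensor
    q = np.round(w / scale).astype(np.int8)   # int8 cast assumes bits <= 8
    return q, scale

def dequantize(q, scale):
    # Recover an approximation of the original weights.
    return q.astype(np.float32) * scale

w = np.array([0.5, -1.2, 0.03, 0.9], dtype=np.float32)
q, scale = quantize_weights(w)
w_hat = dequantize(q, scale)
```

The maximum reconstruction error of this scheme is bounded by half the scale step, which is why moderate bit widths often preserve DNN accuracy.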
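
The streaming PCA direction can be illustrated with Oja's rule, a classic single-pass algorithm that estimates the leading principal component of a data stream in O(d) memory; this is a generic sketch, not the specific modified algorithms referred to above:

```python
import numpy as np

def oja_streaming_pc1(samples, lr=0.05, seed=0):
    # Estimate the leading principal component from a stream of d-dimensional
    # samples, keeping only a single d-vector in memory (Oja's rule).
    w = None
    for x in samples:
        if w is None:
            rng = np.random.default_rng(seed)
            w = rng.standard_normal(x.shape[0])
            w /= np.linalg.norm(w)            # start from a random unit vector
        y = w @ x                             # projection onto current estimate
        w += lr * y * (x - y * w)             # Oja's update step
        w /= np.linalg.norm(w)                # renormalize for stability
    return w
```

Because each update touches only one sample, the memory footprint is independent of the stream length, which is the property that makes such algorithms attractive for hardware big-data processing.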

ERC sectors 

  • PE6_1 Computer architecture, embedded systems, operating systems
  • PE6_2 Distributed systems, parallel computing, sensor networks, cyber-physical systems
  • PE6_7 Artificial intelligence, intelligent systems, natural language processing
  • PE6_11 Machine learning, statistical data processing and applications using signal processing (e.g. speech, image, video)
  • PE7_4 (Micro- and nano-) systems engineering

Keywords 

  • Neural network hardware
  • Machine learning
  • Artificial intelligence
  • Video coding