SIG on Green AI

Chair: Kun Wang, University of California, Los Angeles, USA
Vice-Chair: Sabita Maharjan, University of Oslo, Norway
Vice-Chair: Mehdi Bennis, University of Oulu, Finland
Vice-Chair: Neeraj Kumar, Thapar Institute of Engineering and Technology, India
Vice-Chair: M. Cenk Gursoy, Syracuse University, USA

Scope and Objectives

Since 2012, the field of artificial intelligence (AI) has made remarkable progress on a broad range of capabilities, including object recognition, game playing, and machine translation. This progress has been driven by increasingly large and computationally intensive deep learning models. At the same time, AI technologies are widely deployed across the Internet of Things (AI+IoT), e.g., in cloud data centers, edge systems, and network devices, to reduce energy consumption adaptively, compensate for the limited computing power of edge nodes, and deliver more productive and energy-efficient services. Because state-of-the-art deep learning models, as well as cloud data center, edge, and terminal devices, require substantial resources to construct, train, and operate, reducing the computational cost and energy consumption of deep learning models and applications has become a critical issue.

The SIG will focus on issues related to performance efficiency in green AI. On the model side, the main contributors to computational cost are model construction and training. Minimizing computational cost while preserving the accuracy of deep learning models can be achieved by optimizing space and time complexity, and several techniques serve this goal. For example, acceleration algorithms (e.g., FFT-based convolution, sparse-block networks) and optimized tools and libraries can improve the computational efficiency of deep learning models, and the speedups delivered by heterogeneous accelerator chips (e.g., GPUs, FPGAs, ASICs) are immediately tangible. In addition, techniques such as kernel pruning, knowledge distillation, sparsification, quantization, transfer learning, and careful model design are used to compress deep learning models, reducing parameter redundancy, storage footprint, communication bandwidth, and computational complexity, and thereby easing application deployment.
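As a concrete illustration of two of the compression techniques named above, the following minimal sketch applies magnitude-based pruning and post-training dynamic quantization using PyTorch's built-in utilities. The toy model and the 50% sparsity target are illustrative assumptions, not a recommended configuration.

```python
# Minimal sketch: pruning + dynamic quantization in PyTorch.
# The stand-in model and 50% sparsity level are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# A small stand-in model; a real workload would start from a trained network.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(16 * 32 * 32, 10),
)

# 1) Magnitude pruning: zero out the 50% smallest-magnitude weights of the
#    convolution kernel, reducing parameter redundancy.
conv = model[0]
prune.l1_unstructured(conv, name="weight", amount=0.5)
prune.remove(conv, "weight")  # make the induced sparsity permanent

# 2) Post-training dynamic quantization: store Linear weights as int8,
#    cutting storage and bandwidth roughly 4x versus float32.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 3, 32, 32)
print(quantized(x).shape)  # torch.Size([1, 10])
sparsity = (conv.weight == 0).float().mean().item()
print(f"conv kernel sparsity: {sparsity:.0%}")
```

Note that dynamic quantization converts only the Linear layer here; quantizing convolutions typically requires static quantization with calibration data.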

On the application side, cloud data centers are energy-intensive facilities, and edge learning (on edge servers and terminal devices) is another major contributor to computational cost. Data centers produce a significant amount of heat that must be removed by cooling, and their sophisticated decision making, network management, resource optimization, and in-depth knowledge discovery demand substantial computational resources. AI-based technologies such as iCooling, together with learning techniques such as statistical learning, feed-forward neural networks, and deep recurrent neural networks, can promote better decision making and help build greener, more sustainable applications.
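As a hedged sketch of the kind of learning-based decision support mentioned above, the code below trains a small feed-forward network to predict cooling power from data center telemetry. The `CoolingNet` name, the three-feature telemetry vector, and the data itself are all hypothetical; production systems rely on far richer telemetry and on safety constraints not modeled here.

```python
# Hypothetical sketch: a feed-forward network mapping data center telemetry
# (IT load, outside temperature, humidity) to predicted cooling power, which
# an operator could consult when choosing energy-efficient setpoints.
# All data below is synthetic; 'CoolingNet' is an assumed name.
import torch
import torch.nn as nn

class CoolingNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Linear(3, 32), nn.ReLU(),
            nn.Linear(32, 32), nn.ReLU(),
            nn.Linear(32, 1),  # predicted cooling power (kW)
        )

    def forward(self, x):
        return self.layers(x)

torch.manual_seed(0)
# Synthetic telemetry: [IT load (kW), outside temp (C), humidity (%)].
X = torch.rand(1024, 3) * torch.tensor([500.0, 40.0, 100.0])
# Assumed toy relationship: cooling power rises with load and outside temp.
y = 0.3 * X[:, :1] + 2.0 * X[:, 1:2] + 0.05 * X[:, 2:3] + torch.randn(1024, 1)

# Standardize features before training, as is common practice.
X = (X - X.mean(0)) / X.std(0)

model = CoolingNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

for _ in range(300):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()

print(f"training MSE after fitting: {loss.item():.2f}")
```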

Topics

  • Heterogeneous computation methods for green deep learning models in AI
  • Innovative theory, standards, protocols, and strategies for green deep learning models in AI
  • Energy-efficient cloud systems and large-scale green AI-based applications
  • Green AI-based resource allocation and scheduling for cloud computing
  • Green AI-based parallel computing in mobile edge systems
  • Lightweight cryptography, communication, and mobile applications for AI-based terminals
  • Trade-offs between performance, energy, and other resources in AI-based cloud and edge systems
  • Energy-efficient computing, communication, control, and storage architectures and protocols for green AI applications
  • Green AI in the convergence of cloud computing, edge computing, and terminals
  • AI-enabled sustainable security solutions for smart applications