Keynote Speeches


My Last Dance -- Development and Applications of a Memory-Traffic-Efficient Convolutional Neural Network



  • Youn-Long Lin
    • National Tsing Hua University, Taiwan



  • Date: March 11, 2024
  • Time: 9:20 - 10:30 (UTC+8)
  • Room: 801

Abstract:

In this presentation, I will introduce the design of HarDNet, an efficient and accurate convolutional neural network architecture. The fundamental idea behind its design is to minimize DRAM access, because DRAM traffic is slower and consumes far more energy than fast, inexpensive arithmetic operations. The HarDNet architecture is optimized for speed and energy efficiency, making it well suited to applications such as object detection, semantic segmentation, and medical image segmentation. HarDNet is open source, so anyone may use and modify it. It has been adopted successfully across many fields and countries, including autonomous driving, industrial automation, vehicle safety, environmental monitoring, colonoscopy polyp segmentation, and MRI imaging.
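As background (drawn from the published HarDNet paper, not from this page), the key to HarDNet's low memory traffic is its harmonic connection pattern: within a block, layer k reads from layer k − 2^n for every n ≥ 0 such that 2^n divides k, so most intermediate feature maps can be released early rather than held in DRAM. A minimal sketch of that pattern:

```python
def hardnet_inputs(k):
    """Indices of the layers feeding layer k in a Harmonic Dense Block:
    layer k - 2^n for every n >= 0 with 2^n dividing k (and k - 2^n >= 0).
    Divisibility by 2^n is monotone, so we can stop at the first failure."""
    inputs, n = [], 0
    while (1 << n) <= k and k % (1 << n) == 0:
        inputs.append(k - (1 << n))
        n += 1
    return inputs
```

For example, `hardnet_inputs(8)` yields `[7, 6, 4, 0]`, while `hardnet_inputs(6)` yields only `[5, 4]`; layers that feed few successors can be freed immediately, which is what keeps the memory traffic low.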


Biography:

Dr. Youn-Long Lin, a distinguished figure in Computer Science, earned a PhD from the University of Illinois at Urbana-Champaign in 1987. Over 36 years at National Tsing Hua University, he progressed from Associate Professor to Chair Professor and Department Head, leaving an indelible mark on academia. As Vice President of Research & Development, he steered the institution's research initiatives, and he extended his influence globally as a Guest Professor at Waseda University and Peking University.

A leading researcher, Dr. Lin has expertise spanning physical design automation, high-level synthesis, video codec architecture, and neural network architecture. Beyond academia, he co-founded and served as CTO of Global Unichip Corp, founded Neuchips Corp, and played a pivotal role in founding the Chip Implementation Center (CIC), fostering collaborative innovation.





Big AI for Small Devices



  • Yiran Chen
    • Duke University, USA




  • Date: March 12, 2024
  • Time: 9:20 - 10:30 (UTC+8)
  • Room: 801

Abstract:

As artificial intelligence (AI) transforms industries, state-of-the-art models have exploded in size and capability. Deploying them on resource-constrained edge devices, however, remains extremely challenging: smartphones, wearables, and IoT sensors face tight limits on compute, memory, power, and communication. This gap between demanding AI models and edge hardware capabilities hinders onboard intelligence. In this talk, we will re-examine techniques that bridge this gap and embed big AI on small devices. First, we will boost single-device efficiency via model compression, discussing how the properties of different hardware platforms shape the quantization and pruning strategies for deep neural network (DNN) models, and how accounting for the models' execution process improves actual system throughput and memory usage. Second, we will discuss designs aimed at reducing the communication, computation, and storage overheads of distributed edge AI systems. We will also delve into the underlying design philosophies and their evolution toward efficient, scalable, robust, and secure edge computing systems.
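To make the compression step concrete, here is a minimal, illustrative sketch (not taken from the talk) of the two techniques the abstract names: magnitude pruning followed by symmetric uniform quantization. The function name and parameters are hypothetical, and real hardware-aware pipelines are considerably more sophisticated.

```python
import numpy as np

def prune_and_quantize(w, sparsity=0.5, bits=8):
    """Zero out the smallest-magnitude weights, then uniformly
    quantize the survivors to signed `bits`-bit integers."""
    # Pruning: find the magnitude threshold for the requested sparsity
    # and zero every weight at or below it.
    k = int(sparsity * w.size)
    if k > 0:
        threshold = np.sort(np.abs(w), axis=None)[k - 1]
        w = np.where(np.abs(w) <= threshold, 0.0, w)
    # Symmetric uniform quantization: map [-max|w|, max|w|] onto the
    # signed integer range; `or 1.0` guards the all-zero case.
    scale = np.abs(w).max() / (2 ** (bits - 1) - 1) or 1.0
    q = np.round(w / scale).astype(np.int8)
    return q, scale  # approximate dequantization: q * scale
```

On hardware, the pruned zeros can be skipped and the int8 weights shrink memory traffic roughly fourfold versus float32, which is why the abstract ties these strategies to actual throughput and memory usage rather than to model size alone.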


Biography:

Yiran Chen received his B.S. (1998) and M.S. (2001) from Tsinghua University and his Ph.D. (2005) from Purdue University. After five years in industry, he joined the University of Pittsburgh in 2010 as an Assistant Professor and was promoted to Associate Professor with tenure in 2014, holding the Bicentennial Alumni Faculty Fellowship. He is now the John Cocke Distinguished Professor of Electrical and Computer Engineering at Duke University, where he serves as director of the NSF AI Institute for Edge Computing Leveraging Next-generation Networks (Athena) and of the NSF Industry-University Cooperative Research Center (IUCRC) for Alternative Sustainable and Intelligent Computing (ASIC), and as co-director of the Duke Center for Computational Evolutionary Intelligence (DCEI). His group focuses on new memory and storage systems, machine learning and neuromorphic computing, and mobile computing systems.

Dr. Chen has published 1 book and more than 600 technical publications and has been granted 96 US patents. He has served as an associate editor of more than a dozen international academic periodicals and on the technical and organization committees of more than 70 international conferences, and he is currently the Editor-in-Chief of the IEEE Circuits and Systems Magazine. He has received 11 paper awards (including Ten-Year Retrospective Influential Paper, Outstanding Paper, Best Paper, and Best Student Paper Awards), 2 best poster awards, and 15 best paper nominations from international journals, conferences, and workshops, as well as numerous awards for his technical contributions and professional services, such as the IEEE CASS Charles A. Desoer Technical Achievement Award and the IEEE Computer Society Edward J. McCluskey Technical Achievement Award. He has been a distinguished lecturer of IEEE CEDA and IEEE CAS. He is a Fellow of the AAAS, ACM, and IEEE, and serves as the chair of ACM SIGDA.






Last Modified: December 13, 2023