Keynotes

Every day of the Summer School will open with a plenary keynote talk by a world-renowned speaker.

The Speakers

 

Keynote #1

Random thoughts after 60 years in the trenches

Yale Patt

Location: ETF E1
Date: 3rd June 2024


After 60 years of teaching, I have acquired more than a few thoughts about what we do in computer architecture. Was computer architecture really dead for about 30 years until recently, as some would have you believe? Is research in computer architecture important, or is it scholarship we should be after in the university? When will useful quantum computers ship?

Is machine learning missing a most important piece? Must research results have a serious baseline? Why does better branch prediction accuracy not usually come with a similar IPC improvement? Does economics get in the way of solving some very important problems? My purpose in this talk is to share my thoughts about some of these items. I will also give you my thoughts about what the microprocessor will look like ten years from now, and offer a few suggestions about your time here this week at ETH.

Yale Patt is a teacher at The University of Texas at Austin where he continues to enjoy teaching, research, and consulting more than 60 years after beginning his journey into computer technology. He earned obligatory degrees from reputable universities and has received more than enough awards for his research and teaching. For those who want it, more detail is on his home page: users.ece.utexas.edu/~patt.


 

Keynote #2

Open RISC-V Platforms in the Era of Embodied Foundation Models

Luca Benini

Location: ETF E1
Date: 4th June 2024


The AI revolution is accelerating from its perception-focused origins to the generative era, where foundation models, trained on trillions of mostly unlabeled samples via self-supervision, produce multi-modal outputs such as text, sound, and images. Foundation models are poised to disrupt multiple businesses. However, an even more fundamental disruption will come when we are able to "embody" these models in cars, robots, eyeglasses... To achieve this goal, we need to tackle major challenges in the energy efficiency, safety, security, and real-time predictability of fine-tuned foundation models, while curtailing their sheer computational complexity. In this talk, I will focus on designing hardware and systems for embodied AI, moving from perceptive to generative models and leveraging an open-platform approach, based on RISC-V processors and accelerators, to ensure long-term sustainability, safety, and security.

Prof. Dr. Luca Benini holds the chair of Digital Circuits and Systems at ETH Zurich and is Full Professor at the Università di Bologna. He received a PhD from Stanford University. Prof. Dr. Benini's research interests are in energy-efficient parallel computing systems, smart sensing micro-systems, and machine learning hardware. He has published more than 1000 peer-reviewed papers and five books. Prof. Dr. Benini has won numerous awards, including the 2016 IEEE CAS Mac Van Valkenburg Award, the 2019 IEEE TCAD Donald O. Pederson Best Paper Award, and the 2020 ACM/IEEE A. Richard Newton Award. He is an ERC Advanced Grant winner, a Fellow of both the IEEE and the ACM, and a member of the Academia Europaea.


 

Keynote #3

From Software Programs to Digital Circuits

Lana Josipović

Location: ETF E1
Date: 5th June 2024


High-Level Synthesis (HLS) tools enable programmers to automatically generate hardware designs from high-level software abstractions instead of writing tedious and time-consuming low-level hardware descriptions. However, today's HLS tools are still accessible only to expert users and for particular classes of applications; generating good-quality circuits still requires nontrivial code restructuring and extensive experimentation with the tools. In this talk, I will discuss the challenges and limitations of current HLS approaches. I will outline an alternative HLS technique that overcomes these limitations and achieves high parallelism in general-purpose software applications. Finally, I will share my vision of future advancements in HLS and discuss the role of HLS in designing next-generation hardware systems.

Prof. Dr. Lana Josipović is an assistant professor in the Department of Information Technology and Electrical Engineering at ETH Zurich, where she leads the Digital Systems and Design Automation Group (https://dynamo.ethz.ch/). Her research aims to enable a broad range of programmers to benefit from digital hardware acceleration, and explores synergies across compilers, programming languages, digital hardware design, and computer architecture.


 

Keynote #4

Memory-Centric Computing

Onur Mutlu

Location: ETF E1
Date: 6th June 2024


Computing is bottlenecked by data. Large amounts of application data overwhelm the storage, communication, and computation capabilities of the machines we design today. As a result, the performance, efficiency, and scalability of many key applications are bottlenecked by data movement.

In this talk, we describe three major shortcomings of modern architectures in terms of 1) dealing with data, 2) taking advantage of the vast amounts of data, and 3) exploiting different semantic properties of application data. We argue that an intelligent architecture should be designed to handle data well. We posit that handling data well requires designing architectures based on three key principles: 1) data-centric, 2) data-driven, and 3) data-aware. We give several examples of how to exploit each of these principles to design a much more efficient and higher-performance computing system.

We especially discuss recent research that aims to fundamentally reduce memory latency and energy, and practically enable computation close to data, with at least two promising directions: 1) processing using memory, which exploits the analog operational properties of memory chips to perform massively parallel operations in memory with low-cost changes, and 2) processing near memory, which integrates sophisticated additional processing capability in memory controllers, the logic layer of 3D-stacked memory technologies, or memory chips to enable high memory bandwidth and low memory latency to near-memory logic. We show that both types of architectures can enable orders-of-magnitude improvements in the performance and energy consumption of many important workloads, such as graph analytics, database systems, machine learning, video processing, climate modeling, and genome analysis. We discuss how to enable adoption of such fundamentally more intelligent architectures, which we believe are key to efficiency, performance, and sustainability.
We conclude with some research opportunities in and guiding principles for future computing architecture and system designs.

Some related resources are mentioned below.

A 2-page overview paper from DAC 2023:
"Memory-Centric Computing"
https://arxiv.org/abs/2305.20000

A short vision paper from DATE 2021:
"Intelligent Architectures for Intelligent Computing Systems"
https://arxiv.org/abs/2012.12381

A longer survey of modern memory-centric computing ideas & systems (updated August 2022):
"A Modern Primer on Processing in Memory"
https://arxiv.org/abs/2012.03112

Onur Mutlu is a Professor of Computer Science at ETH Zurich. He is also a Visiting Professor at Stanford University and a faculty member at Carnegie Mellon University, where he previously held the Strecker Early Career Professorship. His current broader research interests are in computer architecture, systems, hardware security, and bioinformatics. A variety of techniques he, along with his group and collaborators, has invented over the years have influenced industry and have been employed in commercial microprocessors and memory/storage systems. He obtained his PhD and MS in ECE from the University of Texas at Austin and BS degrees in Computer Engineering and Psychology from the University of Michigan, Ann Arbor. He started the Computer Architecture Group at Microsoft Research (2006-2009), and held various product and research positions at Intel Corporation, Advanced Micro Devices, VMware, and Google. He received various honors for his research, including the Persistent Impact Prize of the Non-Volatile Memory Systems Workshop, the Intel Outstanding Researcher Award, the IEEE High Performance Computer Architecture Test of Time Award, the IEEE Computer Society Edward J. McCluskey Technical Achievement Award, the ACM SIGARCH Maurice Wilkes Award, and a healthy number of best paper or "Top Pick" paper recognitions at various computer systems, architecture, and security venues. He is an ACM Fellow, IEEE Fellow, and an elected member of the Academy of Europe. His computer architecture and digital logic design course lectures and materials are freely available on YouTube (https://www.youtube.com/OnurMutluLectures), and his research group makes a wide variety of software and hardware artifacts freely available online (https://safari.ethz.ch/). For more information, please see his webpage at https://people.inf.ethz.ch/omutlu/.
 


 

Keynote #5

Accelerating Design Innovation

Andrew Kahng

Location: ETF E1
Date: 7th June 2024

Andrew Kahng

Over the past week, this Summer School has provided an exciting introduction to future directions for hardware systems. And, it has demonstrated how open source can lower barriers and democratize both education and innovation in IC design and computer architectures. This talk will focus on why open-source design automation (i.e., EDA) technology must be considered an essential element – and accelerant – of design innovation. The intrinsic benefits of open source (availability, flexibility, scalability, transparency and reproducibility, cost, low friction, etc.) are well known, and become magnified as regions seek to develop self-sustaining research and innovation ecosystems. Additional value stems from the following. (1) Commercial EDA tools exist only where there are established markets. Hence, open-source EDA must support early “pathfinding” explorations, e.g., of system-technology co-optimizations, 3D/heterogeneous integration, or multiphysics design closure methodologies. (2) Realizing the promise of AI and machine learning across hardware system designs, design methods, and design tools will require open-source EDA to unblock data generation, public sharing of datasets and benchmarks, availability of foundation models, and more. (3) Open source is also the basis for algorithm and optimization innovation toward an “EDA 2.0” that will one day achieve results in less time (multithreading, GPU) and better results in the same time (cloud-native, sampling), likely in concert with AI/ML methods that steer and orchestrate the design process. The talk will conclude with thoughts on how a global community might foster and sustain open-source EDA technology to obtain these benefits.

[Related talks and papers can be found at https://vlsicad.ucsd.edu]

Andrew B. Kahng is Distinguished Professor of CSE and ECE and holder of the endowed chair in high-performance computing at UC San Diego. He was visiting scientist at Cadence (1995-97) and founder/CTO at Blaze DFM (2004-06). He is coauthor of 3 books and over 500 journal and conference papers, holds 35 issued U.S. patents, and is a fellow of ACM and IEEE. He was the 2019 Ho-Am Prize laureate in Engineering. He has served as general chair of DAC, ISPD and other conferences, and from 2000 to 2016 served as international chair/co-chair of the International Technology Roadmap for Semiconductors (ITRS) Design and System Drivers working groups. He was the principal investigator of the U.S. DARPA "OpenROAD" project (https://theopenroadproject.org/) from June 2018 to December 2023, and until August 2023 served as principal investigator and director of "TILOS" (https://tilos.ai/), a U.S. NSF AI Research Institute.
