Keynote speakers

We are pleased to present the ISPDC 2020 keynote speakers:

Wolfgang Dreyer

Sr. Principal Product Manager, HPC
Focus: Automotive, Aerospace, O&G
Wolfgang.dreyer [at] oracle com

HPC on Oracle Cloud Infrastructure: Experience on-premise-level performance and control with Oracle Cloud Infrastructure

Abstract: Research communities and commercial organizations continue to move business applications and databases to the cloud to reduce the cost of purchasing, updating, and maintaining on-premise hardware and software. However, most high-performance computing (HPC) workloads remain on premise, mainly because these jobs have specialized needs that traditional cloud offerings can't handle. And yet, running HPC jobs in the cloud makes a lot of sense.

Oracle Cloud Infrastructure (OCI) has been architected for high-performance business and, especially, technical workloads. OCI offers exceptional performance, security, and control for today's most demanding HPC workloads, helping our customers solve complex problems faster. In this talk we will refer to major European research projects that have already run their workloads on compute and AI resources in OCI.

Bio

Wolfgang Dreyer has dedicated his career to HPC solutions since 1991. He started in the hardware business, where he delivered early Top500 systems around the globe, and was involved in HPC projects for academia as well as industry, including automotive, life sciences, aerospace, and the movie industry.
He understood early that the success and scaling of HPC solutions depend on new software algorithms and on the effort required to make codes aware of new hardware capabilities. His time at Allinea, in the software business for optimizer and debugger tools, gave him a close view of the need for software improvements in the HPC market.
Joining Microsoft's HPC business in 2006, Wolfgang supported the first attempts at the transition from on-premise HPC to HPC in the cloud. From 2012, at Adaptive Computing and Rescale, he held two positions dedicated to scaling and cloud computing, working with enterprises, academia, and startups on hybrid, IaaS, and SaaS solutions and on customers' transitions towards the cloud. Since 2018, Wolfgang has been bringing this experience to Oracle Cloud Infrastructure.

Alexey Lastovetsky

University College Dublin (UCD)
alexey.lastovetsky [at] ucd ie

Optimal Matrix Partitioning for Data Parallel Computing on Hybrid Heterogeneous Platforms

Abstract: In this talk, we study the problem of partitioning a matrix over a small number of heterogeneous devices, which is crucial for data parallel dense linear algebra and other applications with similar communication patterns on modern hybrid heterogeneous compute nodes. The objective is to balance the load of heterogeneous devices while minimising the communication cost. While the problem has been solved for the case of two processors, it is still open for three and more processors. The state-of-the-art solution for the case of three processors uses a communication cost function, which does not accurately account for the total amount of data moved between processors and therefore leaves the question of its global optimality open.

In the presented work, we propose a cost function, which accurately represents the total amount of data moved between processors. Then, we formulate and solve the problem of optimal partitioning of a square computational domain, using this accurate communication cost function. Finally, we propose and implement an original experimental methodology for accurate measurement of the communication time of parallel applications on hybrid heterogeneous servers, integrating multi-core CPUs and various accelerators. We apply this methodology to experimental validation of our mathematical result.
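
For readers unfamiliar with the setting, the sketch below (ours, not the speaker's) shows only the load-balancing half of the problem for an assumed node with one CPU and two GPUs: the columns of an N x N matrix are split among the devices in proportion to their relative speeds. The communication-minimising partition shapes that the talk addresses are more subtle than this one-dimensional split; the function and device speeds here are hypothetical.

    #include <stdio.h>

    /* Illustrative only: split the n columns of an n x n matrix among p
     * devices in proportion to their relative speeds, a common baseline
     * for load balancing on heterogeneous nodes. */
    static void partition_columns(int n, int p, const double *speed, int *cols)
    {
        double total = 0.0;
        for (int i = 0; i < p; i++)
            total += speed[i];

        int assigned = 0;
        for (int i = 0; i < p; i++) {
            /* Proportional share, rounded down; remainder handled below. */
            cols[i] = (int)(n * speed[i] / total);
            assigned += cols[i];
        }
        /* Distribute any leftover columns round-robin. */
        for (int i = 0; assigned < n; i = (i + 1) % p, assigned++)
            cols[i]++;
    }

    int main(void)
    {
        /* Assumed example: CPU plus two GPUs with relative speeds 1 : 4 : 5. */
        const double speed[3] = {1.0, 4.0, 5.0};
        int cols[3];
        partition_columns(1000, 3, speed, cols);
        for (int i = 0; i < 3; i++)
            printf("device %d gets %d columns\n", i, cols[i]);
        return 0;
    }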

Bio

Alexey Lastovetsky holds a PhD degree from the Moscow Aviation Institute and a Doctor of Science (Habilitation) degree from the Russian Academy of Sciences. His main research interests include algorithms, models, and programming tools for high performance heterogeneous computing. He has published over 175 technical papers in refereed journals, edited books, and international conferences, and released over a dozen software tools under the GPL license. He authored the monographs “Parallel computing on heterogeneous networks” (Wiley, 2003) and “High performance heterogeneous computing” (with J. Dongarra, Wiley, 2009). He has supervised 20 PhD students to completion. He has helped organise, in various capacities, over 300 international conferences. He has won over a dozen individual research grants, including three prestigious Science Foundation Ireland Investigator awards, of a total value of more than 3.5 million euro. He is currently Associate Professor in the School of Computer Science at University College Dublin (UCD). At UCD, he is also the founding Director of the Heterogeneous Computing Laboratory (http://hcl.ucd.ie/).

Giuseppe Lipari

University of Lille
giuseppe.lipari [at] univ-lille fr

Resource reservations for hard and soft real-time applications

Abstract: In this talk, I will discuss the problem of scheduling real-time applications on open execution platforms and operating systems like Linux.

First, I will give a brief introduction to the area of real-time scheduling and analysis. Traditionally, real-time scheduling was restricted to safety critical applications (avionics, aerospace, nuclear plants, etc.), where off-line analysis techniques and static design patterns are used to ensure that every software function is executed within precise time bounds. However, hardware platforms are becoming increasingly complex and difficult to analyse. At the same time, we observe an increasing need for the integration of functionalities with different levels of criticality on the same hardware platform. As a consequence, software developers are confronted with the many sources of unpredictability of modern hardware and software platforms, which makes it difficult to analyse the temporal behaviour of modern applications.

I will then present the resource reservation framework, a set of techniques for implementing timing isolation in operating systems. One implementation of these techniques is now available in the Linux kernel with the SCHED_DEADLINE scheduler. Finally, I will present examples of the usage of this scheduler on some hard and soft real-time applications. I will conclude with an overview of recent developments in real-time scheduling for heterogeneous hardware platforms, like GPUs and other accelerators.
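
As a concrete illustration of the reservation interface, the following minimal sketch, adapted from the pattern documented in the sched_setattr(2) manual page, asks the Linux kernel to give the calling thread 10 ms of CPU time every 100 ms under SCHED_DEADLINE. The parameter values are assumptions for illustration; the call needs a kernel with SCHED_DEADLINE enabled and suitable privileges.

    #define _GNU_SOURCE
    #include <stdint.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/syscall.h>
    #include <linux/sched.h>   /* SCHED_DEADLINE */

    /* struct sched_attr is not exposed by glibc, so it is declared here,
     * following the sched_setattr(2) manual page. */
    struct sched_attr {
        uint32_t size;
        uint32_t sched_policy;
        uint64_t sched_flags;
        int32_t  sched_nice;
        uint32_t sched_priority;
        uint64_t sched_runtime;
        uint64_t sched_deadline;
        uint64_t sched_period;
    };

    int main(void)
    {
        /* Reserve 10 ms of CPU time every 100 ms for this thread (values in ns). */
        struct sched_attr attr = {
            .size           = sizeof(attr),
            .sched_policy   = SCHED_DEADLINE,
            .sched_runtime  = 10000000ULL,
            .sched_deadline = 100000000ULL,
            .sched_period   = 100000000ULL,
        };

        if (syscall(SYS_sched_setattr, 0 /* this thread */, &attr, 0) != 0) {
            perror("sched_setattr");
            return 1;
        }

        /* Periodic real-time work would run here under the reservation. */
        return 0;
    }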

Bio

Giuseppe Lipari received his master’s degree in computer engineering from the University of Pisa, Italy, in 1996, and his PhD degree in computer engineering from the Scuola Superiore Sant’Anna of Pisa, Italy, in 2000. He was Associate Professor at the Scuola Superiore Sant’Anna from 2004 to 2014. Since 2014 he has been Full Professor of Computer Science at the University of Lille, France. He is a member of the Embedded Real-Time Adaptative system Design and Execution (Émeraude) team of the “Centre de Recherche en Informatique, Signal et Automatique” (CRIStAL) of Lille.
He has been awarded the grade of IEEE Fellow “for contributions to reservation-based real-time scheduling”. His research interests include real-time systems, real-time operating systems, scheduling algorithms, embedded systems, wireless sensor networks, and static analysis of programs.

Mitsuhisa Sato

Team Leader, Architecture Development Team
Deputy Project Leader, FLAGSHIP 2020 project
Deputy Director, RIKEN Center for Computational Science (R-CCS)
msato [at] riken jp

The Supercomputer “Fugaku” and Arm SVE-enabled A64FX processor for energy efficiency and sustained application performance

Abstract: We have been carrying out the FLAGSHIP 2020 project to develop the Japanese next-generation flagship supercomputer, Post-K, recently named “Fugaku”. In the project, we have designed a new Arm SVE-enabled processor, called A64FX, as well as the system, including the interconnect, with our industry partner, Fujitsu. The processor is designed for energy efficiency and sustained application performance. In the design of the system, “co-design” between the system and applications is key to making it efficient and high-performance. We analyzed a set of target applications provided by application teams in order to design the processor architecture and decide many architectural parameters. “Fugaku” is being installed and is scheduled to be put into operation for public service around 2021. In this talk, several features and some preliminary performance results of the “Fugaku” system and the A64FX manycore processor will be presented, together with an overview of the system.
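
To give a flavour of what “Arm SVE-enabled” means in practice, here is an illustrative, vector-length-agnostic DAXPY kernel written with SVE intrinsics. This is our own sketch, not code from the project: the same binary adapts to whatever vector length the hardware provides, such as the 512-bit vectors of the A64FX, and it requires an SVE-capable compiler and CPU.

    #include <arm_sve.h>   /* Arm C Language Extensions for SVE */
    #include <stdint.h>

    /* Vector-length-agnostic y = a*x + y using SVE intrinsics. */
    void daxpy_sve(int64_t n, double a, const double *x, double *y)
    {
        for (int64_t i = 0; i < n; i += svcntd()) {
            /* Predicate covering the elements remaining in this iteration. */
            svbool_t pg = svwhilelt_b64(i, n);
            svfloat64_t vx = svld1_f64(pg, &x[i]);
            svfloat64_t vy = svld1_f64(pg, &y[i]);
            vy = svmla_n_f64_x(pg, vy, vx, a);   /* vy + vx * a */
            svst1_f64(pg, &y[i], vy);
        }
    }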

Bio

Mitsuhisa Sato received his M.S. and Ph.D. degrees in information science from the University of Tokyo in 1984 and 1990, respectively. From 2001, he was a professor in the Graduate School of Systems and Information Engineering, University of Tsukuba, and he served as director of the Center for Computational Sciences, University of Tsukuba, from 2007 to 2013. Since October 2010, he has led the programming environment research team at RIKEN’s Advanced Institute for Computational Science (AICS), since renamed R-CCS. Since 2014, he has also led the architecture development team in the FLAGSHIP 2020 project at RIKEN to develop the Japanese flagship supercomputer “Fugaku”. Since 2018, he has been a Deputy Director of the RIKEN Center for Computational Science. He is a Professor (Cooperative Graduate School Program) and Professor Emeritus of the University of Tsukuba.

Michael S. Woodacre

HPE Fellow/VP, CTO HPC/MCS,
Hewlett Packard Enterprise
woodacre [at] hpe com

The Memory-Driven Computing Journey

Abstract: This talk will cover the use of memory technology within computing platforms, from building large memory systems to use in neuromorphic computing. What use cases can benefit from novel applications of Memory-Driven Computing techniques? How do the latest industry moves to create open memory fabrics (including Gen-Z and Compute Express Link) impact system design? Where do high-bandwidth memories and non-volatile memories play relative to each other? And how can they change the way we build systems to deal with the challenge of processing and gaining knowledge and insights from all the data we are collecting at exponentially growing rates?
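
As a small, hypothetical taste of treating storage as memory, the sketch below memory-maps a file assumed to live on a persistent-memory (DAX-mounted) filesystem and updates it with ordinary loads and stores. The path and sizes are illustrative assumptions, not anything specific to the talk.

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        const size_t len = 4096;
        /* Assumed path on a DAX-mounted persistent-memory filesystem. */
        int fd = open("/mnt/pmem/example.dat", O_RDWR | O_CREAT, 0644);
        if (fd < 0 || ftruncate(fd, len) != 0) { perror("open/ftruncate"); return 1; }

        char *p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }

        /* Update the data in place with ordinary stores ... */
        strcpy(p, "memory-driven computing");
        /* ... then make the update durable. */
        msync(p, len, MS_SYNC);

        munmap(p, len);
        close(fd);
        return 0;
    }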

Bio

Mike is an HPE Fellow/VP and CTO for the HPC and Mission Critical Systems (MCS) Business Unit at HPE. He is also the platform architect for the HPE Superdome Flex. Mike joined HPE with the SGI acquisition in 2016, after 26 years at SGI where he was Chief Engineer for scalable systems. He leads the MCS business unit’s focus on Memory-Driven Computing, collaborating with groups across HPE including Hewlett Packard Labs, HPE-IT, and Pointnext services. His interests include high-performance processor and interconnect design, memory/storage technologies, and the design of cache-coherence protocols for ccNUMA systems. Previous projects at HPE/SGI include the architecture and design of the Superdome Flex, MC990-X/UV300, UV2000, UV1000, Altix 4700/Altix 3000, Origin3000, and Origin2000 system families. Mike also worked on microprocessor design at MIPS Computer Systems (R4000) and INMOS (Transputer). He holds a B.Sc. in Computer Systems Engineering from the University of Kent, Canterbury, UK, and has been granted multiple US patents in the field of computer system architecture.