Keynotes
Talk: In Situ Data Analytics for Next Generation Molecular Dynamics Workflows
Dr. Michela Taufer, University of Tennessee Knoxville
Abstract: Molecular dynamics (MD) simulations study important phenomena in chemistry, materials science, molecular biology, and drug design. They are among the most common simulations on petascale machines, and they will likely be equally common on exascale machines as those systems become more widely available. Next-generation supercomputers will have dramatically higher performance than current systems, generating more data that needs to be analyzed (i.e., more and longer molecular dynamics trajectories). The coordination of data generation and analysis cannot rely on manual, centralized approaches as it does now.
This talk presents an interdisciplinary approach to tackle the data challenges of MD simulations. Through the creation of novel data analytics algorithms for in situ data analysis of relevant structural molecular properties, the definition of MD-based machine learning (ML) techniques to automatically identify the molecular domains where the properties reside at runtime, and the integration of both algorithms and techniques into MD workflows at the extreme scale, we revolutionize data generation and analysis. By harnessing knowledge from MD simulations in situ, we transform MD workflows on next-generation supercomputers, enabling the workflows to steer MD simulations to more promising areas of the simulation space, identify the data that should be written to disk in underprovisioned parallel file systems, and index data for retrieval and postsimulation analysis.
Bio: Michela Taufer is an ACM Distinguished Scientist and holds the Jack Dongarra Professorship in High Performance Computing in the Department of Electrical Engineering and Computer Science at the University of Tennessee Knoxville (UTK). She earned her undergraduate degrees in Computer Engineering from the University of Padova (Italy) and her doctoral degree in Computer Science from the Swiss Federal Institute of Technology or ETH (Switzerland). From 2003 to 2004 she was a La Jolla Interfaces in Science Training Program (LJIS) Postdoctoral Fellow at the University of California San Diego (UCSD) and The Scripps Research Institute (TSRI), where she worked on interdisciplinary projects in computer systems and computational chemistry. Taufer has a long history of interdisciplinary work with scientists. Her research interests include software applications and their advanced programmability in heterogeneous computing (i.e., multi-core platforms and GPUs); cloud computing and volunteer computing; and performance analysis, modeling and optimization of multi-scale applications. She has been serving as the principal investigator of several NSF collaborative projects. She also has significant experience in mentoring a diverse population of students on interdisciplinary research. Taufer’s training expertise includes efforts to spread high-performance computing participation in undergraduate education and research as well as efforts to increase the interest and participation of diverse populations in interdisciplinary studies.
Talk: Cache-Aware Roofline Model: Performance, Power and Energy Efficiency
Dr. Leonel Sousa, INESC-ID, Instituto Superior Técnico, Universidade de Lisboa
Abstract: In this talk, we will introduce the Cache-aware Roofline Model (CARM) and expose its basic principles when modelling the performance upper bounds of a processor. We will also discuss our recent research contributions in extending the model's insightfulness with application-driven CARM, as well as in applying the CARM principles to model power-consumption and energy-efficiency upper bounds. We will show how Intel® Advisor relies on the CARM implementation and how it can be used to detect execution bottlenecks and provide useful hints on which types of optimizations to apply in order to fully exploit device capabilities.
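As background for the talk (this formula is the standard roofline bound that CARM builds on, not material taken from the abstract), attainable performance is capped either by peak compute throughput or by memory bandwidth times arithmetic intensity:

```latex
% Classical roofline upper bound on attainable performance F_a at
% arithmetic intensity I (flops per byte). CARM's distinguishing
% feature is counting bytes as seen by the core, which yields one
% such roofline per memory level (L1, L2, L3, DRAM).
F_a(I) = \min\left(F_{\mathrm{peak}},\; B \cdot I\right),
\qquad
I = \frac{\#\,\mathrm{flops}}{\#\,\mathrm{bytes\ transferred}}
```

An application whose intensity I places it under the sloped (bandwidth) part of the curve is memory-bound at that level; under the flat part, it is compute-bound.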
Bio: Leonel Sousa is Professor ("Professor Catedrático") in the Electrical and Computer Engineering Department (DEEC) of Instituto Superior Técnico (IST), Universidade de Lisboa, in Portugal, and a Senior Researcher of INESC-ID, a non-profit research institute affiliated with IST. His research interests include high-performance and parallel computing, and architectures for general-purpose and specialized processors. He is a Fellow of the IET (FIET, 2013), a Distinguished Scientist of the ACM (2015), a Senior Member of the IEEE (2004), and a Member of IFIP WG10.3 on concurrent systems.
Talk: HPC Cloud: challenges, opportunities, and efforts on bringing HPC to the masses
Dr. Marco Netto, IBM Research Brazil Lab
Abstract: HPC workloads have evolved drastically over the last years by embracing applications from big data and AI. An interesting fact is that resource-allocation issues from the user's perspective are still here. Users struggle to size resources for their jobs and to estimate when those jobs will complete. With HPC cloud, a new scenario has emerged, not only due to the new AI workloads, but also due to all the cloud technologies and capabilities available for resource management, such as containers, Kubernetes, serverless computing, and elasticity. Such a shift in this area requires rethinking how resource-demanding applications are managed and consumed, including the economics that can make HPC cloud financially sustainable for both users and cloud providers. This talk covers challenges and opportunities around HPC cloud and some projects on simplifying the use of HPC for end users.
Bio: Marco Netto is a Research Manager of the Intelligent Cloud Technologies group at IBM Research Brazil and an IBM Master Inventor. He has more than 20 years of experience in resource management for distributed systems. He leads multidisciplinary teams in a very dynamic environment to create novel technologies mixing Cloud, Artificial Intelligence, and High Performance Computing. He is co-inventor of 100+ patent applications (60+ granted) in several areas, including Cloud, AI, HPC, Data Analytics, IoT, UX, and DevOps. At IBM he has contributed to projects on workload migration to the cloud, autoscaling for big data applications, and digital agriculture. His current efforts are around simplifying the use of HPC resources via Jupyter notebooks for Deep Learning model training users. Marco has published more than 40 scientific publications and is an active reviewer for several top scientific journals, including ACM Computing Surveys, IEEE Transactions on Parallel and Distributed Systems, and the Journal of Parallel and Distributed Computing. He obtained his Ph.D. in Computer Science at the University of Melbourne, Australia, in 2010. Marco is an IEEE Senior Member and a member of the IBM Academy of Technology.
Talk: Cache Efficient Computing
Dr. Sartaj Sahni, University of Florida, Gainesville
Abstract: Although data caches were introduced in the mid-sixties to hide the growing gap between processor and memory speeds, few algorithm designers account for the presence of caches in modern computers. The focus of algorithm design remains operation counts, while memory accesses are largely ignored. This talk will explore the impact that caches have on the performance of applications. We shall demonstrate the effectiveness of reorganizing computations so as to reduce the number of cache misses, thereby reducing the number of memory accesses. Applications such as Nussinov's RNA folding and the Value Iteration method for reinforcement learning will be used for illustrative purposes.
Bio: Sartaj Sahni is a Distinguished Professor of Computer and Information Sciences and Engineering at the University of Florida. He is also a member of the European Academy of Sciences, a Fellow of IEEE, ACM, AAAS, and Minnesota Supercomputer Institute, and a Distinguished Alumnus of the Indian Institute of Technology, Kanpur. In 1997, he was awarded the IEEE Computer Society Taylor L. Booth Education Award “for contributions to Computer Science and Engineering education in the areas of data structures, algorithms, and parallel algorithms”, and in 2003, he was awarded the IEEE Computer Society W. Wallace McDowell Award “for contributions to the theory of NP-hard and NP-complete problems”. Dr. Sahni was awarded the 2003 ACM Karl Karlstrom Outstanding Educator Award for “outstanding contributions to computing education through inspired teaching, development of courses and curricula for distance education, contributions to professional societies, and authoring significant textbooks in several areas including discrete mathematics, data structures, algorithms, and parallel and distributed computing.” Dr. Sahni has published over three hundred research papers and written 15 texts. His research publications are on the design and analysis of efficient algorithms, parallel computing, interconnection networks, design automation, and medical algorithms. He is a past Editor-in-Chief of ACM Computing Surveys.
Talk: Managing Resource in Edge Ecosystems: Challenges and Open Issues
Dr. Albert Zomaya, University of Sydney, Sydney
Abstract: Recent technological trends such as Industry 4.0 have introduced new challenges that push the limits of current computer and networking architectures. They demand the connection of thousands, if not millions, of sensors and mobile devices, coupled with optimized operations, to automate various processes inside factories. This has led to the new era of the Internet of Things (IoT), where lightweight (possibly mobile) devices are envisaged to send vital information to cloud data centres (mobile and fixed infrastructure) for further processing and decision making.
Current cloud computing systems, however, are not able to efficiently digest and process information collected from IoT devices under strict response-time requirements, for two main reasons: (1) the round-trip delay between IoT devices and the processing engines of the cloud could exceed an application's threshold, and (2) network links to cloud resources could be clogged when IoT devices flush data in an uncoordinated fashion. Fog and Edge Computing are two solutions that address both of these problems. Though designed to alleviate the same problem, they have fundamental differences that make adopting one more applicable than the other.
This talk will overview the practical concerns of exploiting Edge Computing to realize today's IoT implementations by tackling the most important obstacles that hinder their adoption. First, producing applicable network (fixed and mobile) latency models that capture all elements of IoT platforms. Second, building a holistic Edge ecosystem to orchestrate the various inter-related layers of IoT platforms, including connectivity, big-data analytics, and workload optimization. Third, proposing viable solutions that can actually be implemented in IoT-based applications such as vehicular networks, preventative maintenance, health, and energy, to name a few.
Bio: Albert Y. Zomaya is Chair Professor of High-Performance Computing & Networking in the School of Computer Science and Director of the Centre for Distributed and High-Performance Computing at the University of Sydney. To date, he has published >600 scientific papers and articles and is (co-)author/editor of >30 books. A sought-after speaker, he has delivered >250 keynote addresses, invited seminars, and media briefings. His research interests span several areas in parallel and distributed computing and complex systems. He is currently the Editor-in-Chief of ACM Computing Surveys and served in the past as Editor-in-Chief of the IEEE Transactions on Computers (2010-2014) and the Founding Editor-in-Chief of the IEEE Transactions on Sustainable Computing (2016-2020).
Professor Zomaya is a decorated scholar with numerous accolades, including Fellowship of the IEEE, the American Association for the Advancement of Science, and the Institution of Engineering and Technology (UK). He is also an Elected Fellow of the Royal Society of New South Wales and an Elected Foreign Member of Academia Europaea. He is the recipient of the 1997 Edgeworth David Medal from the Royal Society of New South Wales for outstanding contributions to Australian Science, the IEEE Technical Committee on Parallel Processing Outstanding Service Award (2011), the IEEE Technical Committee on Scalable Computing Medal for Excellence in Scalable Computing (2011), the IEEE Computer Society Technical Achievement Award (2014), the ACM MSWIM Reginald A. Fessenden Award (2017), and the New South Wales Premier's Prize of Excellence in Engineering and Information and Communications Technology (2019).
Talk: Zissou: Novel Datacenter, Server, and Software Designs using Immersion Cooling
Dr. Ricardo Bianchini, Microsoft Research
Abstract: Chip power has been steadily increasing with the end of Dennard scaling, requiring ever larger cooling infrastructures and server footprints. Motivated by this trend, in project Zissou, we are exploring the use of 2-phase immersion cooling for hyperscale public clouds. Zissou will drastically improve IT cooling capability, unlocking innovation across datacenter, server, and software stacks. For example, by alleviating thermal constraints, Zissou will enable us to densely pack components and servers, improve the performance of disaggregated-resource architectures, and aggressively overclock components. We also expect that it will lower component failure rates, due to its lower and stable operating temperatures. Zissou opens up many interesting research avenues, such as managing the tradeoff between performance, power, and reliability in component overclocking. In this talk, I will introduce Zissou, overview the main areas we are exploring, and discuss some of our initial results on component overclocking. I will conclude with a call to action on devising novel hardware and software for an environment where thermal constraints are relaxed.
Bio: Dr. Ricardo Bianchini is a Distinguished Engineer at Microsoft, where he leads efforts to improve the efficiency and sustainability of the company's online services and datacenters. He also manages the Systems Research Group at Microsoft Research in Redmond. His main research interests include cloud computing, datacenter efficiency, and leveraging machine learning to improve systems. He has published nine award papers and received the CAREER award from the National Science Foundation. He has given several conference keynote talks and served on numerous program committees, including as Program Co-Chair of ASPLOS'18, EuroSys'17, and ICDCS'16. He is an ACM Fellow and an IEEE Fellow.
Talk: Flex
Dr. Leandro Marzulo and Dr. Mauricio Pilla, Google, Inc
Abstract: Ensuring that users in a computation cluster receive strong guarantees about immediate resource availability is surprisingly hard to do without wasting significant amounts of resources. The Google Flex system gives administrators a way to define resource pools that provide a range of strong, statistically backed guarantees based on user and job behavior, while also reducing human effort through automation tied into the Borg cluster manager, the Colossus distributed file system, and many other services. Flex has been widely adopted at Google and has produced significant resource savings.
Bio:
Leandro A. J. Marzulo is a Senior Software Engineer at Google, currently working on the Flex team. He has a D.Sc. and a M.Sc. degree from Universidade Federal do Rio de Janeiro (COPPE/UFRJ) and a B.Sc. from Universidade do Estado do Rio de Janeiro (UERJ). Prior to joining Google (in 2018), he was an Associate Professor at Universidade do Estado do Rio de Janeiro (UERJ), from 2012 to 2018, and a Visiting Scholar at University of Massachusetts Amherst (in 2018). His research interests include dataflow computing, parallel and distributed systems and programming and computer architecture.
Mauricio L. Pilla has been a Senior Software Engineer at Google since 2019, and currently is part of the Borg team and the ‘Connect with a Googler’ program. He earned his B.Sc. and his D.Sc. degrees from the Federal University of Rio Grande do Sul (Brazil). His previous appointments included being an Associate Professor at the Federal University of Pelotas from 2008 to 2019, and an Adjunct Professor at the Catholic University of Pelotas from 2005 to 2008. His current interests include parallel and distributed systems, computer architectures, quantum computing simulation, and scheduling, among others.
Talk: Democratizing the use of supercomputers through scientific portals and Quantum Computing
Dr. Genaro Costa, Atos
Abstract: Supercomputers are not new; the Top500 list has been tracking them for more than 28 years now. Most of these machines are installed in research centers, serving a broad range of applications, including the rising demands of new AI workloads. These HPC systems serve many different domains and require significant effort from researchers to map their problems onto the machines. With new technologies, domain specialists need help with application development to get the best performance out of their applications. From the Quantum Computing point of view, that help is even more important. This talk will present what we have done in Brazil to push HPC forward, how we are helping HPC centers ease access to their machines, and how we are handling the adoption of Quantum Computing.
Bio: Genaro Costa is an HPC Distinguished Expert at Atos Bull and manages the Atos R&D Labs at the SENAI-CIMATEC center. He was an Adjunct Professor at UFBA, coordinator of the Interdisciplinary Bachelor in Science and Technology course, and vice-director of the Institute of Humanities, Arts and Sciences Professor Milton Santos. He received his PhD in Informatics from the Universitat Autònoma de Barcelona (UAB). He has managed several innovation projects in partnerships between academia and industry. His research interests are in High-Performance Computing, Machine Learning, Big Data, Performance Models, Prescriptive Analytics, and Quantum Computing.