Plenary Talks

Monday, September 28, 9:00-10:40
Sokolniki Hall

Conference Opening
Top50, Top500 and the Supercomputer World

Vladimir Voevodin, Moscow State University

Current Trends in High Performance Computing [PDF]
Jack Dongarra, University of Tennessee, Oak Ridge National Laboratory, USA

In this talk we examine how high performance computing has changed over the last ten years and look toward the future in terms of trends. These changes have had, and will continue to have, a major impact on our software.

Development of Domestic Supercomputer Technologies at RFNC-VNIIEF
V.P. Solovyev, R.M. Shagaliev, A.N. Grebennikov, RFNC-VNIIEF

The Inevitable End of Moore’s Law beyond Exascale will Result in Data and HPC Convergence
Satoshi Matsuoka, Tokyo Institute of Technology, Japan

The so-called "Moore's Law," by which processor performance increases exponentially, roughly by a factor of 4 every 3 years, is slated to end within a 10-15 year timeframe as VLSI lithography reaches its limits, combined with other physical factors. Because per-transistor power is becoming largely constant, continued performance growth must be sought by means other than raising the clock rate or the number of floating-point units on a chip, i.e., other than increasing FLOPS. The promising new parameter in place of transistor count is the anticipated growth in storage capacity and bandwidth, driven by device, architectural, and packaging innovations: DRAM-alternative non-volatile memory (NVM) devices, 3-D memory and logic stacking evolving from through-silicon vias to direct silicon stacking, and next-generation terabit optics and networks. The overall effect is that increasing computational intensity, as advocated today, will no longer yield performance gains; exploiting memory capacity and bandwidth will instead be the right methodology. However, such a shift in the compute-vs-data tradeoff would not be an exact return to the old vector days, since other physical factors such as latency will not change. Performance modeling that accounts for this fundamental architectural change in the post-Moore era will therefore become important, as it could lead to disruptive alterations in how computing systems, both hardware and software, evolve toward the future.
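
As a rough, self-contained illustration of the compute-versus-bandwidth tradeoff the abstract describes, the sketch below applies the standard roofline bound; the peak-FLOPS and bandwidth figures are invented placeholders, not numbers from the talk:

```python
# Roofline-style bound: a kernel's attainable rate is capped either by the
# peak FLOP rate or by memory bandwidth times its arithmetic intensity.
# Both machine parameters below are invented placeholders.

PEAK_FLOPS = 2.0e12   # peak floating-point rate, FLOP/s (hypothetical)
PEAK_BW = 2.0e11      # peak memory bandwidth, bytes/s (hypothetical)

def attainable(intensity):
    """Upper bound for a kernel with `intensity` FLOP per byte moved."""
    return min(PEAK_FLOPS, PEAK_BW * intensity)

# Low-intensity kernels are bandwidth-bound; past the ridge point
# (PEAK_FLOPS / PEAK_BW = 10 FLOP/byte), extra intensity buys nothing.
for name, intensity in [("stream-like", 0.25), ("stencil", 1.0), ("dgemm-like", 32.0)]:
    rate = attainable(intensity) / 1e12
    print(f"{name:12s} {intensity:6.2f} FLOP/byte -> {rate:.2f} TFLOP/s")
```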


Monday, September 28, 11:10-13:00
Sokolniki Hall

Not by FLOPS Alone... On Memory in Supercomputers and the Development of Its Technologies [PDF]
Andrey Slepukhin, T-Platforms

RSC's Leading Solutions for HPC and Data Centers
Alexey Shmelev, RSC

Intel Architecture and Technologies for High-Performance Computing. Strategy and Tactics of Building Solutions
Nikolay Mester, Intel

HP Supercomputers, from Small to Large
Vyacheslav Elagin, Hewlett-Packard

Overview of the Tianhe-2 System and Applications [PDF]
Yutong Lu, National University of Defense Technology, China

The peak performance of a supercomputer such as Tianhe-2 is increased dramatically by using the latest heterogeneous architecture and high-speed interconnect. This talk will discuss how we design and support the scalability-centric system and the applications running on Tianhe-2. An outlook on the next generation of the Tianhe system will also be given.
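
For context, the often-cited peak figure for Tianhe-2 follows from simple node-level arithmetic. The sketch below uses the node configuration commonly reported in the literature, treated here as an assumption rather than an official specification:

```python
# Back-of-envelope peak estimate for a Tianhe-2-style system, using the
# node configuration commonly reported for the machine (an assumption
# here, not an official spec): 16,000 nodes, each with 2 Xeon E5-2692
# CPUs (12 cores @ 2.2 GHz, 8 DP FLOP/cycle) and 3 Xeon Phi 31S1P
# coprocessors (57 cores @ 1.1 GHz, 16 DP FLOP/cycle).

cpu = 2 * 12 * 2.2e9 * 8      # two CPUs per node, FLOP/s
phi = 3 * 57 * 1.1e9 * 16     # three coprocessors per node, FLOP/s
node = cpu + phi              # ~3.43 TFLOP/s per node

nodes = 16_000
print(f"per-node peak: {node / 1e12:.3f} TFLOP/s")
print(f"system peak:   {nodes * node / 1e15:.1f} PFLOP/s")  # ~54.9 PFLOP/s
```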


Tuesday, September 29, 9:00-10:40
Sokolniki Hall

Russian Microprocessors of the Elbrus Architecture Line for Servers and Supercomputers [PDF]
A.K. Kim, I.N. Bychkov, V.Yu. Volkonsky, F.A. Gruzdov, S.V. Semenikhin, V.V. Tikhorsky, V.M. Feldman, MCST JSC

Highly-Productive HPC on Modern Vector Supercomputers: Present and Future [PDF]
Hiroaki Kobayashi, Tohoku University, Japan

In this talk, we will present our HPC activities at the Cyberscience Center, Tohoku University. We have been running large-scale vector supercomputers for more than 30 years and developing highly productive applications for them in science and engineering, not only for academia but also for industry. We will also discuss our recent research on the architecture design of future vector-parallel systems.

Do You Know What Your I/O Is Doing? [PDF]
William Gropp, University of Illinois Urbana-Champaign, USA

Even though supercomputers are typically described in terms of their floating-point performance, science applications also need significant I/O performance across the entire science workflow: reading input data, writing simulation output, and conducting analysis across years of simulation data. This talk presents recent data on I/O usage at several supercomputing centers and what it suggests about the challenges and open problems of I/O on HPC systems.
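
In the spirit of the title, the quickest way to start knowing what your I/O is doing is simply to time it. The single-process sketch below only illustrates the idea; serious HPC I/O analysis relies on parallel I/O libraries and tracing tools rather than a loop like this:

```python
# Minimal single-process I/O probe: time a buffered write (forced to disk
# with fsync) and a subsequent read. Real HPC I/O studies use parallel
# I/O (e.g., MPI-IO) and tracing tools; this only shows the basic idea.
import os
import time

SIZE = 128 * 1024 * 1024          # 128 MiB test file
data = os.urandom(SIZE)
path = "io_probe.bin"

t0 = time.perf_counter()
with open(path, "wb") as f:
    f.write(data)
    f.flush()
    os.fsync(f.fileno())          # push data to storage, not just page cache
t1 = time.perf_counter()
print(f"write: {SIZE / (t1 - t0) / 1e6:.1f} MB/s")

t2 = time.perf_counter()
with open(path, "rb") as f:
    f.read()                      # likely served from the page cache
t3 = time.perf_counter()
print(f"read:  {SIZE / (t3 - t2) / 1e6:.1f} MB/s")

os.remove(path)
```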

Advances in HPX Runtime Implementation and Application [PDF]
Thomas Sterling, Indiana University, USA

The HPC community has made significant progress in advancing the state of the art of runtime system software, improving operational efficiency and scalability, at least for classes of dynamic adaptive computational algorithms. This presentation will describe and discuss recent results that expose both the opportunities and the challenges of the current release of the HPX-5 runtime system software, version 1.2, including working examples of applications.
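
HPX-5 itself is a C library with its own API; purely as a language-neutral illustration of the dynamic, adaptive task parallelism such runtimes target, here is a small work-queue sketch in Python (not HPX code) in which tasks spawn finer-grained subtasks at run time:

```python
# Work-queue sketch of dynamic, adaptive task parallelism: each task
# integrates sin(x) over an interval and, if the result looks too coarse,
# spawns two finer-grained subtasks at run time. Plain Python futures,
# not the HPX-5 API.
from concurrent.futures import FIRST_COMPLETED, ThreadPoolExecutor, wait
import math

def refine(a, b):
    """Trapezoid rule on [a, b]; split the interval if it is too coarse."""
    mid = (a + b) / 2
    whole = (b - a) * (math.sin(a) + math.sin(b)) / 2
    halves = ((mid - a) * (math.sin(a) + math.sin(mid)) / 2
              + (b - mid) * (math.sin(mid) + math.sin(b)) / 2)
    if abs(whole - halves) < 1e-9:
        return halves, []                 # accurate enough: no new tasks
    return None, [(a, mid), (mid, b)]     # spawn two finer subtasks

total = 0.0
with ThreadPoolExecutor(max_workers=4) as pool:
    pending = {pool.submit(refine, 0.0, math.pi)}
    while pending:
        done, pending = wait(pending, return_when=FIRST_COMPLETED)
        for fut in done:
            value, subtasks = fut.result()
            if value is not None:
                total += value
            pending |= {pool.submit(refine, a, b) for a, b in subtasks}

print(f"integral of sin on [0, pi]: {total:.6f}")  # exact value is 2
```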


Tuesday, September 29, 11:10-13:00
Sokolniki Hall

NVIDIA GPUs for Computing and Visualization [PDF]
Anton Dzhoraev, NVIDIA

Paving the Road to Exascale
Michael Kagan, Mellanox

Modern Dell Platforms for HPC
Mikhail Orlenko, Dell

Innovative Huawei Solutions for HPC
Konstantin Nakhimovsky, Huawei

Why Exascale will not Appear Without Runtime Aware Architectures? [PDF]
Mateo Valero Cortés, Barcelona Supercomputing Center, Spain

In recent years, the traditional ways of keeping hardware performance growing at the rate predicted by Moore's Law have vanished. When uni-cores were the norm, hardware design was decoupled from the software stack thanks to a well-defined Instruction Set Architecture (ISA). This simple interface allowed developers to write applications without worrying much about the underlying hardware, while computer architects proposed techniques to aggressively exploit Instruction-Level Parallelism (ILP) in superscalar processors. Current multi-cores are designed as simple symmetric multiprocessors on a chip. While these designs compensate for the stagnation of clock frequency, they face multiple problems in terms of power consumption, programmability, resilience, and memory. The solution is to give more responsibility to the runtime system and let it collaborate tightly with the hardware; the runtime has to drive the design of future multi-core architectures. In this talk, we introduce an approach toward a Runtime-Aware Architecture (RAA), a massively parallel architecture designed from the runtime's perspective.
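
As a toy illustration of what "giving more responsibility to the runtime" can mean, the sketch below lets tasks declare the data they read and write, and a minimal scheduler derives the execution order from those dependencies, in the style popularized by task-based models such as OmpSs (this is illustrative Python, not BSC code):

```python
# Toy dataflow "runtime": tasks declare what they read and write, and the
# scheduler derives execution order from those data dependencies rather
# than from program order. The `writes` field documents intent; readiness
# is inferred from which data have already been produced.

tasks = {
    # name:   (reads,       writes, action)
    "init_a": ([],          ["a"],  lambda d: d.update(a=2)),
    "init_b": ([],          ["b"],  lambda d: d.update(b=3)),
    "mul":    (["a", "b"],  ["c"],  lambda d: d.update(c=d["a"] * d["b"])),
    "report": (["c"],       [],     lambda d: print("c =", d["c"])),
}

data, finished = {}, set()
while len(finished) < len(tasks):
    for name, (reads, writes, action) in tasks.items():
        # a task is ready once every datum it reads has been produced
        if name not in finished and all(r in data for r in reads):
            action(data)
            finished.add(name)
```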

Uncertainty in Clouds: Challenges of Efficient Resource Provisioning
Andrei Tchernykh, Uwe Schwiegelshohn, Vassil Alexandrov and El-Ghazali Talbi

Clouds differ from previous computing environments in that they introduce continuous uncertainty into the computational process. This uncertainty becomes the main hassle of cloud computing, bringing additional challenges to both end users and resource providers. It requires waiving habitual computing paradigms, adapting current computing models, and designing novel resource management strategies to handle uncertainty effectively.
Despite extensive research on uncertainty in computational biology, decision making in economics, and other fields, the study of uncertainty in cloud computing is limited. Most works examine uncertainty phenomena in users' perceptions of the qualities, intentions, and actions of providers, privacy, security, and availability.
We discuss several major sources of uncertainty in clouds: dynamic elasticity, dynamic performance changes, virtualization, loose coupling of applications to the infrastructure, and many others. Exact knowledge of the system is unattainable: parameters such as effective processor speed, the number of available processors, and actual bandwidth change over time. The elastic scaling process strongly affects QoS, but adds yet another factor of uncertainty.
The manner in which service provisioning can be done depends not only on the service properties and required resources, but also on the users sharing the resources at the same time, in contrast to dedicated resources governed by a queuing system.
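
To make one of the listed uncertainty sources concrete, the sketch below treats effective processor speed as a random variable, so the makespan of even a fixed greedy schedule becomes a distribution rather than a number; all workload and noise parameters are invented for illustration:

```python
# Monte-Carlo sketch of one uncertainty source named above: effective
# processor speed fluctuates, so the makespan of even a fixed greedy
# schedule is a distribution, not a number. Workload and noise parameters
# are invented for illustration.
import random

JOBS = [8.0, 5.0, 3.0, 2.0]    # job sizes in normalized work units
PROCS = 2                      # processors with nominal speed 1.0

def makespan(speed_noise=0.3):
    """Greedy least-loaded scheduling under multiplicative speed noise."""
    loads = [0.0] * PROCS
    for work in JOBS:
        speed = max(0.1, random.gauss(1.0, speed_noise))  # uncertain speed
        p = loads.index(min(loads))                       # least-loaded proc
        loads[p] += work / speed
    return max(loads)

random.seed(42)
samples = sorted(makespan() for _ in range(10_000))
print(f"median makespan:  {samples[len(samples) // 2]:.2f}")
print(f"95th percentile:  {samples[int(0.95 * len(samples))]:.2f}")
```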