IEEE Computer Society Technical Committees on Computer Architecture & Distributed Processing

IPDPS 2021 Advance Program

To be held virtually; return here for details.


This page lists all 18 workshops that are part of the IPDPS 2021 program. Click on the workshop of interest – nine Monday workshops at the top of the page and nine Friday workshops at the bottom – to learn more about each workshop and its planned program of speakers and papers. The link to the platform for attending workshops will be available via the conference virtual platform. The PhD Forum will hold a live event on Monday, with details to follow.

The Main Conference program that follows includes four keynote talks and 22 technical sessions of contributed papers, including the plenary Best Papers session on Wednesday. Each paper in the technical sessions will have a 15-minute live presentation, which will remain available on the virtual platform for 30 days following the event. Those papers, as well as all of the papers in the workshops, will be published in the online proceedings distributed to all registrants prior to the start of the conference. In addition to the live sessions, authors of papers in the main conference have been invited and encouraged to upload a full (25-minute) video presentation of their work that will be available to all registrants for asynchronous viewing. (For authors of papers in the main conference, the upload instructions will be posted on the Author Resources page.)

The virtual platform for the IPDPS 2021 program will be hosted by the Computer Society. All registered attendees will get a link for connecting to the IPDPS 2021 virtual conference platform.

Authors who have corrections should send email giving full details.

MONDAY - 17 May 2021




Visit individual workshop websites at the links shown



Heterogeneity in Computing Workshop



Reconfigurable Architectures Workshop



High Performance Computational Biology



Graphs, Architectures, Programming, and Learning



NSF/TCPP Workshop on Parallel and Distributed Computing Education



High-level Parallel Programming Models and Supportive Environments



Accelerators and Hybrid Emerging Systems



Parallel / Distributed Combinatorics and Optimization



Advances in Parallel and Distributed Computational Models

Monday 17 May

Time to be announced

IPDPS 2021 PhD Forum


2:00 PM PDT

Career Panel

2:50 PM PDT

Lightning Round

4:00-4:30 PM PDT

Breakout Sessions


TUESDAY - 18 May 2021


7:00 AM PDT

Welcome to IPDPS 2021: Conference Organizers

7:30-8:30 AM PDT


Title: A Tale of Two C’s: Convergence and Composability
Session Chair: Karen Karavanic


Ilkay Altintas
San Diego Supercomputer Center

8:30-10:00 AM PDT

Parallel Technical
Sessions 1, 2, 3, & 4

SESSION 1: Performance

Session Chair: Allen D. Malony


Correlation-wise Smoothing: Lightweight Knowledge Extraction for HPC Monitoring Data
Alessio Netti, Daniele Tafani, Michael Ott, and Martin Schulz


Dancing in the Dark: Profiling for Tiered Memory
Jinyoung Choi, Sergey Blagodurov, and Hung-Wei Tseng


Noise-Resilient Empirical Performance Modeling with Deep Neural Networks
Marcus Ritter, Alexander Geiß, Johannes Wehrstein, Alexandru Calotoiu, Thorsten Reimann, Torsten Hoefler, and Felix Wolf


SYMBIOSYS: A Methodology for Performance Analysis of Composable HPC Data Services
Srinivasan Ramesh, Allen D. Malony, Philip Carns, Robert B. Ross, Matthieu Dorier, Jerome Soumagne, and Shane Snyder


Accelerating Distributed-memory Autotuning via Statistical Analysis of Execution Paths
Edward Hutter and Edgar Solomonik



SESSION 2: Linear Algebra

Session Chair: Jiajia Li


Optimizing Memory-Compute Colocation for Irregular Applications on a Migratory Thread Architecture
Thomas B. Rolinger, Christopher D. Krieger, and Alan Sussman


TileSpMV: A Tiled Algorithm for Sparse Matrix-Vector Multiplication on GPUs
Yuyao Niu, Zhengyang Lu, Meichen Dong, Zhou Jin, Weifeng Liu, and Guangming Tan


Leveraging PaRSEC Runtime Support to Tackle Challenging 3D Data-Sparse Matrix Problems
Qinglei Cao, Yu Pei, Kadir Akbudak, George Bosilca, Hatem Ltaief, David Keyes, and Jack Dongarra


Communication-Avoiding and Memory-Constrained Sparse Matrix-Matrix Multiplication at Extreme Scale
Md Taufique Hussain, Oguz Selvitopi, Aydin Buluç, and Ariful Azad


Characterizing Small-Scale Matrix Multiplication on ARMv8-based Many-Core Architectures
Weiling Yang, Jianbin Fang, and Dezun Dong



SESSION 3: Scheduling

Session Chair: Ningfang Mi


DAG-based Scheduling with Resource Sharing for Multi-task Applications in a Polyglot GPU Runtime
Alberto Parravicini, Arnaud Delamare, Marco Arnaboldi, and Marco D. Santambrogio


CTXBack: Enabling Low Latency GPU Context Switching via Context Flashback
Zhuoran Ji and Cho-Li Wang


Transparent I/O-Aware GPU Virtualization for Efficient Resource Consolidation
Nelson Mimura Gonzalez and Tonia Elengikal


Demystifying GPU UVM Cost with Deep Runtime and Workload Analysis
Tyler Allen and Rong Ge


DUET: A Compiler-Runtime Subgraph Scheduling Approach for Tensor Programs on a Coupled CPU-GPU Architecture
Minjia Zhang, Zehua Hu, and Mingqin Li



SESSION 4: Architecture 1

Session Chair: Shuaiwen Leon Song


CAGC: A Content-aware Garbage Collection Scheme for Ultra-Low Latency Flash-based SSDs
Suzhen Wu, Chunfeng Du, Haijun Li, Hong Jiang, Zhirong Shen, and Bo Mao


NVMe-CR: A Scalable Ephemeral Storage Runtime for Checkpoint/Restart with NVMe-over-Fabrics
Shashank Gugnani, Tianxi Li, and Xiaoyi Lu


Virtual-Link: A Scalable Multi-Producer, Multi-Consumer Message Queue Architecture for Cross-Core Communication
Qinzhe Wu, Jonathan Beard, Ashen Ekanayake, Andreas Gerstlauer, and Lizy K. John


High-Level Synthesis of Parallel Specifications Coupling Static and Dynamic Controllers
Vito Giovanni Castellana, Antonino Tumeo, and Fabrizio Ferrandi


RVMA: Remote Virtual Memory Access
Ryan E. Grant, Michael J. Levenhagen, Matthew G.F. Dosanjh, and Patrick M. Widener

10:00-11:30 AM PDT

Parallel Technical Sessions 5, 6, 7, & 8

SESSION 5: Graph Algorithms

Session Chair: Antonino Tumeo


Performance-Portable Graph Coarsening for Efficient Multilevel Graph Analysis
Michael S. Gilbert, Seher Acer, Erik G. Boman, Kamesh Madduri, and Sivasankaran Rajamanickam


Efficient Distributed Algorithms in the k-machine Model via PRAM Simulations
John Augustine, Kishore Kothapalli, and Gopal Pandurangan


Euler Meets GPU: Practical Graph Algorithms with Theoretical Guarantees
Adam Polak, Adrian Siwiec, and Michał Stobierski


MultiLogVC: Efficient Out-of-Core Graph Processing Framework for Flash Storage
Kiran Kumar Matam, Hanieh Hashemi, and Murali Annavaram


FusedMM: A Unified SDDMM-SpMM Kernel for Graph Embedding and Graph Neural Networks
Md. Khaledur Rahman, Majedul Haque Sujon, and Ariful Azad



SESSION 6: Resilience

Session Chair: Scott Levy


Systemic Assessment of Node Failures in HPC Production Platforms
Anwesha Das, Frank Mueller, and Barry Rountree


Combining XOR and Partner Checkpointing for Resilient Multilevel Checkpoint/Restart
Masoud Gholami and Florian Schintke


Demystifying GPU Reliability: Comparing and Combining Beam, Fault Simulation, and Profiling
Fernando Fernandes dos Santos, Siva Kumar Sastry Hari, Pedro Martins Basso, Luigi Carro, and Paolo Rech


Improving checkpointing intervals by considering individual job failure probabilities
Alvaro Frank, Manuel Baumgartner, Reza Salkhordeh, and André Brinkmann


Covirt: Lightweight Fault Isolation and Resource Protection for Co-Kernels
Nicholas Gordon and John R. Lange



SESSION 7: Systems 1

Session Chair: Matthew Dosanjh


Introducing Application Awareness Into a Unified Power Management Stack
Daniel C. Wilson, Siddhartha Jana, Aniruddha Marathe, Stephanie Brink, Christopher M. Cantalupo, Diana R. Guttman, Brad Geltz, Lowren H. Lawson, Asma H. Al-rawi, Ali Mohammad, Fuat Keceli, Federico Ardanaz, Jonathan M. Eastep, and Ayse K. Coskun


PALM: Progress- and Locality-Aware Adaptive Task Migration for Efficient Thread Packing
Jinsu Park, Seongbeom Park, Myeonggyun Han, and Woongki Baek


Performance Evaluation of Adaptive Routing on Dragonfly-based Production Systems
Sudheer Chunduri, Kevin Harms, Taylor Groves, Peter Mendygral, Justs Zarins, Michele Weiland, and Yasaman Ghadar


Cori: Dancing to the Right Beat of Periodic Data Movements over Hybrid Memory Systems
Thaleia Dimitra Doudali, Daniel Zahka, and Ada Gavrilovska


Nowa: A Wait-Free Continuation-Stealing Concurrency Platform
Florian Schmaus, Nicolas Pfeiffer, Timo Hönig, Jörg Nolte, and Wolfgang Schröder-Preikschat



SESSION 8: Algorithms 1

Session Chair: Dingwen Tao


Efficient Algorithms for Encrypted All-gather Operation
Mehran Sadeghi Lahijani, Abu Naser, Cong Wu, Mohsen Gavahi, Viet Tung Hoang, Zhi Wang, and Xin Yuan


CBNet: Minimizing Adjustments in Concurrent Demand-Aware Tree Networks
Otavio Augusto de Oliveira Souza, Olga Goussevskaia, and Stefan Schmid


Scaling Sparse Matrix Multiplication on CPU-GPU Nodes
Yang Xia, Peng Jiang, Gagan Agrawal, and Rajiv Ramnath


zMesh: Exploring Application Characteristics to Improve Lossy Compression Ratio for Adaptive Mesh Refinement
Huizhang Luo, Junqi Wang, Qing Liu, Jieyang Chen, Scott Klasky, and Norbert Podhorszki


Efficient parallel CP decomposition with pairwise perturbation and multi-sweep dimension tree
Linjian Ma and Edgar Solomonik


WEDNESDAY - 19 May 2021


7:00 AM PDT

IPDPS 2021 Program Committee Report

7:30-8:30 AM PDT


Title: 12 Ways to Fool the Masses with Irreproducible Results

Session Chair: Karen Karavanic


Lorena Barba
George Washington University

8:30-10:30 AM PDT

Plenary Session:
Best Papers

Best Papers


Session Chair: Richard Vuduc


Consistent Lock-free Parallel Stochastic Gradient Descent for Fast and Stable Convergence
Karl Bäckström, Ivan Walulya, Marina Papatriantafilou, and Philippas Tsigas


Redesigning Peridigm on SIMT Accelerators for High-performance Peridynamics Simulations
Xinyuan Li, Huang Ye, and Jian Zhang


Designing High-Performance MPI Libraries with On-the-fly Compression for Modern GPU Clusters
Q. Zhou, C. Chu, N.S. Kumar, P. Kousha, S.M. Ghazimirsaeed, H. Subramoni, and D.K. Panda


xBGAS: A Global Address Space Extension on RISC-V for High Performance Computing
Xi Wang, John D. Leidel, Brody Williams, Alan Ehret, Miguel Mark, Michel A. Kinsy, and Yong Chen

10:30-11:30 AM PDT

Parallel Technical Sessions 9, 10, 11, 12, & 13

SESSION 9: Programming Models & Compilers

Session Chair: Narasinga Miniskar


ARBALEST: Dynamic Detection of Data Mapping Issues in Heterogeneous OpenMP Applications
Lechen Yu, Joachim Protze, Oscar Hernandez, and Vivek Sarkar


Spray: Sparse Reductions of Arrays in OpenMP
Jan Hückelheim and Johannes Doerfert


Code Generation for Room Acoustics Simulations with Complex Boundary Conditions
Larisa Stoltzfus, Brian Hamilton, Michel Steuwer, Lu Li, and Christophe Dubach


Temporal blocking of finite-difference stencil operators with sparse "off-the-grid" sources
George Bisbas, Fabio Luporini, Mathias Louboutin, Rhodri Nelson, Gerard J. Gorman, and Paul H.J. Kelly



SESSION 10: Algorithms 2

Session Chair: Weifeng Liu


Accelerating non-power-of-2 size Fourier transforms with GPU Tensor Cores
Louis Pisha and Łukasz Ligowski


Parallel String Graph Construction and Transitive Reduction for De Novo Genome Assembly
Giulia Guidi, Oguz Selvitopi, Marquita Ellis, Leonid Oliker, Katherine Yelick, and Aydin Buluç


Distributed-Memory K-mer Counting on GPUs
Israt Nisa, Prashant Pandey, Marquita Ellis, Leonid Oliker, Aydın Buluç, and Katherine Yelick


Distributed-memory multi-GPU block-sparse tensor contraction for electronic structure
Thomas Herault, Yves Robert, George Bosilca, Robert J. Harrison, Cannada A. Lewis, Edward F. Valeev, and Jack J. Dongarra



SESSION 11: Systems 2

Session Chair: George Michelogiannakis


Adaptive Spatially Aware I/O for Multiresolution Particle Data Layouts
Will Usher, Xuan Huang, Steve Petruzza, Sidharth Kumar, Stuart R. Slattery, Sam T. Reeve, Feng Wang, Chris R. Johnson, and Valerio Pascucci


Interpreting Write Performance of Supercomputer I/O Systems with Regression Models
Bing Xie, Zilong Tan, Philip Carns, Jeff Chase, Kevin Harms, Jay Lofstead, Sarp Oral, Sudharshan S. Vazhkudai, and Feiyi Wang


Finer-LRU: A Scalable Page Management Scheme for HPC Manycore Architectures
Jiwoo Bang, Chungyong Kim, Sunggon Kim, Qichen Chen, Cheongjun Lee, Eun-Kyu Byun, Jaehwan Lee, and Hyeonsang Eom


Arbitration Policies for On-Demand User-Level I/O Forwarding on HPC Platforms
Jean Luca Bez, Alberto Miranda, Ramon Nou, Francieli Zanon Boito, Toni Cortes, and Philippe Navaux


A hybrid scheduling scheme for parallel loops
Aaron Handleman, Arthur G. Rattew, I-Ting Angelina Lee, and Tao B. Schardl



SESSION 12: Neural Networks

Session Chair: Roshan Dathathri


EAGLE: Expedited Device Placement with Automatic Grouping for Large Models
Hao Lan, Li Chen, and Baochun Li


BiPS: Hotness-aware Bi-tier Parameter Synchronization for Recommendation Models
Qiming Zheng, Quan Chen, Kaihao Bai, Huifeng Guo, Yong Gao, Xiuqiang He, and Minyi Guo


DSXplore: Optimizing Convolutional Neural Networks via Sliding-Channel Convolutions
Yuke Wang, Boyuan Feng, and Yufei Ding


SUPER: SUb-Graph Parallelism for TransformERs
Arpan Jain, Tim Moon, Tom Benson, Hari Subramoni, Sam Ade Jacobs, Dhabaleswar K Panda, and Brian Van Essen



SESSION 13: Federated Learning and Science

Session Chair: Karen L. Karavanic


Scalable Epidemiological Workflows to Support COVID-19 Planning and Response
Dustin Machi, Parantapa Bhattacharya, Stefan Hoops, Jiangzhuo Chen, Henning Mortveit, Srinivasan Venkatramanan, Bryan Lewis, Mandy Wilson, Arindam Fadikar, Tom Maiden, Christopher L. Barrett, and Madhav V. Marathe


Facilitating Data Discovery for Large-scale Science Facilities using Knowledge Networks
Yubo Qin, Ivan Rodero, and Manish Parashar


Optimal Task Assignment for Heterogeneous Federated Learning Devices
Laercio Lima Pilla


Detecting Malicious Model Updates from Federated Learning on Conditional Variational Autoencoder
Zhipin Gu and Yuexiang Yang

11:30 AM-12:00 PM PDT

Break on your own

12:00-1:00 PM PDT


Title: Is Asymptotic Cost Analysis Useful in Developing Practical Parallel Algorithms?

Session Chair: Viktor Prasanna


Guy Blelloch
Carnegie Mellon University
2021 IEEE Computer Society Babbage Award Recipient

THURSDAY - 20 May 2021


7:00 AM PDT

Conference Report on 35th Year & IPDPS 2022

7:30-8:30 AM PDT


Title: From Parallelization to Customization – Challenges and Opportunities

Session Chair: Karen Karavanic


Jason (Jingsheng) Cong
University of California, Los Angeles

8:30-10:00 AM PDT

Parallel Technical Sessions 14, 15, 16, & 17

SESSION 14: Algorithms 3

Session Chair: Albert-Jan Yzelman


High Performance Streaming Tensor Decomposition
Yongseok Soh, Patrick Flick, Xing Liu, Shaden Smith, Fabio Checconi, Fabrizio Petrini, and Jee Choi


Plex: Scaling Parallel Lexing with Backtrack-Free Prescanning
Le Li, Shigeyuki Sato, Qiheng Liu, and Kenjiro Taura


Speculative Parallel Reverse Cuthill-McKee Reordering on Multi- and Many-core Architectures
Daniel Mlakar, Martin Winter, Mathias Parger, and Markus Steinberger


Jigsaw: A Slice-and-Dice Approach to Non-uniform FFT Acceleration for MRI Image Reconstruction
Brendan L. West, Jeffrey A. Fessler, and Thomas F. Wenisch


Rank Position Forecasting in Car Racing
Bo Peng, Jiayu Li, Selahattin Akkas, Takuya Araki, Ohno Yoshiyuki, and Judy Qiu



SESSION 15: Cloud Performance

Session Chair: Omar Aaziz


Towards Practical Cloud Offloading for Low-cost Ground Vehicle Workloads
Yuan Xu, Tianwei Zhang, Jimin Han, Sa Wang, and Yungang Bao


Towards Internet-Scale Convolutional Root-Cause Analysis with DiagNet
Loïck Bonniot, Christoph Neumann, and François Taïani


Astra: Autonomous Serverless Analytics with Cost-Efficiency and QoS-Awareness
Jananie Jarachanthan, Li Chen, Fei Xu, and Bo Li


Max-Stretch Minimization on an Edge-Cloud Platform
Anne Benoit, Redouane Elghazi, and Yves Robert


Decentralized Low-Latency Task Scheduling for Ad-Hoc Computing
Janick Edinger, Martin Breitbach, Niklas Gabrisch, Dominik Schafer, Christian Becker, and Amr Rizk



SESSION 16: Systems 3

Session Chair: Dingwen Tao


Lightweight Function Monitors for Fine-Grained Management in Large Scale Python Applications
Tim Shaffer, Zhuozhao Li, Ben Tovar, Yadu Babuji, TJ Dasso, Zoe Surma, Kyle Chard, Ian Foster, and Douglas Thain


AlphaR: Learning-Powered Resource Management for Irregular, Dynamic Microservice Graph
Xiaofeng Hou, Chao Li, Jiacheng Liu, Lu Zhang, Shaolei Ren, Jingwen Leng, Quan Chen, and Minyi Guo


Deep Reinforcement Agent for Scheduling in HPC
Yuping Fan, Zhiling Lan, Taylor Childers, Paul Rich, William Allcock, and Michael E. Papka


F-Write: Fast RDMA-supported Writes in Erasure-coded In-memory Clusters
Bin Xu, Jianzhong Huang, Qiang Cao, Xiao Qin, and Ping Xie


Argus: Efficient Job Scheduling in RDMA-assisted Big Data Processing
Sijie Wu, Hanhua Chen, Yonghui Wang, and Hai Jin



SESSION 17: GPU Computing

Session Chair: Ang Li


Scaling Out a Combinatorial Algorithm for Discovering Carcinogenic Gene Combinations to Thousands of GPUs
Sajal Dash, Qais Al-Hajri, Wu-chun Feng, Harold R Garner, and Ramu Anandakrishnan


A Multi-GPU Design for Large Size Cryo-EM 3D Reconstruction
Zihao Wang, Xiaohua Wan, Zhiyong Liu, Qianshuo Fan, Fa Zhang, and Guangming Tan


Accelerating Multigrid-based Hierarchical Scientific Data Refactoring on GPUs
Jieyang Chen, Lipeng Wan, Xin Liang, Ben Whitney, Qing Liu, David Pugmire, Nicholas Thompson, Jong Youl Choi, Matthew Wolf, Todd Munson, Ian Foster, and Scott Klasky


Extremely Fast and Energy Efficient One-way Wave Equation Migration on GPU-based heterogeneous architecture
Long Qu, Loris Lucido, Marie Bonnasse-Gahot, Pascal Vezolle, and Diego Klahr


Revisiting Huffman Coding: Toward Extreme Performance on Modern GPU Architectures
Jiannan Tian, Cody Rivera, Sheng Di, Jieyang Chen, Xin Liang, Dingwen Tao, and Franck Cappello


10:00-11:30 AM PDT

Parallel Technical Sessions 18, 19, 20, & 21

SESSION 18: Systems 4

Session Chair: Dong Dai


Rack-Scaling: An Efficient Rack Based Redistribution Method to Accelerate the Scaling of Cloud Disk Arrays
Zhehan Lin, Hanchen Guo, Chentao Wu, Jie Li, Guangtao Xue, and Minyi Guo


Optimizing Performance for Open-Channel SSDs in Cloud Storage System
Xiaoyi Zhang, Feng Zhu, Shu Li, Kun Wang, Wei Xu, and Dengcai Xu


AuTraScale: An Automated and Transfer Learning Solution for Streaming System Auto-Scaling
Liang Zhang, Wenli Zheng, Chao Li, Yao Shen, and Minyi Guo


SNOW Revisited: Understanding When Ideal READ Transactions Are Possible
Kishori M. Konwar, Wyatt Lloyd, Haonan Lu, and Nancy Lynch


QoS-Aware and Resource Efficient Microservice Deployment in Cloud-Edge Continuum
Kaihua Fu, Wei Zhang, Quan Chen, Xin Peng, Wenli Zheng, and Minyi Guo



SESSION 19: Algorithms 4

Session Chair: Ariful Azad


Byzantine Dispersion on Graphs
Anisur Rahaman Molla, Kaushik Mondal, and William K. Moses Jr.


Byzantine Agreement with Unknown Participants and Failures
Pankaj Khanchandani and Roger Wattenhofer


QPR: Quantizing PageRank with Coherent Shared Memory Accelerators
Abdullah T. Mughrabi, Mohannad Ibrahim, and Gregory T. Byrd


Distributed Training of Embeddings using Graph Analytics
Gurbinder Gill, Roshan Dathathri, Saeed Maleki, Madan Musuvathi, Todd Mytkowicz, and Olli Saarikivi


Multiplicative Weights Algorithms for Parallel Automated Software Repair
Joseph Renzullo, Westley Weimer, and Stephanie Forrest



SESSION 20: Deep Neural Networks and Learning

Session Chair: Brian Van Essen


An In-Depth Analysis of Distributed Training of Deep Neural Networks
Yunyong Ko, Kibong Choi, Jiwon Seo, and Sang-Wook Kim


Automatic Graph Partitioning for Very Large-scale Deep Learning
Masahiro Tanaka, Kenjiro Taura, Toshihiro Hanawa, and Kentaro Torisawa


Extending Sparse Tensor Accelerators to Support Multiple Compression Formats
Eric Qin, Geonhwa Jeong, William Won, Sheng-Chun Kao, Hyoukjun Kwon, Sudarshan Srinivasan, Dipankar Das, Gordon E. Moon, Sivasankaran Rajamanickam, and Tushar Krishna


PaSE: Parallelization Strategies for Efficient DNN Training
Venmugil Elango


Efficient Video Captioning on Heterogeneous System Architectures
Horng-Ruey Huang, Ding-Yong Hong, Jan-Jan Wu, Pangfeng Liu, and Wei-Chung Hsu



SESSION 21: Architecture 2

Session Chair: Wu Feng


SRNoC: A Statically-Scheduled Circuit-Switched Superconducting Race Logic NoC
George Michelogiannakis, Darren Lyles, Patricia Gonzalez-Guerrero, Meriam Bautista, Dilip Vasudevan, and Anastasiia Butko


Matrix Engines for High Performance Computing: A Paragon of Performance or Grasping at Straws?
Jens Domke, Emil Vatai, Aleksandr Drozd, Peng Chen, Yosuke Oyama, Lingqi Zhang, Shweta Salaria, Daichi Mukunoki, Artur Podobas, Mohamed Wahib, and Satoshi Matsuoka


Performance Analysis of Scientific Computing Workloads on General Purpose TEEs
Ayaz Akram, Anna Giannakou, Venkatesh Akella, Jason Lowe-Power, and Sean Peisert


High-Performance Spectral Element Methods on Field-Programmable Gate Arrays
Martin Karp, Artur Podobas, Niclas Jansson, Tobias Kenter, Christian Plessl, Philipp Schlatter, and Stefano Markidis


High-Level FPGA Accelerator Design for Structured-Mesh-Based Explicit Numerical Solvers
Kamalavasan Kamalakkannan, Gihan R. Mudalige, Istvan Z. Reguly, and Suhaib A. Fahmy

11:30 AM-12:00 PM PDT

Break on your own
12:00 PM PDT

IPDPS Community Meeting & Conference Awards

FRIDAY - 21 May 2021




Visit individual workshop websites at the links shown



Job Scheduling Strategies for Parallel Processing



Parallel and Distributed Scientific and Engineering Computing



Automatic Performance Tuning



Scalable Networks for Advanced Computing Systems Workshop



Parallel AI and Systems for the Edge



Resource Arbitration for Dynamic Runtimes



Scalable Deep Learning over Parallel And Distributed Infrastructures



High-Performance Storage



Parallel and Distributed Processing for Computational Social Systems


Register Today

Deadline for Lower Fees
Extended to May 14th

Registration Details

