The 41st IPP Symposium

A Scalable Software Transactional Memory System for the Chapel High-Productivity Language

Srinivas Sridharan, University of Notre Dame

Chapel is a parallel language being developed by Cray Inc. as part of the DARPA-led High Productivity Computing Systems (HPCS) program.

Chapel strives to increase productivity by supporting higher levels of abstraction than current parallel programming models while still allowing programmers to incrementally optimize for performance when they choose. In this talk, we present our ongoing work on the design and implementation of a scalable Software Transactional Memory (STM) system for Chapel. The talk is divided into three parts. First, we provide an overview of the key Chapel concepts and briefly describe the overall philosophy of the language. Next, we present the motivation behind Chapel's support for atomic blocks as a means of specifying transactional code segments. Finally, we describe Global Transactional Memory (GTM), Chapel's current Software Transactional Memory system. GTM is the first STM library to support a non-blocking interface, enabling data/metadata operations to execute asynchronously with respect to the caller and exposing new opportunities for parallelism within a transaction. We describe GTM's API and implementation framework, and present some preliminary scalability and performance results. This work is a collaborative effort involving Brad Chamberlain of Cray Inc., Jeffrey Vetter of the Future Technologies Group (ORNL), and Peter Kogge and Srinivas Sridharan of the University of Notre Dame.
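For illustration only, the following is a minimal sketch of how an atomic block of the kind described above might mark a transactional code segment in Chapel. The atomic statement was a planned Chapel feature at the time, and the Account class and transfer procedure here are hypothetical examples, not code from the talk or from GTM itself.

    // Hypothetical example: a simple bank account.
    class Account {
      var balance: int;
    }

    // Move funds between two accounts. The atomic statement (a planned
    // Chapel feature) marks the enclosed reads and writes as a single
    // transaction, which an STM runtime such as GTM could execute with
    // all-or-nothing semantics.
    proc transfer(src: Account, dst: Account, amount: int) {
      atomic {
        src.balance -= amount;
        dst.balance += amount;
      }
    }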

Bio: Srinivas Sridharan is pursuing his Ph.D. in Computer Science and Engineering at the University of Notre Dame, advised by Dr. Peter Kogge. His current research interests include designing STM support for the Chapel language and extending STM to large-scale distributed memory and PGAS systems. In the past, he has worked on implementing scalable synchronization algorithms for Processing-In-Memory architectures and on extending hardware TM to speed up Message Passing Interface (MPI) applications.

Talk slides