CS639 Undergraduate Elective Topics in Computing:

Parallel and Throughput-Optimized Programming

Spring Semester 2020

Course outline

Modern computing platforms offer dramatically increased computational capabilities compared to typical systems of just a few hardware generations ago. Today, a consumer-grade gaming desktop can rival a typical medium-scale cluster from the early 2010s, while a well-equipped single-chassis server may pack computational power commensurate with supercomputers that appeared on the TOP500 list about eight years ago. This dramatic increase in computational density, however, comes with significant new challenges for the programmer who seeks to extract optimal performance: algorithms that exploit the full potential of modern computers must be designed so as to be in sync with parallel programming paradigms, and be more aware than ever of the idiosyncrasies of the underlying computing architecture.

This (new and experimental) course aspires to discuss challenges as well as best practices for the design of high-performance codes, with a depth and scope tailored to be accessible to undergraduates with modest programming experience. Rather than regarding the algorithm being used as an immutable specification (and focusing on the APIs that can help engineer a parallel implementation), we shall test and often alter the algorithmic approach itself in order to create better conditions for a high-efficiency parallel implementation. At the same time, we will attempt to deepen our awareness of the intricate architectural traits of the computing platform to better understand obstacles against, and opportunities for, optimal efficiency.

In the first offering of this Topics course in Spring 2020, the scope will be consciously kept narrow so as to allow for adequate depth and analysis of the topics covered. Specifically, we will emphasize shared-memory, single-chassis multiprocessor systems as our primary target platform (with GPUs garnering some, albeit limited, coverage), and forgo highly heterogeneous platforms or distributed systems such as network-connected multi-node clusters. Our application focus and case studies will also be drawn primarily from numerical algorithms, scientific computing and computational engineering (with higher-level applications such as image processing, computational physics or machine learning naturally emerging from those). We will not explicitly emphasize combinatorial workloads (e.g. search and hashing), algorithms that rely heavily on unstructured, random data access, or applications that do not have high-throughput processing as a central design objective.

Programming paradigms, design practices, and platform considerations to be discussed in class may include:

- Multithreaded programming; Synchronization; Using the OpenMP API
- Instruction Level Parallelism; Vectorization and challenges; SIMD intrinsics
- Memory hierarchy and its implications; Caches; Virtual Memory
- Assessing efficiency, predicting parallel potential, and benchmarking performance
- Understanding the role of compute and/or memory throughput as a limiting factor of performance
- Optimizing data structures for target architecture; Memory allocation and management

Topics from which case studies and sample workloads will be drawn include the following facets of scientific computing:

- Sparse linear algebra; Matrix representations; Iterative solvers for sparse systems
- Dense linear algebra; Matrix/Vector operations; Matrix Factorizations; Using the MKL library
- Grid and stencil computations; Convolutions and their use in neural networks
- Fourier transforms; Eigenvalue problems; PCA and Singular Value Decomposition
- Optimization methods; Least-squares and approximation; Descent methods

General information

Lecture meeting time : Tue/Thu 2:30pm - 3:45pm

Lecture location : Grainger Hall 2080

Instructor : Eftychios Sifakis

Office : Computer Sciences building, Room 6387

Email : sifakis <at> cs <dot> wisc <dot> edu

Prerequisites : Working knowledge of the C programming language is presumed, as well as familiarity with principles of machine organization. CS354 or equivalent is strongly recommended (can be waived with instructor consent). Familiarity with basic linear algebra is desirable, but no formal prerequisite is enforced.

Schedule of lectures

DATE | Lecture Information | Assignments & Reading Materials

Tuesday, January 21st | Introduction to CS639 | Lecture Notes [PDF]

Thursday, January 23rd | Discussion of different types of Concurrency | Review Ruud van der Pas' OpenMP slides [PDF]

Tuesday, January 28th | Introduction to Stencil operations on Grids. Implementation and evaluation of variants of a Laplacian kernel. | Lecture Notes [PDF]

Thursday, January 30th | Stencil operations on Grids (continued discussion of Laplacian kernel). Introduction to Vectorization and SIMD processing. | Lecture Notes [PDF]

Tuesday, February 4th | Introduction to Vectorization and SIMD processing (cont'd). Introduction to Sparse Linear Solvers (using Stencils) | Lecture Notes [PDF]

Thursday, February 6th | Matrix-Free Sparse Solvers (Laplace equation, Part 1) | Lecture Notes [PDF]

Tuesday, February 11th | Code Review: A Matrix-Free Solver for the 3D Poisson Equation (Part I) (factorization of code into kernels, reductions, and parallelization considerations) | Continuation of notes from Feb 4th. Review code in our repository, at subdirectory

Thursday, February 13th | Code Review: A Matrix-Free Solver for the 3D Poisson Equation (Part II) (kernel aggregation, aggregate timing). Introduction to Sparse Matrix Formats | Continuation of notes from Feb 4th.