Course details

Practical Parallel Programming

PPP, Academic year 2024/2025, Summer semester, 5 credits

The course covers the architecture and programming of parallel systems with functional and data parallelism. First, parallel system theory and program parallelization are discussed. A detailed description of the most widespread supercomputing systems, interconnection network topologies and routing algorithms is followed by the architecture of parallel and distributed storage systems. The course then moves on to message-passing programming with the standardized MPI interface. Next, techniques for parallel debugging and profiling are discussed. The last part of the course is devoted to parallel programming patterns and case studies from the areas of linear algebra, physical systems described by partial differential equations, N-body systems and Monte-Carlo methods. 

Guarantor

Course coordinator

Language of instruction

Czech

Completion

Classified Credit

Time span

  • 26 hrs lectures
  • 16 hrs PC labs
  • 10 hrs projects

Assessment points

  • 40 pts written tests
  • 60 pts projects

Department

Lecturer

Instructor

Learning objectives

To become familiar with the architecture of distributed supercomputing systems, their interconnection networks and storage. To get oriented in the parallel systems on the market, be able to assess the communication and computing capabilities of a particular architecture, and to predict the performance of parallel applications. To learn how to write portable programs using standardized interfaces and languages, and how to specify parallelism and process communication. To learn how to use a supercomputer in practice for solving complex engineering problems.
Overview of principles of current parallel system design and of interconnection networks, communication techniques and algorithms. Survey of parallelization techniques of fundamental scientific problems, knowledge of parallel programming in MPI. Knowledge of basic parallel programming patterns. Practical experience with the work on supercomputers, ability to identify performance issues and propose their solution.
Knowledge of capabilities and limitations of parallel processing, ability to estimate performance of parallel applications. Language means for process/thread communication and synchronization. Competence in hardware-software platforms for high-performance computing and simulations.

Prerequisite knowledge and skills

Von Neumann computer architecture, the computer memory hierarchy, cache memories and their organization, and programming in C/C++. Knowledge gained in the PRL and AVS courses.

Study literature

Fundamental literature

  • Pacheco, P.: An Introduction to Parallel Programming. Morgan Kaufmann Publishers, 2011, 392 p., ISBN 978-0123742605.
  • Grama, A., Gupta, A., Karypis, G., Kumar, V.: Introduction to Parallel Computing. Addison-Wesley, 2003, ISBN 978-0201648652.
  • Eijkhout, V.: Parallel Programming in MPI and OpenMP. Web version: https://theartofhpc.com/pcse/
  • Gropp, W., Lusk, E., Skjellum, A.: Using MPI: Portable Parallel Programming with the Message Passing Interface, 2nd Edition. MIT Press, ISBN 978-0262571326.
  • Hennessy, J.L., Patterson, D.A.: Computer Architecture: A Quantitative Approach, 5th Edition. Morgan Kaufmann Publishers, 2012, 1136 p., ISBN 1-55860-596-7.

Syllabus of lectures

  1. Introduction to parallel processing
  2. Parallel programming patterns
  3. Message Passing Interface, point-to-point communications
  4. Collective communications
  5. Communicators and topologies
  6. Datatypes
  7. One-sided communications
  8. MPI-IO
  9. Lustre, HDF5
  10. Parallel code profiling and tracing
  11. Hybrid OpenMP/MPI programming
  12. Interconnection network technologies (InfiniBand), topologies and routing algorithms, switching, flow control
  13. Case studies: fluid dynamics, N-body systems, Monte-Carlo methods

Syllabus of computer exercises

  1. MPI: Point-to-point communications
  2. MPI: Collective communications
  3. MPI: Communicators
  4. MPI: Data types
  5. MPI: One-sided communications
  6. MPI: Parallel IO
  7. HDF5: Parallel IO
  8. Profiling and tracing of parallel applications

Syllabus of projects and individual student work

  • A parallel program in MPI on the supercomputer.

Progress assessment

  • Project: 60 pts
    • 30 pts implementation
    • 20 pts documentation and measurements
    • 10 pts oral defense
  • Written tests: 40 pts
    • 5 short tests, 8 pts each

Credit Requirements:

  • A minimum of 30 points from the project and a minimum of 20 points from the written tests.

Schedule

Day | Type      | Weeks (of lectures)  | Room  | Start | End   | Cap. | Lect. grp | Groups            | Info
Mon | lecture   | 1-10, 12, 13         | D0206 | 11:00 | 12:50 | 154  | 1MIT 2MIT | NBIO NEMB NHPC xx | Jaroš
Wed | comp. lab | 3-13                 | O204  | 08:00 | 09:50 | 20   | 1MIT 2MIT | xx                | Nečasová
Thu | comp. lab | 3-11                 | O204  | 14:00 | 15:50 | 20   | 1MIT 2MIT | xx                | Nečasová
Fri | comp. lab | 3-9, 11-13           | O204  | 08:00 | 09:50 | 20   | 1MIT 2MIT | xx                | Nečasová
Fri | comp. lab | 3-9, 11-13           | O204  | 10:00 | 11:50 | 20   | 1MIT 2MIT | xx                | Nečasová

Course inclusion in study plans
