PARALLEL COMPUTING:


Lectures (28h) - topics:

  1. Lecture 1. (Curs 1.) Introduction: Parallel computers, why parallel computing, application examples, short history, to port or not to port. Performance: overhead, performance metrics for parallel systems
  2. Lecture 2. (Curs 2.) Performance Metrics for Parallel Programs: analytic modeling, execution time, overhead, speedup, efficiency, cost, granularity, scalability, roadblocks, asymptotic analysis (a worked example of the basic metrics follows the lecture list)
  3. Lecture 3. (Curs 3.) Architecture: logical organization - Flynn taxonomy, SIMD, MIMD, communication; physical organization - historical context, shared memory versus distributed memory
  4. Lecture 4. (Curs 4.) Architecture and Models: physical organization - radius-based classification, multicore, clusters, grids, trends; early models, PRAM
  5. Lecture 5. (Curs 5.) Models: dataflow and systolic architectures, circuit model, graph model, LogP and LogGP; message-passing paradigm; levels of parallelism
  6. Lecture 6. (Curs 6.) Implicit Parallelism - Instruction Level Parallelism. Pipeline, Vector and Superscalar Processors
  7. Lecture 7. (Curs 7.) Cache coherence in multiprocessor systems. Interconnection Networks - classification, topologies, evaluating static and dynamic interconnection networks
  8. Lecture 8. (Curs 8.) Communication costs, routing mechanism, mapping techniques, cost-performance tradeoffs
  9. Lecture 9. (Curs 9.) Concurrency and Steps in Parallel Algorithm Design: concurrency in parallel programs, approaches to achieve concurrency, basic layers of software concurrency; tasks, processes and processors, design steps, decomposition - simple examples and classification
  10. Lecture 10. (Curs 10.) Decomposition and Orchestration: recursive, data, exploratory, speculative and hybrid decompositions; orchestration under the data parallel, shared-address space and message passing models
  11. Lecture 11. (Curs 11.) Mapping Techniques for Load Balancing and Methods for Containing Interaction Overheads: mapping classification, schemes for static mapping, schemes for dynamic mapping, maximizing data locality, overlapping computations with interactions, replication, optimized collective interactions
  12. Lecture 12. (Curs 12.) Emulations, Scheduling and Patterns: emulations among architectures, task scheduling problem, scheduling algorithms, load balancing; patterns - task decomposition, data decomposition, group tasks, order tasks, data sharing, design evaluation
  13. Lecture 13. (Curs 13.) Models of Parallel Algorithms and Simple Parallel Algorithms: models - data parallel, task graph, work pool, master-slave, pipeline, hybrids; applying the data parallel model, building-block computations; sorting networks
  14. Lecture 14. (Curs 14.) Parallel computations in numerical analysis: linear equations, nonlinear equations, ordinary differential equations, computational fluid dynamics (slides only in Romanian, book in English).
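
A minimal worked example for the Lecture 2 metrics (the numbers below are illustrative, not taken from the course materials): for a program with serial time T_s and parallel time T_p on p processors,

    speedup     S = T_s / T_p
    efficiency  E = S / p
    cost        C = p * T_p

For instance, T_s = 80 s and T_p = 10 s on p = 16 processors give S = 8, E = 0.5 and C = 160 s; the cost is twice the serial time, so this run is not cost-optimal.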


Labs (14h) - topics:

  1. Lab 1 OpenMP - generalities, simple examples
  2. Lab 2 OpenMP - matrix operations and performance studies (see the OpenMP sketch after this list)
  3. Lab 3 OpenMP - sorting and performance studies
  4. Lab 4 MPI - generalities, simple examples (see the MPI sketch after this list)
  5. Lab 5 MPI - matrix operations and performance studies
  6. Lab 6 MPI - solving linear systems and performance studies
  7. Lab 7 OpenACC - generalities, simple example and matrix operations (see the OpenACC sketch after this list)
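
As a taste of Labs 1-2, a minimal OpenMP sketch in C (an illustrative example under assumed choices - matrix size N, a row-parallel loop, gcc as compiler - not the official lab code). It times a matrix-vector product and can be built with gcc -fopenmp:

    #include <stdio.h>
    #include <omp.h>

    #define N 2000

    static double a[N][N], x[N], y[N];

    int main(void) {
        for (int i = 0; i < N; i++) {          /* simple deterministic input */
            x[i] = 1.0;
            for (int j = 0; j < N; j++) a[i][j] = 1.0;
        }
        double t0 = omp_get_wtime();
        #pragma omp parallel for               /* rows of y are independent */
        for (int i = 0; i < N; i++) {
            double s = 0.0;
            for (int j = 0; j < N; j++) s += a[i][j] * x[j];
            y[i] = s;
        }
        double t1 = omp_get_wtime();
        printf("y[0]=%.1f time=%.4f s threads=%d\n",
               y[0], t1 - t0, omp_get_max_threads());
        return 0;
    }

Rerunning with OMP_NUM_THREADS set to 1, 2, 4, ... yields the raw timings for the speedup and efficiency studies of Labs 2-3.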
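
In the same spirit, a minimal MPI sketch for Lab 4 (again illustrative, with an assumed block partition of the index range, not the official lab code): each process computes a partial sum and MPI_Reduce combines the results on rank 0. Build with mpicc and run, e.g., with mpirun -np 4:

    #include <stdio.h>
    #include <mpi.h>

    #define N 1000000

    int main(int argc, char **argv) {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        long chunk = N / size;                  /* block decomposition */
        long lo = rank * chunk;
        long hi = (rank == size - 1) ? N : lo + chunk;

        double t0 = MPI_Wtime();
        double local = 0.0, total = 0.0;
        for (long i = lo; i < hi; i++)
            local += 1.0;                       /* stand-in for real per-element work */
        MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
        double t1 = MPI_Wtime();

        if (rank == 0)
            printf("total=%.0f processes=%d time=%.4f s\n", total, size, t1 - t0);
        MPI_Finalize();
        return 0;
    }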
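
And for Lab 7, a minimal OpenACC sketch (illustrative only; the compiler and flags are assumptions, e.g. nvc -acc): one offloaded loop with explicit data-movement clauses. Without an OpenACC compiler the pragma is simply ignored and the loop runs serially:

    #include <stdio.h>

    #define N 1000000

    static float x[N], y[N];

    int main(void) {
        for (int i = 0; i < N; i++) { x[i] = 1.0f; y[i] = 2.0f; }

        /* copyin: host-to-device only; copy: both directions */
        #pragma acc parallel loop copyin(x[0:N]) copy(y[0:N])
        for (int i = 0; i < N; i++)
            y[i] = y[i] + 2.0f * x[i];

        printf("y[0]=%.1f (expected 4.0)\n", y[0]);
        return 0;
    }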

Textbooks:

  1. D. Petcu, Parallel Computing (in English) - last updated in 2020

Schedule for the Spring semester of 2023/2024

Weekly meetings:

  1. Friday, in English: 16:20-17:50 Lecture; 18:00-19:30 Lab (AIDC + BD, even weeks)


In English:

Week  Date         16:20       18:00
1     1 Mar 2024   Lecture 1   -
2     8 Mar 2024   Lecture 2   Lab 1
3     15 Mar 2024  Lecture 3   -
4     22 Mar 2024  Lecture 4   Lab 2
5     29 Mar 2024  Lecture 5   -
6     5 Apr 2024   Lecture 6   Lab 3
7     12 Apr 2024  Lecture 7   -
8     19 Apr 2024  Lecture 8   Lab 4
9     26 Apr 2024  Lecture 9   -
-     3 May 2024   -           -
10    10 May 2024  Lecture 10  Lab 5
11    17 May 2024  Lecture 11  -
12    24 May 2024  Lecture 12  Lab 6
13    31 May 2024  Lecture 13  -
14    7 Jun 2024   Lecture 14  Lab 7



Links for labs:

  1. Infrastructure to use: BID Cluster, InfraGrid, Blue Gene

References for lectures:

  1. Kontoghiorghes Erricos J. (ed.), Handbook of Parallel Computing and Statistics, Chapman & Hall/CRC, Taylor & Francis Group, 2006
  2. Wittwer Tobias, An Introduction to Parallel Programming, VSSD, Netherlands, 2006
  3. Czech Zbigniew, Introduction to Parallel Computing, Cambridge University Press, 2016

References for labs:

  1. Karniadakis George E., Kirby Robert M., Parallel Scientific Computing in C++ and MPI, Cambridge University Press, 2003
  2. Chapman Barbara, Jost Gabriele, van der Pas Ruud, Using OpenMP: Portable Shared Memory Parallel Programming (Scientific and Engineering Computation), MIT Press, 2007

Last modification: February 2024