

PARALLEL COMPUTING:


Lectures (28h) - topics:

  1. Lecture 1. Introduction: Parallel computers, why parallel computing, application examples, short history, to port or not to port. Performance: overhead, performance metrics for parallel systems
  2. Lecture 2. Performance Metrics for Parallel Programs: analytic modeling, execution time, overhead, speedup, efficiency, cost, granularity, scalability, roadblocks, asymptotic analysis
  3. Lecture 3. Architecture: logical organization - Flynn taxonomy, SIMD, MIMD, communication; physical organization - historical context, shared memory versus distributed memory
  4. Lecture 4. Architecture and Models: physical organization - radius-based classification, multicore, clusters, grids, trends; early models, PRAM
  5. Lecture 5. Models: dataflow and systolic architectures, circuit model, graph model, LogP and LogGP; message-passing paradigm; levels of parallelism
  6. Lecture 6. Implicit Parallelism - Instruction Level Parallelism. Pipeline, Vector and Superscalar Processors
  7. Lecture 7. Cache coherence in multiprocessor systems. Interconnection Networks - classification, topologies, evaluating static and dynamic interconnection networks
  8. Lecture 8. Communication costs, routing mechanism, mapping techniques, cost-performance tradeoffs
  9. Lecture 9. Concurrency and Steps in Parallel Algorithm Design: concurrency in parallel programs, approaches to achieve concurrency, basic layers of software concurrency; tasks, processes and processors, design steps, decomposition - simple examples and classification
  10. Lecture 10. Decomposition and Orchestration: recursive, data, exploratory, speculative and hybrid decompositions, orchestration under the data parallel, shared-address space and message-passing models
  11. Lecture 11. Mapping Techniques for Load Balancing and Methods for Containing Interaction Overheads: mapping classification, schemes for static mapping, schemes for dynamic mapping, maximizing data locality, overlapping computations with interactions, replication, optimized collective interactions
  12. Lecture 12. Emulations, Scheduling and Patterns: emulations among architectures, task scheduling problem, scheduling algorithms, load balancing; patterns - task decomposition, data decomposition, group tasks, order tasks, data sharing, design evaluation
  13. Lecture 13. Models of Parallel Algorithms and Simple Parallel Algorithms: models - data parallel, task graph, work pool, master-slave, pipeline, hybrids; applying the data parallel model, building-block computations; sorting networks
  14. Lecture 14. Parallel computations in numerical analysis: linear equations, nonlinear equations, ordinary differential equations, computational fluid dynamics (slides in Romanian only).


Labs (14h) - topics:

  1. Lab 1 OpenMP - generalities, simple examples
  2. Lab 2 OpenMP - matrix operations and performance studies
  3. Lab 3 OpenMP - sorting and performance studies
  4. Lab 4 MPI - generalities, simple examples
  5. Lab 5 MPI - matrix operations and performance studies
  6. Lab 6 MPI - solving linear systems and performance studies
  7. Lab 7 OpenACC - generalities, simple example and matrix operations

Textbooks:

  1. D. Petcu, Parallel Computing (in English)

Schedule in Spring semester of 2024/2025

Weekly meetings:

  1. Friday, in English, room 0048: Lecture 16:20-17:50, Lab 18:00-19:30 (AIDC + BD in odd weeks; ISR in even weeks)


In English:

Week  Date         16:20       18:00  Remark
1     28 Feb 2025  Lecture 1   Lab 1
2      7 Mar 2025  Lecture 2   Lab 1  Room change: F108
3     14 Mar 2025  Lecture 3   Lab 2
4     21 Mar 2025  Lecture 4   Lab 2
5     28 Mar 2025  Lecture 5   Lab 3  Room change: F108
6      4 Apr 2025  Lecture 6   Lab 3
7     11 Apr 2025  Lecture 7   Lab 4
8     18 Apr 2025  -           -      Good Friday
-     25 Apr 2025  -           -      Easter break
9      2 May 2025  Lecture 8   Lab 4  Possibly held in advance
10     9 May 2025  Lecture 9   Lab 5
11    16 May 2025  Lecture 10  Lab 5
12    23 May 2025  Lecture 11  Lab 6
13    30 May 2025  Lecture 12  Lab 6
14     6 Jun 2025  Lecture 13  Lab 7



Links for labs:

  1. Infrastructure to use: BID Cluster, MOISE

References for lectures:

  1. Kontoghiorghes Erricos J. ed. Handbook of Parallel Computing and Statistics, Chapman & Hall/CRC, Taylor & Francis Group, 2006
  2. Wittwer Tobias. An Introduction to Parallel Programming, VSSD, Netherlands, 2006
  3. Czech Zbigniew J., Introduction to Parallel Computing, Cambridge University Press, 2016

References for labs:

  1. Karniadakis George E., Kirby Robert M. Parallel Scientific Computing in C++ and MPI, Cambridge University Press, 2003.
  2. Barbara Chapman, Gabriele Jost, Ruud van der Pas, Using OpenMP: Portable Shared Memory Parallel Programming (Scientific and Engineering Computation), MIT Press, 2007

Last modification: February 2025