B024313 - PARALLEL COMPUTING
Main information
Teaching Language
Suggested readings
Prerequisites
Teaching Methods
Type of Assessment
Course program
Academic Year 2016-17
Coorte 2016 - Second Cycle Degree in Computer Engineering
Course year
First year - First Semester
Belonging Department
Information Engineering (DINFO)
Course Type
Single education field course
Scientific Area
ING-INF/05 - INFORMATION PROCESSING SYSTEMS
Credits
9
Teaching Hours
72
Teaching Term
19/09/2016 ⇒ 23/12/2016
Attendance required
No
Type of Evaluation
Final Grade
Lectureship
Teaching Language
All the slides are available in English.
Lectures are delivered in Italian and, on request, in English.
Suggested readings (Search our library's catalogue)
- Principles of Parallel Programming, Calvin Lin and Lawrence Snyder, Pearson
- Parallel Programming for Multicore and Cluster Systems, Thomas Rauber and Gudula Rünger, Springer
- Programming Massively Parallel Processors, David B. Kirk and Wen-mei W. Hwu, Morgan Kaufmann
- An Introduction to Parallel Programming, Peter Pacheco, Morgan Kaufmann
Prerequisites
Knowledge of C/C++ and Java.
Teaching Methods
Lectures (80%) and lab activity (20%)
Type of Assessment
- Mid-term paper and presentation (30% grade)
- Final programming project (70% grade)
For the mid-term paper and presentation each student is assigned a book chapter to be studied.
For the project, students must write a technical report and prepare a presentation that describe the work done and report the performance of the parallel version of the program vs. the sequential one.
Programming projects are chosen by the students from a list proposed by the instructor. They can be developed individually or in pairs.
The goal of these projects is to demonstrate the ability to:
- implement a parallel program using one (6-credit version of the course) or two (9-credit version) of the frameworks and languages presented in the lectures
- evaluate the effects and differences of parallel programming vs. sequential programming
- measure the performance of a parallel program vs. a sequential one
- write a technical report and give a technical presentation.
Course program
Types of parallelism (instruction, transaction, task, thread, memory, ...)
Parallelism models (SIMD, MIMD, SPMD, ...)
CPUs and parallel architectures
Design Patterns for parallel computing (Master/Worker, Message passing)
Parallelization strategies, task parallelism, data parallelism, work sharing
Parallel programming in C/C++ (C++11) and Java
Concurrent data structures
Multi-core processor programming
Shared memory parallelism; OpenMP
Multithreading
Distributed network programming
Distributed memory model; MPI
Hadoop; Apache Storm and the Lambda architecture
Overview of GPGPU, Hardware GPU
CUDA; CUDA compiler and tools
Accessing GPU memory
Streams and multi-GPU
Using CUDA libraries