|Module name (EN): Introduction to Parallel Programming with CUDA|
|Degree programme: Applied Informatics, Bachelor, ASPO 01.10.2017|
|Module code: PIB-CUDA|
|Hours per semester week / Teaching method: 1V+1P (2 hours per week)|
|ECTS credits: 3|
|Mandatory course: no|
|Language of instruction: |
|Assessment: Project work, presentation and written report|
|Applicability / Curricular relevance:
* DFBI-342 Computer Science and Web Engineering, Bachelor, ASPO 01.10.2018, semester 6, optional course, informatics specific
* KI593 (P222-0074) Computer Science and Communication Systems, Bachelor, ASPO 01.10.2014, semester 5, optional course, technical
* KIB-CUDA (P222-0074) Computer Science and Communication Systems, Bachelor, ASPO 01.10.2017, semester 5, optional course, technical
* PIBWI39 (P222-0074) Applied Informatics, Bachelor, ASPO 01.10.2011, semester 5, optional course, informatics specific
* PIB-CUDA Applied Informatics, Bachelor, ASPO 01.10.2017, semester 5, optional course, informatics specific|
|Workload:
30 class hours (= 22.5 clock hours) over a 15-week period.
The total student study time is 90 hours (equivalent to 3 ECTS credits).
This leaves 67.5 hours for class preparation, follow-up work, and exam preparation.|
|Recommended prerequisites (modules): |
|Recommended as prerequisite for: |
|Module coordinator: Dipl.-Inform. Marion Bohr|
|Lecturer: Dipl.-Inform. Marion Bohr|
|Learning outcomes:
CUDA (Compute Unified Device Architecture) is a technology developed by NVIDIA that allows software developers and engineers to use a CUDA-enabled graphics processing unit (GPU) for general-purpose processing.
After successfully completing this module, students will have gained insight into problem solving by means of parallel programming. They will understand the algorithmic basics of parallel programming. Students will be able to use CUDA-based hardware and software components and to demonstrate their use in clearly defined practical exercises. They will be able to leverage the strengths of a GPU architecture in practice-oriented project work, optimize its performance, and analyze the resource requirements of a parallel implementation.|
|Content:
* Basics: processes, threads, blocks, warps, memory types, etc.
* Algorithmic basics
* Examples of algorithms and implementations for programs that can and cannot be parallelized
* Runtime measurement, runtime comparison, possibilities for increasing performance
* GPU applications from different subject areas using the example of CUDA|
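The thread/block model and the runtime measurement named in the content list can be illustrated by a minimal sketch such as the following vector addition. It is an illustrative example, not part of the official module content; the kernel and variable names are chosen freely, and only standard CUDA runtime API calls are used.

```cuda
// Minimal CUDA vector addition: illustrates the grid/block/thread
// hierarchy and runtime measurement with CUDA events.
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Each thread computes one element; the global index is derived
// from the block index, block size, and thread index.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);

    // Host buffers with known input values.
    float *ha = (float *)malloc(bytes);
    float *hb = (float *)malloc(bytes);
    float *hc = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

    // Device buffers and host-to-device copies.
    float *da, *db, *dc;
    cudaMalloc(&da, bytes); cudaMalloc(&db, bytes); cudaMalloc(&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    // 256 threads per block = 8 warps of 32 threads each.
    int threadsPerBlock = 256;
    int blocks = (n + threadsPerBlock - 1) / threadsPerBlock;

    // Runtime measurement with CUDA events.
    cudaEvent_t start, stop;
    cudaEventCreate(&start); cudaEventCreate(&stop);
    cudaEventRecord(start);
    vecAdd<<<blocks, threadsPerBlock>>>(da, db, dc, n);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %f, kernel time = %f ms\n", hc[0], ms);

    cudaFree(da); cudaFree(db); cudaFree(dc);
    free(ha); free(hb); free(hc);
    return 0;
}
```

Timing the same kernel for different block sizes is one simple form of the runtime comparison mentioned above.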
|Teaching methods / Media: Presentation slides, board, exercises|
|Recommended or required reading:
* The CUDA Handbook: A Comprehensive Guide to GPU Programming, Nicholas Wilt, Addison-Wesley, 2013
* CUDA by Example: An Introduction to General-Purpose GPU Programming, Jason Sanders / Edward Kandrot, Addison-Wesley, 2011
* Programming Massively Parallel Processors: A Hands-on Approach, David B. Kirk / Wen-mei W. Hwu, Morgan Kaufmann, 2010|
|Module offered in: |
[Mon Aug 8 00:10:55 CEST 2022, CKEY=keidppm, BKEY=pi2, CID=PIB-CUDA, LANGUAGE=en, DATE=08.08.2022]