Summary: This document, "Introduction to Parallel Computing Tutorial," is a brief introduction to parallel computing. It covers core concepts and terminology, parallel memory architectures, and programming models. It also discusses reasons for using parallel computing, such as saving time and money, solving larger and more complex problems, and taking advantage of non-local resources, and concludes by highlighting parallel computing's use in science and engineering, industrial and commercial applications, and global applications.
Parallel computers still follow this basic design, just multiplied in units. The basic, fundamental architecture remains the same.
Note: is the von Neumann architecture the best for AI computing?
Flynn's taxonomy classifies multi-processor computer architectures along two independent dimensions: Instruction Stream and Data Stream. Each dimension can have only one of two possible states: Single or Multiple.
Note: most common categories: SIMD, MIMD
Single Instruction, Multiple Data (SIMD)
Multiple Instruction, Multiple Data (MIMD)
Symmetric Multi-Processor (SMP)
Shared memory hardware architecture where multiple processors share a single address space and have equal access to all resources - memory, disk, etc.
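The defining property of a shared-memory (SMP) machine, a single address space visible to all processors, can be sketched with ordinary threads. This is an illustrative example, not from the tutorial: all threads read and write the same `counter` variable directly, with a lock coordinating access.

```python
import threading

counter = 0              # one variable in a single shared address space
lock = threading.Lock()  # coordinates the threads' equal access to it

def work(n):
    """Each thread increments the shared counter n times."""
    global counter
    for _ in range(n):
        with lock:       # prevent lost updates from concurrent writes
            counter += 1

threads = [threading.Thread(target=work, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 4000: all threads updated the same memory location
```

Contrast this with a distributed-memory design, where each process would hold its own copy of `counter` and updates would require explicit message passing.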
Amdahl's Law states that potential program speedup is defined by the fraction of code (P) that can be parallelized.
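With parallel fraction P running on N processors, Amdahl's Law gives speedup = 1 / ((1 - P) + P/N); as N grows, this is capped at 1 / (1 - P). A minimal sketch (the function name is my own):

```python
def amdahl_speedup(p, n):
    """Maximum speedup when a fraction p of the work is
    parallelized across n processors (Amdahl's Law)."""
    return 1.0 / ((1.0 - p) + p / n)

# Even with 95% of the code parallelizable, 8 processors
# yield under 6x, and the limit as n grows is 1/0.05 = 20x.
print(round(amdahl_speedup(0.95, 8), 2))  # 5.93
```

The serial fraction (1 - P) dominates quickly, which is why the tutorial emphasizes how much of a program can actually be parallelized.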
Historically, shared memory machines have been classified as UMA and NUMA, based upon memory access times.