CSCE 590/569: Parallel Computing
End of Course Summary of Objectives
Fall 2004 - Duncan Buell
Catalog Description:
569 - Parallel Computing. (3) (Prereq: Knowledge of programming in a high-level language; Math 526 or 544) Architecture and interconnection of parallel computers; parallel programming models and applications; issues in high performance computing; programming of parallel computers. Three lecture hours and two laboratory hours per week. Open to all majors.
Textbook(s) and Other Required Material:
Michael J. Quinn, Parallel Programming in C with MPI and OpenMP, McGraw-Hill, 2004.
Course Objectives:
After completing this course students should:
1. Know the architectures of parallel computers past and present and their features and failings.
2. Know the major models for parallel algorithms and programs.
3. Know how to analyze parallel programs, both in theory and in practice.
4. Be able to implement parallel programs in at least two standard models of parallel programming.
5. Know some of the common computational applications for which parallel computing is necessary.
Assessment of Learning by Course Objective:
Summary of Results by Objective and Percentage of Students Meeting Course Objectives:

Coursework Topic   Students Meeting Objective*
                   Obj. 1   Obj. 2   Obj. 3   Obj. 4   Obj. 5
prog 1                      82%               82%      82%
prog 2                      68%               68%      55%
prog 3                      64%               64%      82%
prog 4                      86%               82%      82%
prog 5             82%      82%               82%      82%
Exam               64%      73%      55%               77%
Final              86%      86%      86%               86%
Average            77.3%    77.3%    70.5%    75.6%    78.0%

* Percent out of 22 students.
Measurement of Course Objectives
The objective standard used is 70% for the programming assignments. These are graded on a scale of 0 to 50 points, with some assignments worth more or less depending on difficulty. Roughly 20% of the grade is for documentation, 50% for basic functional correctness, and 30% for effective rather than naïve use of parallelism. Thus, a standard of 70% implies that the program is moderately well documented, performs correctly, and makes at least some sensible use of parallelism.
An objective standard of 60% was used for the exam.
Percentage distributions are based on the 22 students who received grades for the class.
Objective 1. Know the architectures of parallel computers past and present and their features and failings.
The lectures and the exams cover the architectures of parallel computers. The programming exercises expose the students to the features and failings of shared memory and of distributed memory machines.
Objective 2. Know the major models for parallel algorithms and programs.
The course uses C and MPI or C and OpenMP for all of the programs. The programming assignments measure a practical understanding of these models, and exams on the lecture material measure an understanding of the theory and its limitations.
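The flavor of the shared-memory model can be sketched in a few lines. The example below is a standard OpenMP idiom, not taken from the course assignments: each thread sums a slice of the array and the reduction clause combines the partial sums. If compiled without OpenMP support, the pragma is ignored and the loop runs serially with the same result.

```c
/* Shared-memory model sketch: parallel reduction over an array.
   With OpenMP, each thread accumulates a private copy of `total`
   over its share of the iterations; the reduction(+:total) clause
   adds the private copies together at the end of the loop. */
double parallel_sum(const double *a, int n) {
    double total = 0.0;
    #pragma omp parallel for reduction(+:total)
    for (int i = 0; i < n; i++)
        total += a[i];
    return total;
}
```

In the message-passing model the same computation would instead have each MPI rank sum its local portion of the data and combine the partial sums with MPI_Reduce.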
Objective 3. Know how to analyze parallel programs, both in theory and in practice.
The exams and the documentation that accompanies the programming assignments measure a student's ability to analyze the complexity of parallel programs.
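One standard tool for this kind of analysis is Amdahl's law, which bounds the speedup achievable when a fraction of the program is inherently serial. The sketch below uses hypothetical numbers purely for illustration; it is not drawn from the course assignments.

```c
/* Amdahl's law: if fraction f of the work is serial, the speedup on
   p processors is S(p) = 1 / (f + (1 - f)/p), and the parallel
   efficiency is S(p)/p.  Even a small serial fraction caps speedup:
   with f = 0.05, S(16) is only about 9.1, not 16. */
double speedup(double f, int p) {
    return 1.0 / (f + (1.0 - f) / (double)p);
}

double efficiency(double f, int p) {
    return speedup(f, p) / (double)p;
}
```

Comparing these predicted figures against measured wall-clock times is exactly the kind of theory-versus-practice comparison the assignment documentation asks for.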
Objective 4. Be able to implement parallel programs in at least two standard models of parallel programming.
All but one of the programming assignments used C and MPI on a distributed memory machine, and one assignment used C and OpenMP on a shared memory machine. The programming assignments covered the range of applications as mentioned in Objective 5.
Objective 5. Know some of the common computational applications for which parallel computing is necessary.
Lectures, exams, and programming assignments cover the topics of matrix multiplication, Monte Carlo computation, parallel sorting, and numerical problems in science and engineering.
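Monte Carlo computation is a natural fit for parallelism because the samples are independent. The serial sketch below, which is illustrative rather than taken from the course, estimates pi from random points in the unit square; a simple linear congruential generator stands in for a proper parallel random-number generator. In an MPI version each rank would run its own independent stream and the hit counts would be combined with MPI_Reduce.

```c
#include <stdint.h>

/* 64-bit linear congruential generator; the high 31 bits of the
   state are returned, as they are the best-distributed. */
uint64_t lcg_next(uint64_t *s) {
    *s = *s * 6364136223846793005ULL + 1442695040888963407ULL;
    return *s >> 33;
}

/* Monte Carlo estimate of pi: the fraction of random points in the
   unit square that land inside the quarter circle tends to pi/4. */
double estimate_pi(long n, uint64_t seed) {
    uint64_t s = seed;
    long hits = 0;
    for (long i = 0; i < n; i++) {
        double x = (double)lcg_next(&s) / 2147483648.0;  /* in [0,1) */
        double y = (double)lcg_next(&s) / 2147483648.0;
        if (x * x + y * y <= 1.0)
            hits++;
    }
    return 4.0 * (double)hits / (double)n;
}
```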
Grade Distribution:
A      27.3%   (6)
B+     22.7%   (5)
B      22.7%   (5)
C+      4.5%   (1)
C       9.1%   (2)
D+      0.0%   (0)
D       0.0%   (0)
F       4.5%   (1)
W       4.5%   (1)
I       4.5%   (1)
Total:  100%   (22)
The programming assignments were:
1. Write a 16-node MPI program to pass messages from one node to the next a) in a two-dimensional grid circularly to the right; b) in an off-diagonal wavefront circularly.
2. Write a 16-node MPI program to compute the number of permutations of n items, with n taking several values, using recursion, and collect both timing and node expansion information.
3. Write an MPI program to do matrix multiplication, using Cannon's algorithm, in a 3x3 grid of processes, and analyze the computation as it progresses.
4. Write an MPI program to generate parallel random numbers and use them in a simulated annealing application.
5. Write an OpenMP program on an 8-processor SMP platform to compute a 4-matrix product both as functional parallelism at the matrix multiplication level and as thread-level parallelism at the loop iteration level.
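The two kinds of parallelism in assignment 5 can be sketched as follows. This is an illustrative outline, not the course's solution, and it assumes 4x4 matrices for brevity (the assignment's actual sizes are not stated). Without OpenMP the pragmas are ignored and the code runs serially with identical results.

```c
#define N 4

/* Loop-level (thread-level) parallelism: the rows of the product are
   independent, so the outer loop can be split across threads. */
void matmul(double A[N][N], double B[N][N], double C[N][N]) {
    #pragma omp parallel for
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++) {
            double s = 0.0;
            for (int k = 0; k < N; k++)
                s += A[i][k] * B[k][j];
            C[i][j] = s;
        }
}

/* Functional parallelism: in the 4-matrix product M1*M2*M3*M4, the
   subproducts M1*M2 and M3*M4 are independent, so each OpenMP section
   can run on its own thread before the final combining multiply. */
void product4(double M1[N][N], double M2[N][N],
              double M3[N][N], double M4[N][N], double R[N][N]) {
    double L[N][N], Rt[N][N];
    #pragma omp parallel sections
    {
        #pragma omp section
        matmul(M1, M2, L);
        #pragma omp section
        matmul(M3, M4, Rt);
    }
    matmul(L, Rt, R);
}
```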