# Midterm

COSC 5350: Parallel Programming
Summer II, July 23, 2009

Name: ________________________________________________________________

1.
   a. What is meant by “multicore” processors?
   b. Why have multicore processors become the default architecture for most types of computing equipment?

2. According to the Cyberinfrastructure Tutor Introduction, what are the two approaches to parallel programming, and what is the main difference between them?

3. Define precisely, in one or two sentences, the meaning of each of the following terms:
   a. Problem Decomposition (what are the two types of decomposition commonly used in parallel programming?)
   b. SPMD programming
   c. Message Passing
   d.
   e. Speedup of a parallel program
   f. Efficiency of a parallel program
   g. Scalability of a parallel program
   h. Ghost cells
   i. Barrier
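For reference when reviewing parts e and f, the usual textbook definitions relate speedup and efficiency directly; a sketch using the common notation, where T(1) is the single-processor execution time and T(n) the time on n processors:

```latex
S(n) = \frac{T(1)}{T(n)}, \qquad E(n) = \frac{S(n)}{n}
```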

4. Performance considerations
   a. What are the components of execution time according to the Cyberinfrastructure Tutor?
   b. What is Amdahl’s Law and why is it important?
   c. What is the Gustafson-Barsis Law?
   d. How do the two laws in parts b and c above differ in their analysis of speedup?
   e. What is the Karp-Flatt metric?
   f. An application running on 10 processors spends 5% of its time in serial code. What is the scaled speedup of the application? Show work.
   g. A parallel program executing on 32 processors spends 5% of its time in sequential code. What is the scaled speedup of this program? Show work.
   h. An oceanographer gives you a serial program and asks you how much faster it might run on 20 processors. You can only find one function amenable to a parallel solution. Benchmarking on a single processor reveals that 60% of the execution time is spent inside this function. What is the best speedup a parallel version is likely to achieve on 20 processors?

5. Questions on MPI
   a. MPI uses three pieces of information to characterize the message body in a flexible way in point-to-point communications. What are those three pieces of information?
   b. What is a “communicator”?
   c. What are the main advantages of using the collective communication routines over building the equivalent operation out of point-to-point communications?
   d. What is the difference between the MPI operations?
   e. What is meant by the “process rank”?
   f. What is meant by a “virtual topology” in MPI, and what are the two virtual topologies supported by MPI?
   g. What is an MPI handle? Answer in one sentence.
   h. What is MPI_COMM_WORLD?
   i. In point-to-point communications, do the sender and receiver usually communicate synchronously or asynchronously?
   j. What are the four parts of an MPI message envelope? Explain in one sentence the function of each part.
   k. How does a receiving process determine which message to receive?
   l. What is the purpose of the status argument in MPI_RECV? What information is contained in that argument?
   m. MPI_SEND and MPI_RECV block the calling processes. Neither returns until the communication operation it invoked is completed. What is meant by the word “complete” in each of these two functions?
   n. What is the difference between blocking and non-blocking SEND and RECV messages in MPI?
   o. Posted MPI_SENDs and MPI_RECVs must be completed. MPI provides both blocking and nonblocking completion routines. What are these routines called, respectively?
   p. What is meant by “non-contiguous data” in MPI? How can we send and receive it?

6. Write the code in C for a modified version of the “Hello World” program where only the even-numbered processors print their rank as well as the total number of processors in the communicator MPI_COMM_WORLD.
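One possible solution sketch in C, using the standard MPI C bindings (compile with mpicc and launch with mpirun):

```c
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's rank */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes */

    /* Only even-numbered processes print. */
    if (rank % 2 == 0)
        printf("Hello from process %d of %d\n", rank, size);

    MPI_Finalize();
    return 0;
}
```

Launched with, e.g., `mpirun -np 4 ./hello`, only ranks 0 and 2 would print.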

7. Explain how virtual topologies can be used to parallelize matrix transposition.

8. Explain the process of recursive halving and doubling used in MPI.