4003-531/4005-735



Parallel Computing I

Theory Exam

Spring 2007


NAME: _____________________________________________



You may refer to your textbook and course notes while you work on this exam.
You are not allowed to discuss any question on this exam with anyone until I
have graded your paper. The answers you write on this exam must be your own;
they may not be copied or paraphrased from any source. If any of these rules
are violated you will receive an F on this test.


You must submit a printed copy of your responses no later than 8:00 pm on
Thursday, April 24, 2007. No excuses will be accepted for late submissions.



Question 1: (5 points)


Which technological development of the last decade is chiefly responsible for the
current practicality of parallel computers? Which technological development is
chiefly responsible for distributed computing? Explain your answers.



Question 2: (5 points)


In class we discussed two definitions of speedup. Briefly describe each one.
Explain how the second form can lead to superlinear speedup.
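
For reference, the two formulations most often seen in textbooks are sketched
below; this assumes the definitions covered in class match these standard ones.

    S(p) = T_s / T_p     where T_s is the running time of the best sequential algorithm
    S(p) = T_1 / T_p     where T_1 is the running time of the same parallel program on one processor

Under the second form, each processor's working set can shrink enough to fit in
cache, which is one way the measured speedup can exceed p.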



Question 3: (5 points)


MPI was characterized as SPMD (Single Program Multiple Data) computing.
What is SPMD computing, and how is it different from SIMD computing?
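
For reference, the SPMD pattern in MPI looks like the minimal sketch below: one
executable, with behavior diverging only through run-time tests on the process
rank (the printed messages are made up for illustration).

#include <stdio.h>
#include <mpi.h>

/* SPMD: every process runs this one executable; behavior diverges
   only through run-time tests on the process rank. */
int main( int argc, char** argv ) {
    int rank, size;

    MPI_Init( &argc, &argv );
    MPI_Comm_rank( MPI_COMM_WORLD, &rank );
    MPI_Comm_size( MPI_COMM_WORLD, &size );

    if ( rank == 0 )
        printf( "coordinator: %d processes are running\n", size );
    else
        printf( "worker %d: doing my share of the work\n", rank );

    MPI_Finalize();
    return 0;
}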



Question 4: (5 points)


Distributed parallel computing systems typically consist of processing units that
are connected together using a standard LAN technology (e.g., Ethernet), whereas
in traditional parallel computing systems the processing elements are typically
connected together using some sort of internal data bus. Concentrating only on
the communication between the processing elements, give one advantage and
one disadvantage of each approach. Justify your answers.

Question 5: (5 points)


We have discussed both shared memory and distributed memory parallel
systems in class. Which of these systems would you expect to be more scalable,
and which would you expect to have lower communication costs? Explain your
answers.



Question 6: (5 points)


MPI provides scatter and gather communication primitives. The term scatter is
used to describe sending each element of an array of data to a separate process:
a scatter sends the contents of the ith location of an array to the ith processor
(the sender also receives a data element). The term gather is used to provide a
way in which one processor can collect individual values from a set of
processes. Gather is normally used to collect the results of a computation
performed by a group of processors (gather is essentially the opposite of scatter).

Describe how you might use scatter and gather operations to multiply two
matrices in parallel.
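
For concreteness, the basic scatter/gather call pattern in C looks like the sketch
below: the root distributes the rows of a matrix, each process computes on its
row, and one result per process is collected back at the root. The row-sum
computation and the assumption that the program runs with exactly N processes
are illustrative, not part of the question.

#include <stdio.h>
#include <mpi.h>

#define N 4   /* assumed: the program is run with exactly N processes */

int main( int argc, char** argv ) {
    int rank, i, j, rowSum;
    int matrix[N][N];     /* only meaningful on the root */
    int row[N];           /* each process's share after the scatter */
    int sums[N];          /* one result per process, collected at the root */

    MPI_Init( &argc, &argv );
    MPI_Comm_rank( MPI_COMM_WORLD, &rank );

    if ( rank == 0 )      /* root fills the matrix */
        for ( i = 0; i < N; i++ )
            for ( j = 0; j < N; j++ )
                matrix[i][j] = i + j;

    /* scatter: row i of the matrix is delivered to process i */
    MPI_Scatter( matrix, N, MPI_INT, row, N, MPI_INT, 0, MPI_COMM_WORLD );

    /* each process computes on its own row (here, a simple sum) */
    rowSum = 0;
    for ( j = 0; j < N; j++ )
        rowSum += row[j];

    /* gather: process i's result lands in sums[i] on the root */
    MPI_Gather( &rowSum, 1, MPI_INT, sums, 1, MPI_INT, 0, MPI_COMM_WORLD );

    if ( rank == 0 )
        for ( i = 0; i < N; i++ )
            printf( "row %d sum = %d\n", i, sums[i] );

    MPI_Finalize();
    return 0;
}

The same scatter/compute/gather shape generalizes to larger per-process
computations.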




Question 7: (5 points)


The standard send in MPI is described as being “either synchronous or buffered”.
Exactly what does this mean? What could you do in a program that
communicates using standard sends that could cause problems?
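
As a concrete illustration of the kind of problem to look for, the sketch below
shows the classic head-to-head exchange with standard sends; whether it runs or
hangs depends on the implementation's buffering. The two-process setup and the
message contents are assumptions made for illustration.

#include <stdio.h>
#include <mpi.h>

/* Each process posts a standard send before its matching receive.
   If the implementation buffers the messages, this completes; if the
   sends fall back to synchronous mode, both block and the program
   deadlocks. Assumes exactly two processes. */
int main( int argc, char** argv ) {
    int rank, mine, theirs, partner;
    MPI_Status status;

    MPI_Init( &argc, &argv );
    MPI_Comm_rank( MPI_COMM_WORLD, &rank );

    mine = rank;
    partner = 1 - rank;

    MPI_Send( &mine, 1, MPI_INT, partner, 0, MPI_COMM_WORLD );
    MPI_Recv( &theirs, 1, MPI_INT, partner, 0, MPI_COMM_WORLD, &status );

    printf( "process %d received %d\n", rank, theirs );

    MPI_Finalize();
    return 0;
}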

Question 8: (5 points)


Suppose that MPI_COMM_WORLD consists of the three processes 0, 1, and 2,
and suppose that the following code is executed:


#include <stdio.h>
#include <string.h>
#include <mpi.h>

int main( int argc, char** argv ) {
    int myRank, x, y, z;
    MPI_Status status;

    MPI_Init( &argc, &argv );
    MPI_Comm_rank( MPI_COMM_WORLD, &myRank );

    switch( myRank ) {
    case 0:
        x = 0; y = 1; z = 2;
        MPI_Bcast( &x, 1, MPI_INT, 0, MPI_COMM_WORLD );
        MPI_Send( &y, 1, MPI_INT, 2, 43, MPI_COMM_WORLD );
        MPI_Bcast( &z, 1, MPI_INT, 1, MPI_COMM_WORLD );
        break;
    case 1:
        MPI_Bcast( &x, 1, MPI_INT, 0, MPI_COMM_WORLD );
        MPI_Bcast( &y, 1, MPI_INT, 1, MPI_COMM_WORLD );
        break;
    case 2:
        MPI_Bcast( &z, 1, MPI_INT, 0, MPI_COMM_WORLD );
        MPI_Recv( &x, 1, MPI_INT, 0, 43, MPI_COMM_WORLD, &status );
        MPI_Bcast( &y, 1, MPI_INT, 1, MPI_COMM_WORLD );
        break;
    }

    MPI_Finalize();
    return 0;
}


What are the values of x, y, and z on each process after the code has executed?
You must explain your answer.