Fast Parallel Computing

"Well done is quickly done." (Caesar Augustus)

Laboratory of Information System, JINR, Dubna
Group members

• Abeer Hassan
• Hassan Ayad, T.A., Physics Dept., Faculty of Science, Cairo University
• Mohammed H. Attia, T.A., Medical Research Institute, Alexandria University
• Ibrahim Gad, T.A., Math. Dept., Faculty of Science, Tanta University
• Hany Mostafa Mossad, Lab supervisor, Physics Dept., American University in Cairo
• Moustafa Rabea Kaseb, T.A., Information Systems Dept., FCI, Fayoum University
Introduction

Parallel Computing: Why? What? How?
Why?
• Complexity
• Speed
What?
Parallel computing: many processors (or computers) working on one problem at the same time.
How?
• Programming language: Fortran, C/C++
• Hardware: multiple processors
Objectives
Fast Parallel Computing in Research

• The main objective of FPC is to promote research by integrating state-of-the-art high-performance computing technology with research.
• FPC helps researchers with their experimental software and hardware needs.
Fast Parallel Computing in Research (cont.)

• HPC accelerates innovation and serves the economy, national security, education, and the environment.
• It is therefore the researchers' task to develop the code that makes high-performance machines useful.
Course Objectives

1. The MPI interface provides essential virtual topology, synchronization, and communication functionality between a set of processes (that have been mapped to nodes/servers/computer instances).

• MPICH is one of the most popular implementations of MPI. It is used as the foundation for the vast majority of MPI implementations, including IBM MPI (for Blue Gene), Intel MPI, Cray MPI, Microsoft MPI, and many others.
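As a first illustration of this process model, here is a minimal MPI "hello world" in Fortran; it is only a sketch and assumes an MPI implementation such as MPICH with its usual compiler wrapper and launcher (mpif90 and mpiexec).

! Minimal MPI program: every process reports its rank.
! Build (with MPICH, for example):  mpif90 hello.f90 -o hello
! Run on 4 processes:               mpiexec -n 4 ./hello
program hello
  use mpi
  implicit none
  integer :: rank, nprocs, ierror

  call MPI_Init(ierror)                               ! start MPI
  call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierror)  ! total number of processes
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierror)    ! this process's rank
  print *, 'Hello from process', rank, 'of', nprocs
  call MPI_Finalize(ierror)                           ! must be the last MPI call
end program hello

Every process runs the same executable; the rank is what distinguishes them.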
Course Objectives (cont.)

2. Basic operations of MPI:
   A. Exchanging data between process pairs (send/receive operations); see the sketch after the MPI function table below.
   B. Combining partial results of computations (gather and reduce operations).
   C. Synchronizing nodes (barrier operation).
3. The Fortran programming language.
4. Integration of Fortran or C with the MPICH library.
5. How to write code in parallel.
Background
Parallel Programming Models

There are several parallel programming models in common use:
1. Shared Memory (Threads): OpenMP (see the sketch after this list).
2. Distributed Memory: Message Passing Interface (MPI).
3. Single Program Multiple Data (SPMD):
   • SINGLE PROGRAM: all tasks execute their own copy of the same program.
   • MULTIPLE DATA: each task may work on different data.
   • The SPMD model is typically realized with message passing.
4. Multiple Program Multiple Data (MPMD):
   • MULTIPLE PROGRAM: tasks may execute different programs simultaneously.
   • MULTIPLE DATA: each task may work on different data.
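For the shared-memory model, a short sketch of an OpenMP loop in Fortran (the array size and variable names are only illustrative; compile with the compiler's OpenMP flag, e.g. gfortran -fopenmp):

program omp_sum
  use omp_lib
  implicit none
  integer, parameter :: n = 100000
  real(8) :: a(n), total
  integer :: i

  a = 1.0d0
  total = 0.0d0
  ! The threads of one process share the array a; loop iterations are
  ! divided among them and the partial sums are combined by the reduction.
  !$omp parallel do reduction(+:total)
  do i = 1, n
     total = total + a(i)
  end do
  !$omp end parallel do
  print *, 'sum =', total, 'computed with up to', omp_get_max_threads(), 'threads'
end program omp_sum

Unlike MPI, this runs inside a single process on one node; the distributed-memory examples in the following slides use separate processes that communicate by messages.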
Designing Parallel Programs

1. Understand the Problem and the Program
• The first step in developing parallel software is to understand the problem that you wish to solve in parallel.
• Determine whether or not the problem is one that can actually be parallelized (serial problem vs. parallel problem).
Designing Parallel Programs

2. Partitioning
• One of the first steps in designing a parallel program is to break the problem into discrete "chunks" of work that can be distributed to multiple tasks.
• There are two basic ways to partition computational work among parallel tasks: domain decomposition and functional decomposition (a domain-decomposition sketch follows below).
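A sketch of domain decomposition with MPI: each process computes which block of a global index range it owns (the block-distribution formula below is the usual one and is not taken from the slides):

program block_partition
  use mpi
  implicit none
  integer, parameter :: n = 1000        ! size of the global domain (illustrative)
  integer :: rank, nprocs, ierror
  integer :: chunk, rest, istart, iend

  call MPI_Init(ierror)
  call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierror)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierror)

  ! Block distribution: the first 'rest' processes get one extra element.
  chunk = n / nprocs
  rest  = mod(n, nprocs)
  istart = rank * chunk + min(rank, rest) + 1
  iend   = istart + chunk - 1
  if (rank < rest) iend = iend + 1

  print *, 'process', rank, 'works on elements', istart, 'to', iend
  call MPI_Finalize(ierror)
end program block_partition

In functional decomposition the split would instead be by task (for example, one group of processes per physical model), not by data.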
Basic MPI Functions

Initialize MPI
  C:       int MPI_Init(int *argc, char ***argv)
  Fortran: call MPI_Init(ierror)

Determine number of processes within a communicator
  C:       int MPI_Comm_size(MPI_Comm comm, int *size)
  Fortran: call MPI_Comm_size(comm, size, ierror)

Determine process rank within a communicator
  C:       int MPI_Comm_rank(MPI_Comm comm, int *rank)
  Fortran: call MPI_Comm_rank(comm, rank, ierror)

Exit MPI (must be called last by all processes)
  C:       int MPI_Finalize(void)
  Fortran: call MPI_Finalize(ierror)

Send a message
  C:       int MPI_Send(void *buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm)
  Fortran: call MPI_Send(buf, count, datatype, dest, tag, comm, ierror)

Receive a message
  C:       int MPI_Recv(void *buf, int count, MPI_Datatype datatype, int source, int tag, MPI_Comm comm, MPI_Status *status)
  Fortran: call MPI_Recv(buf, count, datatype, source, tag, comm, status, ierror)

(Fortran declarations: buf is of the transferred <type>; count, datatype, dest/source, tag, comm, and ierror are integers; status is integer status(MPI_STATUS_SIZE).)
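A short sketch using the send/receive calls from the table (objective 2.A); the value, datatype, and tag are only illustrative, and at least two processes are needed (mpiexec -n 2):

program sendrecv_pair
  use mpi
  implicit none
  integer :: rank, nprocs, ierror
  integer :: status(MPI_STATUS_SIZE)
  real(8) :: x

  call MPI_Init(ierror)
  call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierror)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierror)

  if (rank == 0) then
     x = 3.14d0
     ! process 0 sends one double precision value to process 1 with tag 10
     call MPI_Send(x, 1, MPI_DOUBLE_PRECISION, 1, 10, MPI_COMM_WORLD, ierror)
  else if (rank == 1) then
     ! process 1 receives that value from process 0
     call MPI_Recv(x, 1, MPI_DOUBLE_PRECISION, 0, 10, MPI_COMM_WORLD, status, ierror)
     print *, 'process 1 received', x
  end if

  call MPI_Finalize(ierror)
end program sendrecv_pair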
MPI Communication Operations
• Point-to-point: Send, Recv
• Collective: Broadcast, Scatter, Gather, Reduce, Allgather and Allreduce, Barrier
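A sketch of three of these collective operations, broadcast, reduce, and barrier (the numbers are only illustrative):

program collectives_demo
  use mpi
  implicit none
  integer :: rank, nprocs, ierror
  integer :: n, local, total

  call MPI_Init(ierror)
  call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierror)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierror)

  if (rank == 0) n = 100
  ! Broadcast: every process receives the value of n held by process 0.
  call MPI_Bcast(n, 1, MPI_INTEGER, 0, MPI_COMM_WORLD, ierror)

  local = n + rank            ! each process contributes its own value
  ! Reduce: sum the local values of all processes onto process 0.
  call MPI_Reduce(local, total, 1, MPI_INTEGER, MPI_SUM, 0, MPI_COMM_WORLD, ierror)

  ! Barrier: no process continues until all have reached this point.
  call MPI_Barrier(MPI_COMM_WORLD, ierror)
  if (rank == 0) print *, 'sum over', nprocs, 'processes =', total

  call MPI_Finalize(ierror)
end program collectives_demo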
Applications and uses
The Universe is Parallel
• Galaxy formation and planetary movements
• Climate change and weather
• Industry
Computational physics is the intermediate branch between theoretical and experimental physics.
Historically, parallel computing has been considered "the high end of computing" and has been used to model difficult problems in many areas of science and engineering, from astrophysics to high-energy physics.
HPC and Physics
• Applied physics
• Nuclear physics
• Particle physics
• Condensed matter
• High pressure
• Fusion
• Photonics
• Biophysics
• Molecular dynamics
MPI Tasks We Have Done
• Exchanging data between nodes
• Working with vectors
• Matrix processing (matrix creation and multiplication)
• Calculating Clebsch-Gordan coefficients
• Numerical integration (a sketch follows below)
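As a sketch of how the numerical integration task can be parallelized, the classic example below integrates 4/(1+x^2) on [0,1] (which equals pi) with the midpoint rule; the integrand and the number of subintervals are illustrative, not the exact problem solved at the school:

program parallel_integration
  use mpi
  implicit none
  integer, parameter :: n = 1000000     ! number of subintervals (illustrative)
  integer :: rank, nprocs, ierror, i
  real(8) :: h, x, local_sum, total

  call MPI_Init(ierror)
  call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierror)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierror)

  h = 1.0d0 / n
  local_sum = 0.0d0
  ! Midpoint rule; each process takes every nprocs-th subinterval.
  do i = rank + 1, n, nprocs
     x = h * (dble(i) - 0.5d0)
     local_sum = local_sum + 4.0d0 / (1.0d0 + x*x)
  end do
  local_sum = local_sum * h

  ! Combine the partial sums on process 0.
  call MPI_Reduce(local_sum, total, 1, MPI_DOUBLE_PRECISION, MPI_SUM, 0, &
                  MPI_COMM_WORLD, ierror)
  if (rank == 0) print *, 'integral =', total    ! should be close to pi

  call MPI_Finalize(ierror)
end program parallel_integration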
Conclusion
• Parallel computing reduces computation time by distributing jobs across multiple nodes.
• Solving physical and mathematical problems using the Fortran language is easier than using C++.
• Designing parallel programs is very challenging.
Conclusion (cont.)
• Running parallel programs on Linux platforms is faster and more stable.
• We hope the next schools will include more topics such as Grid and Cloud computing.
Acknowledgement
Special thanks to Dr. Elena Zemlyanaya and Dr. Tatiana F. Sapozhnikova for their efforts.
Thank you for your attention
Questions?