PAVE : Parallel Algorithms Visually Explored,

A Prototype Tool for the Investigation of Data Structures in

Parallel Numerical Iterative Algorithms


E.J. Stuart and J.S. Weston


Department of Computing Science, University of Ulster

Coleraine, Northern Ireland, BT52 1SA


Abstract



It is commonly acknowledged that since the advent of supercomputing, the ability of computer scientists to efficiently interpret the large masses of data produced has been significantly reduced. By transforming the masses of numbers produced into intuitive and instructive pictures, visualization offers the scientist a way of seeing the unseen. In this paper a prototype algorithm investigation tool is introduced and two specific representations of the tool are discussed in detail. Subsequently, an iterative algorithm is investigated using the tool and data extracted from the algorithm is analysed using visual representations. The hypothesis proposed as a direct consequence of the visualization of the data is implemented on an array processor and experimental results are presented for a variety of test cases.


Keywords


Visualization, Numerical data structures, Parallelism


1. Introduction



The definition of visualization tends to correspond to the application area in which it is being used. This does not imply that one definitive definition of visualization exists from which authors select their own "subset" definition, but rather that the definition of visualization is continually evolving. Bearing this in mind, and based upon the definition used by McCormick et al[1], for the purposes of this paper visualization is defined as

a method of computing which enables researchers to observe their simulations and computations, to enrich the process of scientific discovery and to foster profound and unexpected insights.

Visualization is of great importance today, particularly since the recent advent of parallel architectures permits the creation of such large masses of data that even storing the data is problematic. Further, the application of traditional methods of interpretation and analysis becomes tedious and unproductive. However, analysis of this vast information using visualization techniques leads to new insight and understanding of the phenomena studied. Through the application of such techniques it becomes possible to observe patterns, the existence of which had never been known or confirmed. The application areas of visualization include aerodynamics, engineering, biology, geology, entertainment, data visualization, art and architecture. Applications of visualization in these areas may be in modeling and simulation, design, control, analysis, and exploration.


Visualization techniques can also be used to facilitate algorithmic development by suppressing unnecessary detail, focusing upon trends and highlighting the anomalies in the behaviour of algorithms. Further, tool support in the area of visual algorithm investigation is essential in order to assist this development. Jern[2] states that "despite the fact that supercomputers can crunch algorithms beyond human comprehension, the results are incomprehensible without the tools to visualise them." Consequently, the purpose of this paper is to explore the use of a prototype parallel algorithm investigation tool in the analysis of parallel numerical iterative algorithms, where the primary objective is the enhancement or development of the original algorithms and the creation of new algorithms.


2. Visualization and Parallel Programming




The utilisation of visualization in the area of parallel program monitoring, investigation and development is not new but is constantly expanding. The visual programming environment, PIE[3], developed at Carnegie Mellon University, is one of the earlier visualization tools used in the visualization of parallel programs. PIE is specifically tailored towards concurrently executing parallel programs and it provides the user with a variety of ways of analysing the performance of the programs' execution. It visually represents the distribution of computation across numerous processors, allowing analysts to assess the levels of parallelism exhibited at individual stages of the execution. Consequently, fluctuations in the performance of the program are easily determined and the detection of programming errors is simplified.


Paragraph[4] is an example of a mature visualization tool which animates message-passing parallel programs. Paragraph is similar to PIE in that it concentrates on the visual analysis of the performance of concurrently executing programs. However, as well as focusing upon the post-processing analysis of events, Paragraph also provides the user with the ability to observe a visual replay of the events which occurred during the execution of a concurrent program. It provides many different views of the data, including processor utilisation views, concurrency profile and utilisation summaries. Many other visualization tools have been developed for use in this area of parallel program visualization. For example, Pavane[5] is a visualization environment concerned with exploring, monitoring, and presenting concurrent computations, and VISTOP[6] is a tool for the visualization and animation of message-passing programs which has been integrated into the TOPSYS[7] parallel programming environment.


A large proportion of the effort expended in parallel program visualization is directed at performance measurement and load balancing analysis. However, the software tool PF-View[8] offers a different approach to program visualization by graphically animating the behaviour of parallel Fortran programs. The emphasis of this tool is not the performance statistics of the program but the "correctness" of the code. PF-View is specifically targeted towards scientific application programmers, primarily as a tool for error detection and code verification.

In many ways the introduction of visualization into new application areas is impeded by the initial effort required to create the tool to be used. This problem prompted the development of PARADISE[9], a meta-tool environment which has been developed for generating custom visual analysis tools for parallel programs. With PARADISE the user creates a tool at a higher level of definition, thus eliminating much programming effort and allowing the user to concentrate upon the analysis of the data as opposed to the development of another piece of specialised visualization software. Research into the area of visualization shows that the trend in software visualization tools is moving away from application specific software towards highly flexible visualization environments called application builders.


As the amount of research expended in visualization increased during the eighties and the early nineties, so too did the need for classification of visualization software. Earnshaw and Wiseman[10], by no means pioneers in the field of visualisation, identified three categories of current and future visualization software:

1. Graphics libraries and presentation packages. These are traditionally used methods of viewing and analysing data, and are considered to be time consuming to use but highly flexible. Examples include the UNIRAS subroutine libraries, GKS and PHIGS.

2. Turnkey visualization applications. This type of software generates the main program and the graphical user interface but requires the user to supply the data and computational instructions. Examples include Data Visualiser (Wavefront), VoxelView (Vital Images) and SunVision (SUN).

3. Application builders. These are a hybrid of the previous two types of system in which the user provides the execution path of the program, the data, and optionally user-defined modules. Examples of application builders include the much cited visualization environments AVS[11], apE[12] and Khorus[13].

Observe that the choice of visualization software used to develop applications is very important as it directly affects the development time of the application.


3. PAVE : A Prototype Tool



PAVE, Parallel Algorithms Visually Explored, is a prototype tool developed to enable efficient investigation of the behaviour of parallel numerical iterative algorithms. It was developed using PVI's PVWave Command Language, an application builder which provides a sophisticated graphical user interface to system library routines and user-written modules. This application builder was chosen since it provides the user with a large amount of flexibility when desired and a well-defined structure otherwise.


Wolfram[14] states that "Mathematical experiments carried out by computer can often suggest conjectures that are subsequently established by conventional proof". This is the fundamental concept upon which PAVE is based. The software was developed to allow the algorithm developer to determine how the components of selected data structures change throughout the execution of a parallel iterative algorithm. Subsequently, the developer can implement enhancements of the algorithm based upon these observations. The primary objective of this paper is to demonstrate the productivity of investigating conjecture based upon visual observation.


It is quite common to expend a lot of effort in the production of software which produces sophisticated three-dimensional images in real-time. Stallings[15] raises an interesting aspect of visualization when he asks "Is Beauty Also Truth?" He highlights the fact that directors at NCSA are aware of the "potential for misinterpretations and inaccuracies" regarding the visualization of data. This highlights the fact that there are benefits to "keeping things simple". Consequently, PAVE emphasises the meaning of the data and not the graphics capabilities of the computer system. Observe also that a more sophisticated representation requires greater computation time and is likely to be less intuitive.


It is important to realise that a single view of a data set is insufficient to present all the information contained within the data. Consequently, it is necessary to provide the user with several representations and views of the same data. In an attempt to satisfy this criterion, PAVE provides different representations of data sets, where the data sets are typically sequences of vectors or sequences of matrices. The user may view the data in post-processing animated format, to allow investigation and enquiry at each stage of the iterative algorithm, or in historical summary format, where the overall characteristics of the algorithm are represented by summarising the step-by-step details. In addition to this, each format may have several views. In this way the data can be examined in numerous ways within different frameworks in order to derive as much information as possible from the data set.


3.1 PAVE Representations



Prior to the implementation of PAVE, a pilot study, which focused on the representation and analysis of sequences of vectors, was carried out to assess the viability of using visualization in the area of algorithm investigation[16]. Consequently, although PAVE was designed for the investigation of both vector and matrix data sets, this paper reports only on the representations and analyses of sequences of matrices.


In particular, two representations of sequences of matrices, 'Animated Action' and 'Row/Col Thresholding', are described, where the former is in post-processing animated form and the latter is in historical summary format. These are then used in an analysis of POTS[17], an algorithm for the computation of the partial eigensolution of a given symmetric matrix.

Animated action is a representation which allows the user to observe a visual representation of the data structure at each iteration. The values of each of the elements of the data structure are colour coded and a legend at the right hand side of the representation defines the numerical range for each colour. Additionally, an animation of the representation is also available to enable each iteration to be viewed in sequence. The object of this representation is to enable the user to observe the changes that take place in the data structure, thereby building up a mental image of the overall behaviour of the data structure as the algorithm iterates.
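To make the format concrete, the following minimal sketch, which is not part of PAVE itself (PAVE was built with the PV-Wave command language), produces a single snapshot of this kind using NumPy and Matplotlib; the two-colour coding and the tolerance mirror the situation shown in Figure 1 below and are illustrative assumptions rather than PAVE's actual palette and legend.

import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap

def animated_action_snapshot(B, tol=0.5e-12):
    # Colour-code each element by magnitude: light grey below the tolerance,
    # black otherwise (PAVE uses several colour bands and a legend; this is the
    # simplest two-band case, as in Figure 1).
    mask = (np.abs(B) >= tol).astype(int)
    plt.imshow(mask, cmap=ListedColormap(["lightgrey", "black"]), vmin=0, vmax=1)
    plt.xlabel("Matrix Columns")
    plt.ylabel("Matrix Rows")
    plt.show()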


An important use of this representation when used with sequences of matrices is to analyse the behaviour of a matrix which iterates to a predefined form. An example of this type of matrix exists in Jacobi's algorithm for the computation of the eigenvalues of an Hermitian matrix, A. In this case a sequence of plane rotations is applied to A with the property that each new A is more diagonal than its predecessor. Termination of the process occurs when the off-diagonal elements of the computed matrix are sufficiently small to be declared zero. However, the characteristics of the algorithm are such that the zero elements computed at the jth iteration do not necessarily remain zero in the (j+1)th iteration. Hence the positions of zero elements can "fluctuate" between iterations and it is this behaviour that makes Jacobi an interesting algorithm to investigate using PAVE. Observe also that the Jacobi algorithm is inherently parallel and has been implemented on many parallel architectures.
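The fluctuation is easy to reproduce. The sketch below, an illustrative serial implementation rather than any of the parallel Jacobi codes referred to above, applies cyclic sweeps of plane rotations to a random symmetric matrix and counts, after each sweep, how many of the elements annihilated during that sweep are non-zero again.

import numpy as np

def jacobi_sweep(A, zeroed):
    # One cyclic sweep of Jacobi plane rotations; records each (p, q) it annihilates.
    n = A.shape[0]
    for p in range(n - 1):
        for q in range(p + 1, n):
            theta = 0.5 * np.arctan2(2.0 * A[p, q], A[q, q] - A[p, p])
            c, s = np.cos(theta), np.sin(theta)
            J = np.eye(n)
            J[p, p] = J[q, q] = c
            J[p, q], J[q, p] = s, -s
            A = J.T @ A @ J              # A[p, q] and A[q, p] are now zero
            zeroed.add((p, q))
    return A

rng = np.random.default_rng(1)
A = rng.uniform(-100.0, 100.0, (8, 8))
A = (A + A.T) / 2.0                      # random symmetric test matrix
for sweep in range(4):
    zeroed = set()
    A = jacobi_sweep(A, zeroed)
    refilled = [pq for pq in zeroed if abs(A[pq]) > 1e-12]
    print(f"sweep {sweep}: {len(refilled)} of {len(zeroed)} annihilated elements are non-zero again")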



Figure 1  (axes: Matrix Rows against Matrix Columns; ranges 1..22 and 23..30)



Figure 1 shows an 'Animated Action' representation of a square matrix of order 30, in which all of the elements of the leading principal submatrix of order 22 are colour coded light grey, thus indicating that their numerical values are less than 0.5*10^-12. Note that colours other than grey are represented as black in this figure.


The 'Animated Action' representation is used to give insight into the overall trends in any data set and as such it can be considered to be the initial stage of the analysis process. Since all the elements of the data structure are visualized in each iteration, specific and accurate analysis of patterns and anomalies becomes awkward. Consequently, another format of representation, complementary to the 'Animated Action' representation, was developed.


The second type of matrix representation within PAVE is the set of "Thresholding" representations, created in historical summary format. In a thresholding representation each matrix in the sequence is partitioned into corresponding subsets, thereby enabling the matrix to be represented at each iteration by a vertical array of icons, where each icon corresponds to one subset. The colour of an icon is used to determine whether or not all of the elements in the particular subset are greater or less than a predetermined value. The importance of this type of historical format representation arises from the fact that all of the information used in the creation of hypotheses from analysis of trends may be found in one diagram.


Row/Col partitioning is typical of the partition schemes available in PAVE.


Figure 2  (axes: Row/Col subsets against Iterations)




In this scheme the ith subset of a given matrix of order n is defined to be the set of elements contained in the ith row and column of the leading principal submatrix of order i, 1 ≤ i ≤ n.

Figure 2 shows a 'Thresholding' representation of the Row/Col partitioning of a matrix of order 30, where the colour of the ith icon in the jth column determines whether or not, at the jth iteration of the algorithm, all of the elements of the ith subset of the matrix are less than a predetermined value. Thus, for example, at the 17th iteration, subsets 1-12 and subset 14 are less than this predetermined value. Note that masking facilities are available for all representations.
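A minimal sketch of the computation behind such a summary is given below, assuming the sequence of matrices has been stored and that the threshold test is applied to the off-diagonal part of each subset (the diagonal entry is excluded here so that convergence to diagonal form registers); the threshold value is illustrative.

import numpy as np

def rowcol_threshold_summary(matrices, threshold=0.5e-12):
    # flags[i, j] is True when, at iteration j, every off-diagonal element of
    # subset i (the row and column added when the leading principal submatrix
    # grows to order i+1, using 0-based indexing) is below the threshold --
    # one boolean per icon of the historical summary.
    n = matrices[0].shape[0]
    flags = np.zeros((n, len(matrices)), dtype=bool)
    for j, B in enumerate(matrices):
        for i in range(n):
            subset = np.concatenate((B[i, :i], B[:i, i]))   # off-diagonal part of row/col i
            flags[i, j] = np.all(np.abs(subset) < threshold)
    return flags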


4. The POTS Algorithm



When used to compute the 'm' numerically largest eigenvalues, and their associated eigenvectors, of a real symmetric matrix A, of order n, POTS may be described as follows:

Let D_m be the real diagonal matrix, of order m, whose diagonal components yield the required subset of eigenvalues and let Q_m be the (n × m) orthonormal matrix whose columns represent the associated eigenvectors. Let U_0 be an (n × m) orthonormal matrix whose columns are the first m columns of the unit matrix of order n. Generate a sequence of eigenvector approximations {U_k} as follows:

B_k = (U_k)^T . (A . U_k),                      0 ≤ k ≤ x        (1)

U_{k+1} = ortho(A . U_k . transform(B_k)),      0 ≤ k < x        (2)

U_{k+1} = ortho(A . U_k),                       k ≥ x            (3)

where x is the minimum value such that B_k is diagonal. It follows that

lim_{k→∞} U_k = Q_m   and   (Q_m)^T . A . Q_m = D_m.             (4)

The function ortho is an extension of the Gram-Schmidt orthogonalisation process[18] and the function transform returns a non-orthogonal matrix T_k, of order m, the columns of which represent approximations to the eigenvectors of B_k.
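As a concrete serial illustration of the iteration defined by (1)-(3), the sketch below uses NumPy, with np.linalg.qr standing in for ortho and the eigenvectors of B_k standing in for transform; the true ortho is the Gram-Schmidt extension of [18], transform returns cheaper non-orthogonal approximations, and the tolerances are illustrative values rather than those used on the array processor.

import numpy as np

def pots(A, m, tol_diag=1e-8, tol_vec=1e-8, max_iter=1000):
    n = A.shape[0]
    U = np.eye(n)[:, :m]                        # U_0: first m columns of the unit matrix
    # Primary cycle: iterate until B_k is diagonal to within tol_diag.
    for _ in range(max_iter):
        B = U.T @ A @ U                         # B_k = (U_k)^T . (A . U_k)
        if np.max(np.abs(B - np.diag(np.diag(B)))) < tol_diag:
            break                               # B_k is (numerically) diagonal
        _, T = np.linalg.eigh(B)                # stand-in for transform(B_k)
        U, _ = np.linalg.qr(A @ U @ T)          # U_{k+1} = ortho(A . U_k . transform(B_k))
    # Secondary cycle: iterate until successive eigenvector approximations agree.
    for _ in range(max_iter):
        U_new, _ = np.linalg.qr(A @ U)          # U_{k+1} = ortho(A . U_k)
        done = np.max(np.abs(np.abs(U_new) - np.abs(U))) < tol_vec
        U = U_new
        if done:
            break
    B = U.T @ A @ U
    return np.diag(B), U                        # approximations to D_m and Q_m

rng = np.random.default_rng(0)
A = rng.uniform(-100.0, 100.0, (30, 30))
A = (A + A.T) / 2.0                             # random symmetric test matrix
eigenvalues, eigenvectors = pots(A, m=4)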


Clearly, the algorithm consists of two distinct cycles, where the termination of the primary cycle and the beginning of the secondary cycle is signalled by the diagonalisation of the matrix B_k. One of the main objectives of the primary cycle is the creation of a good set of eigenvector approximations which is subsequently used as input to the secondary cycle, the convergence of this cycle being dependent upon the accuracy of its input. Convergence of the secondary cycle is deemed to have occurred when the difference between the modulus of successive eigenvector approximations is less than a predefined tolerance level. Further, if λ_i, 1 ≤ i ≤ n, are the eigenvalues of A where



|λ_1| ≥ |λ_2| ≥ ... ≥ |λ_{m-1}| ≥ |λ_m| ≥ ... ≥ |λ_n|        (5)

then the overall rate of convergence of the algorithm is determined by the ratio |λ_{m+1}|/|λ_m|. Observe that the smaller the convergence ratio, the better the overall convergence characteristics, and that as this ratio approaches unity the algorithm becomes very inefficient.
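Where the spectrum of a test matrix is known or can be computed independently, this ratio is easily checked before an experiment. A minimal sketch, using np.linalg.eigvalsh purely for the check:

import numpy as np

def convergence_ratio(A, m):
    # |lambda_{m+1}| / |lambda_m| with the eigenvalues ordered by decreasing modulus;
    # values close to 1 signal slow convergence of the algorithm.
    moduli = np.sort(np.abs(np.linalg.eigvalsh(A)))[::-1]
    return moduli[m] / moduli[m - 1]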


One of the data sets within POTS which exhibits an informative convergent nature is the sequence of real symmetric matrices {B_k}. This sequence converges to diagonal form, thereby signalling the transition from cycle one to cycle two. Since this data structure plays a crucial role in determining the completion of the primary cycle of POTS, an investigation of its convergence characteristics was carried out using PAVE. During the course of this investigation partial eigensolutions were computed for a collection of matrices whose components were generated at random to lie in the range (-100, 100). In many instances the subsets of eigenvalues evaluated contained close and equal eigenvalues, eigenvalues equal in modulus but opposite in sign, and well separated eigenvalues.

Questions addressed included the following:

(i) Are there cases where the diagonalisation of the matrix B_k requires an exceedingly large number of iterations, and, if so, what is the reason for this phenomenon?

(ii) Is the diagonalisation process random?

(iii) Is the matrix B_k always diagonalised systematically in sequence; for example do the off-diagonal elements in the ith row and column of B_k become zero before the (i+1)th row and column?

(iv) Since each iteration in the primary cycle is significantly more computationally expensive than each iteration in the secondary cycle, is there any way that the first cycle could be terminated before the diagonalisation of B_k?

Clearly the answers could be determined mathematically. However, in this instance PAVE was used to simplify the required analysis and interpretation of the diagonalisation of B_k.


5. Summary of Investigations



Initially the 'Animated Action' visualisation
technique was used in an attempt to discover
general trends in the convergence characteristics
of the matrix
B
k
. It quickly became apparent that,
in general, and regardless of the charact
eristics of
the eigenvalues of the given matrix
A
, the order of
the largest leading principal diagonal submatrix of
B
k

increases as the number of iterations increases.
Further, the increase in order tends to be more
rapid in the earlier stages of the compu
tation.
Thus, in one particular example, the largest
leading principal
diagonal

submatrix of a 30*30
matrix
B
k

was of order 22 at the 38th iteration yet
a total of 166 iterations was required before
B
k

became diagonal. Figure 1 represents an
'Animate
d Action' snapshot of this matrix at the
38th iteration.


Figure 2 is a row/col representation of the same matrix for iterations 13 through 43, from which it becomes apparent immediately that the order of the largest leading principal diagonal submatrix increases as the iteration number increases. Moreover, the rapid increase in order of this submatrix in the earlier iterations is very clear, thus confirming the trends observed using the 'Animated Action' representation. Observe that occasionally, for some examples, the order of the largest leading principal diagonal submatrix of B_k decreased momentarily as the number of iterations increased.
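The quantity tracked in these observations, the order of the largest leading principal diagonal submatrix of B_k, can be extracted directly from the stored iterates. A minimal sketch, with an illustrative tolerance for declaring an element zero:

import numpy as np

def leading_diagonal_order(B, tol=0.5e-12):
    # Largest j such that the leading principal submatrix of order j of B is
    # diagonal, i.e. all of its off-diagonal elements are below tol in modulus.
    n = B.shape[0]
    order = 0
    for j in range(1, n + 1):
        sub = B[:j, :j]
        if np.max(np.abs(sub - np.diag(np.diag(sub)))) < tol:
            order = j
        else:
            break
    return order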


It would also appear to be the case that the rate of increase in the order of the largest leading principal diagonal submatrix of B_k from order j to order j+1 is related to the ratio |λ_{j+1}|/|λ_j|. Thus, for example, if the third and fourth largest eigenvalues are numerically close (well separated) it is to be expected that the number of iterations required for the order of the largest leading principal diagonal submatrix of B_k to increase from 3 to 4 would be relatively large (few). In this context it was also observed that if a cluster of eigenvalues was well separated from the others in the required subset then relatively few iterations would be required for the order of the largest leading principal diagonal submatrix to increase according to the size of the cluster.


Note that when the convergence ratio was poor the number of iterations required for convergence was excessive.


6. Algorithmic Development and Numerical Experience.



In the light of the investigations undertaken, a
new algorithm, TPOTS,
T
opmost
P
arallel
O
rthogonal
T
ransformations for
S
ubsets, was
developed. The principal differences between
TPOTS and POTS

occur in the primary cycle of
each are

(i)

In the former the matrices
U
k

and
B
k

are of
order n

l

and
l

l
, respectively, whereas in the
latter they are of order n

洠 a湤n m

洬m
牥獰sc瑩癥汹Ⱐ
l
>m.

(ii) The primary cycle of POTS terminates when B_k becomes diagonal, whereas in the case of TPOTS this cycle terminates when the leading principal diagonal submatrix of B_k is of order m. In each case diagonality is determined when the modulus of each of the appropriate off-diagonal components is less than a predetermined value, m_1, usually taken to be 0.5*10^-14 (this test is sketched below). Thus, TPOTS attempts to avoid the slow rate of increase in the order of the largest leading principal diagonal submatrix of B_k as observed in the latter stages of the diagonalisation of B_k in POTS.
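A minimal sketch of the modified primary-cycle test described in (ii), assuming B_k is held as an l-by-l array and that m_1 is an absolute tolerance:

import numpy as np

def tpots_primary_done(B, m, m1=0.5e-14):
    # True when the leading principal submatrix of order m of the l-by-l matrix
    # B_k is diagonal, i.e. every off-diagonal modulus in that block is below m1;
    # POTS instead applies the same test to the whole of B_k.
    sub = B[:m, :m]
    return np.max(np.abs(sub - np.diag(np.diag(sub)))) < m1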


Due to the inherent parallelism in POTS, and subsequently TPOTS, each of these algorithms is implemented on the AMT DAP 510, an array processor with edgesize 32. The language used is Fortran Plus Enhanced, a language which permits the programmer to disregard the edgesize of the machine. Real variables are declared to be of type REAL*8.

Termination of the secondary cycle is deemed to have occurred when the modulus of each component of the difference between two successive eigenvector approximations, U_{k+1} and U_k, is less than a predetermined value, m_2, usually taken to be 0.5*10^-8. The algorithms are used to compute partial eigensolutions for a collection of matrices of various types and the results obtained are presented in Tables 1-3.


   n    m    l   |λ_{m+1}|/|λ_m|   POTS                              TPOTS
                                   Iters     Iters     Total         Iters     Iters     Total
                                   Cycle 1   Cycle 2   Time (s)      Cycle 1   Cycle 2   Time (s)
  75    6   20   0.847             16        76         25.705       14        36         21.486
  90   12   30   0.800             44        23         29.843       31         4         24.334
 125   15   30   0.746             35        18         40.444       25        11         35.531
 150   10   25   0.700             39        10         49.130       31         4         46.609

Table 1



Table 1 presents the results which are obtained when POTS and TPOTS are used to compute partial eigensolutions of a set of matrices where in each case the convergence ratio for POTS, |λ_{m+1}|/|λ_m|, is deemed to be good. The times are quoted in seconds.


Table 2 presents the results obtained when TPOTS is used to compute the partial eigensolutions of another set of matrices where in each case the convergence ratio for POTS, |λ_{m+1}|/|λ_m|, is deemed to be poor. In fact, in each of these cases POTS required more than 150 iterations in the primary cycle, thus making it more efficient to first solve the complete eigenproblem and then extract the required subset. Note that the complete eigensolution may be obtained using POT[19],[20], an algorithm for the computation of the complete eigensolution of a given symmetric matrix.


   n    m    l   Iters     Iters     Total
                 Cycle 1   Cycle 2   Time (s)
  40   20   31   16        1           6.867
  90   25   60   15        2          34.562
 100   15   50   32        3          82.607
 155   30   55   26        3         106.538

Table 2


The effect of varying the value of l for a given partial eigenproblem is demonstrated in Table 3.


   l    Iters     Iters     Total
        Cycle 1   Cycle 2   Time (s)
  45    29        9         61.724
  55    20        2         43.005
  60    16        1         34.562
  80    17        1         67.111

Table 3


In this case the order of the matrix is 90 and m equals 25.


7. Conclusions



An analysis of the results presented in Tables 1 and 2 suggests that TPOTS is more efficient than POTS in all cases and that each cycle of TPOTS requires fewer iterations than the corresponding cycle of POTS. Observe that the time taken for each iteration of the primary cycle of TPOTS will always be greater than the time taken for each iteration of the primary cycle of POTS. This may be attributed to the extra computation required for the orthogonalisation process in the first cycle of TPOTS. Further, provided that l and m are less than the edgesize of the array processor used, the time required for all other computations in each iteration remains approximately the same for both algorithms. However, the reduction of the number of iterations taken in the primary cycle of TPOTS significantly outweighs the additional computation time. Note that this may not be the case in a sequential environment.


Table 3 indicates that, up to a certain point, as the ratio l/m increases the execution time of TPOTS decreases. This behaviour is not fully understood and further research is required to determine the optimal value of the ratio l/m.


Finally, it can be concluded that PAVE has been used successfully in the analysis of a parallel iterative algorithm. When used experimentally the representations of the chosen data structure provided sufficient insight to enable a new and more efficient algorithm to be developed. Recall that the emphasis of PAVE is on simple graphics used cleverly to investigate the behaviour of a parallel iterative algorithm, thereby enabling further development and enhancement. Thus, the work reported in this paper clearly confirms that conjecture based upon visual observation can be highly productive.


Acknowledgements



This research was funded by the Department of Education for Northern Ireland. The software development was carried out using the facilities of the Parallel Computer Centre at the Queen's University of Belfast.


References


[1] McCormick, B. H., Defanti, T. A., and M. D. Brown, Visualization in Scientific Computing, special issue, Computer Graphics, 21(6), (1987).

[2] Jern, M., Visualisation of scientific data, Computer Graphics 89, Blenheim On-Line, 79-103, (1989).

[3] Lehr, T., Segall, Z., Vrsalovic, D., Caplan, E., Chung, A., and C. Fineman, Visualizing Performance Debugging, IEEE Computer, 38-51, October 1989.

[4] Heath, M. and J. Etheridge, Visualizing the Performance of Parallel Programs, IEEE Software, 8, 29-39, September 1991.

[5] Roman, G., Cox, K. C., Wilcox, C. D., and J. Y. Plun, Pavane: a System for Declarative Visualization of Concurrent Computations, Journal of Visual Languages and Computing, 3, 161-193, (1992).

[6] Bemmerl, T., and P. Braun, Visualization of Message Passing Parallel Programs with the TOPSYS Parallel Programming Environment, Journal of Parallel and Distributed Computing, 18, 118-128, (1993).

[7] Bemmerl, T., The TOPSYS architecture, In Proceedings of CONPAR90 VAPP IV, Lecture Notes in Computer Science, Springer-Verlag, Zurich, 457, 732-743, (1990).

[8] Utter-Honig, S., and C. Pancake, Graphical Animation of Parallel Fortran Programs, In Proceedings of Supercomputing 1991, Albuquerque, N.M., 491-499, November 1991.

[9] Kohl, J. A., and T. L. Casavant, Use of PARADISE: A Meta-Tool for Visualizing Parallel Systems, In Proceedings of the 5th International Parallel Processing Symposium, 561-567, (1991).

[10] Earnshaw, R. A. and N. Wiseman, An Introductory Guide to Scientific Visualization, Springer-Verlag, (1992).

[11] Upson, C. et al., The Application Visualization System: A Computational Environment for Scientific Visualization, IEEE Computer Graphics and Applications, 30-42, July 1989.

[12] Dyer, D. S., A Dataflow Toolkit for Visualization, IEEE Computer Graphics and Applications, 60-69, July 1990.

[13] Williams, C. S. and J. R. Rasure, A Visual Language for Image Processing, In Proceedings of the 1990 IEEE Workshop on Visual Languages, 86-91, (1990).

[14] Wolfram, S., Computer Software in Science and Mathematics, Scientific American, 251(3), 140-151, (1984).

[15] Stallings, J., Trends in Scientific Visualization: Is Beauty Also Truth?, Supercomputing Review, 36-39, August 1990.

[16] Stuart, E. J. and J. S. Weston, The Enhancement of Parallel Numerical Linear Algorithms using a Visualisation Approach, In Proceedings of the Euromicro Workshop on Parallel and Distributed Processing, 274-281, (1993).

[17] Stuart, E. J. and J. S. Weston, An Algorithm for the Parallel Computation of Subsets of Eigenvalues and Associated Eigenvectors of Large Symmetric Matrices using an Array Processor, In Proceedings of the Euromicro Workshop on Parallel and Distributed Processing, 211-217, (1992).

[18] Stewart, G. W., Introduction to Matrix Computations, Academic Press, New York, (1973).

[19] Clint, M. et al., A Comparison of Two Parallel Algorithms for the Symmetric Eigenproblem, International Journal of Computer Mathematics, 15, 291-302, (1984).

[20] Weston, J. S. and M. Clint, Two Algorithms for the Parallel Computation of Eigenvalues and Eigenvectors of Large Symmetric Matrices using the ICL DAP, Parallel Computing, 13, 281-288, (1990).