PARALLEL IMPLEMENTATIONS OF OPTIMIZING NEURAL NETWORKS

19 Oct 2013



Advisor: Prof. 梁廷宇

Presenter: 紀欣呈

Student ID: 1095319128

Department: 碩光電一甲 (first-year M.S., Electro-Optics)

ANDREA DI BLAS, ARUN JAGOTA

RICHARD HUGHEY

Baskin School of Engineering, University of California, Santa Cruz
Santa Cruz, California


Outline

Introduction

The Optimizing Neural Network

Two Parallel Implementations

Flowchart

Fine-grain Kestrel Implementation


“SIMD Phase Programming Model” MasPar
Implementation


Results


Conclusions

Introduction

Hopfield neural network approach to the maximum clique problem, an NP-hard problem on graphs

One can easily trade execution time for
solution quality

The neural approach does not require
backtracking

THE OPTIMIZING NEURAL
NETWORK


One binary node per vertex. The weights are:

w_ij = 0 if vertices i and j are connected by an edge, w_ij = -1 otherwise

The nodes are initialized to the off state, S_i = 0

The input to node i is u_i = Σ_j w_ij S_j + b_i, where b_i is the node's bias

Any serial update of the form:

S_i ← 1 if u_i > 0, S_i ← 0 otherwise

THE OPTIMIZING NEURAL
NETWORK

Minimizes the network energy:

E(S) = -1/2 Σ_i Σ_j w_ij S_i S_j - Σ_i b_i S_i

At each step, the node to update is picked by a random roulette-wheel selection: node i is chosen with probability proportional to its bias, p_i = b_i / Σ_j b_j
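The dynamics above can be sketched in plain Python. This is an illustrative reconstruction, not the authors' code: the bias value b_i = 0.5, the extra stabilizing sweep, and the helper names are assumptions.

```python
import random

def make_weights(n, edges):
    # w[i][j] = 0 for an edge (and on the diagonal), -1 for a non-edge.
    w = [[-1] * n for _ in range(n)]
    for i in range(n):
        w[i][i] = 0
    for i, j in edges:
        w[i][j] = w[j][i] = 0
    return w

def roulette(b, rng):
    # Roulette-wheel selection: pick i with probability b[i] / sum(b).
    x = rng.random() * sum(b)
    acc = 0.0
    for i, bi in enumerate(b):
        acc += bi
        if x <= acc:
            return i
    return len(b) - 1

def energy(w, s, b):
    # E(S) = -1/2 sum_ij w_ij S_i S_j - sum_i b_i S_i
    n = len(s)
    quad = sum(w[i][j] * s[i] * s[j] for i in range(n) for j in range(n))
    return -0.5 * quad - sum(bi * si for bi, si in zip(b, s))

def find_a_clique(w, b, rng):
    # Serial dynamics: S_i <- 1 if u_i > 0 else 0, one node at a time.
    n = len(b)
    s = [0] * n
    changed = True
    while changed:
        changed = False
        for _ in range(n):                       # roulette-selected updates
            i = roulette(b, rng)
            u = sum(w[i][j] * s[j] for j in range(n)) + b[i]
            new = 1 if u > 0 else 0
            changed |= new != s[i]
            s[i] = new
        for i in range(n):                       # deterministic sweep: exit
            u = sum(w[i][j] * s[j] for j in range(n)) + b[i]
            new = 1 if u > 0 else 0              # only from a stable state
            changed |= new != s[i]
            s[i] = new
    return [i for i in range(n) if s[i]]
```

With b_i = 1/2, a node can turn on only when every currently active node is one of its neighbors, so the active set is always a clique, every state change lowers E, and the stable states are exactly the maximal cliques.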

TWO PARALLEL
IMPLEMENTATIONS


Flowchart (main loop)

for r = 1 to Restart:
    call find_a_clique()
    if r < Restart: call Normalize_bias() and continue with the next restart
    if r = Restart: done

Flowchart (find_a_clique)

repeat:
    pick a node and apply the update rule
    K = 0? yes: return
    otherwise: continue
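The restart loop in the flowchart can be sketched as follows. Everything here is a hedged reconstruction: the greedy growth inside find_a_clique() stands in for a full network convergence, and the reward-then-rescale rule in normalize_bias() is a plausible guess at the adaptation step, not the paper's exact formula.

```python
import random

def roulette_order(b, rng):
    # Visit order produced by repeated roulette-wheel draws without replacement.
    idx, wts, order = list(range(len(b))), list(b), []
    while idx:
        x = rng.random() * sum(wts)
        acc = 0.0
        for k in range(len(idx)):
            acc += wts[k]
            if x <= acc:
                break
        order.append(idx.pop(k))
        wts.pop(k)
    return order

def find_a_clique(adj, b, rng):
    # Stand-in for one network convergence: grow a clique greedily,
    # visiting vertices in bias-weighted roulette order.
    clique = []
    for v in roulette_order(b, rng):
        if all(u in adj[v] for u in clique):
            clique.append(v)
    return clique

def normalize_bias(b, mean=0.5):
    # Rescale so the average bias returns to `mean` (assumed rule).
    s = sum(b) / len(b)
    for i in range(len(b)):
        b[i] *= mean / s

def solve(adj, restarts, rng, adapt=0.1):
    # Main loop from the flowchart: r runs from 1 to Restart.
    b = [0.5] * len(adj)
    best = []
    for r in range(1, restarts + 1):
        clique = find_a_clique(adj, b, rng)
        if len(clique) > len(best):
            best = clique
        if r < restarts:                 # r < Restart: adapt and go again
            for v in clique:             # reward members of the last clique
                b[v] += adapt
            normalize_bias(b)
    return best                          # r = Restart: done
```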

Fine-grain Kestrel Implementation

Kestrel is a 512-PE linear SIMD array on a single PCI board for NT/Linux/OSF platforms

A "classic" fine-grain implementation, with one network node per PE

The largest graph that can be solved with the current Kestrel system has 512 vertices
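One way to see the fine-grain mapping is to decompose a single node update into the primitives a linear SIMD array provides. In this sketch the PE array is emulated by a Python list, and the comments mark which operation each PE performs in lockstep; it is an illustration of the mapping, not Kestrel code.

```python
def fine_grain_step(w, s, b, i):
    # PE j holds its state bit s[j] and the weight w[i][j] it needs this step.
    partials = [w[i][j] * s[j] for j in range(len(s))]  # one multiply per PE, in lockstep
    u = partials and sum(partials) + b[i]               # global sum-reduction over the array
    s[i] = 1 if u > 0 else 0                            # result written back to PE i
    return s
```

Running one pass over the nodes of a 4-vertex graph with edges (0,1), (0,2), (1,2), (2,3) activates the triangle {0, 1, 2} and leaves vertex 3 off, since vertex 3 has active non-neighbors.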



“SIMD Phase Programming Model” MasPar Implementation

The MasPar is a SIMD bi-dimensional array with toroidal wraparound, composed of 1K, 2K, 4K, 8K or 16K PEs

The amount of local memory in this system (64 KB per PE) bounds the largest graph that can be handled

Adaptation can also be performed locally
within each PE, but global adaptation has
proved better for a small number of restarts
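A coarse sketch of one SPPM phase, under stated assumptions: the greedy routine stands in for the full serial run each PE performs, and the +0.1 reward is an illustrative adaptation rule. Every PE runs the whole algorithm on its own copy of the graph, a global reduction picks the best clique of the phase, and all PEs then adapt their biases toward it (global adaptation).

```python
import random

def greedy_clique(adj, b, rng):
    # Stand-in for one complete serial convergence run by a single PE.
    order = sorted(range(len(b)), key=lambda v: b[v] + rng.random(), reverse=True)
    c = []
    for v in order:
        if all(u in adj[v] for u in c):
            c.append(v)
    return c

def sppm_phase(adj, n_pe, b, seed=0):
    # One SPPM phase: each PE executes the entire algorithm independently.
    cliques = [greedy_clique(adj, b, random.Random(seed + pe))
               for pe in range(n_pe)]          # conceptually in parallel
    best = max(cliques, key=len)               # global max-reduction
    for v in best:                             # global adaptation: every PE
        b[v] += 0.1                            # rewards the winning clique
    return best
```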

Results

            Kestrel     MasPar          Serial
Runs at     20 MHz      12.5 MHz        143 MHz
PEs         512         4096            1
RAM                     64 KB per PE    256 MB


Conclusions

The MasPar SPPM implementation offers a more
flexible approach because all available PEs can be
active and contribute to the solution at the same
time

The maximum clique problem has applications in
many fields, and its parallel neural implementation
can extend the scale of problems that can be
efficiently solved


Thank you for your attention

Q & A