Parallel Processing







1. Literature Review

1.1 High-Throughput Computing

For many experimental scientists, scientific progress and quality of research are strongly linked to computing throughput. They are concerned with how many floating-point operations per week or per month they can extract from their computing environment, rather than with the number of such operations the environment can provide per second or per minute. High Performance Computing (HPC) environments deliver a tremendous amount of computational power over a short period of time, and are often measured in terms of Floating Point Operations Per Second (FLOPS). High Throughput Computing (HTC), on the other hand, can deliver large amounts of processing capacity over very long periods of time. The key to HTC is effective management and exploitation of all available computing resources (Livny M., 1997). This idea is central to Grid Computing.

1.1.1 Grid Computing

The concept of grid computing started as a project to link supercomputing sites, but it has now grown far beyond its original intent. In fact, there are several applications that can benefit from the grid infrastructure, including collaborative engineering, data exploration, high throughput computing, and of course distributed supercomputing (Baker M., Buyya R. and Laforenza D., 2000).

A grid can be defined as a large-scale geographically distributed hardware and software infrastructure composed of heterogeneous networked resources owned and shared by multiple administrative organizations, which are coordinated to provide transparent, dependable, pervasive and consistent computing support to a wide range of applications. These applications can perform distributed computing, high throughput computing, on-demand computing, data-intensive computing, collaborative computing or multimedia computing (Bote-Lorenzo M.L., Dimitriadis Y.A. and Sánchez E.G., 2004).

Computational grids are becoming attractive and promising platforms for solving large-scale problems of multi-institutional interest (Buyya R., Giddy J. and Abramson D., 2000). The Condor platform, for example, allows users to harness multi-domain resources as if they all belonged to one personal domain. The user defines the tasks to be executed; Condor handles all aspects of discovering and acquiring appropriate resources, regardless of their location; initiating, monitoring, and managing execution on those resources; detecting and responding to failure; and notifying the user of termination. The result is a powerful tool for managing a variety of parallel computations in grid environments (Frey J., Tannenbaum T., Livny M., Foster I. and Tuecke S., 2002).

1.2 CONDOR

Condor is a software system that creates a High-Throughput Computing (HTC) environment. It is a specialized batch system for managing compute-intensive jobs. Like most batch systems, Condor provides a queuing mechanism, a scheduling policy, a priority scheme, and resource classifications. Users submit their compute jobs to Condor; Condor puts the jobs in a queue, runs them, and then informs the user as to the result. Condor's uniqueness lies in its ability to effectively utilize non-dedicated computers to run jobs: it can recognize idle computers according to keyboard activity, load average, active telnet users, etc. (Condor Team, 2008).

Figure xx: The Condor Kernel


1.2.1 ClassAds and Matchmaking

Condor provides powerful resource management by matchmaking resource owners with resource consumers. This is the cornerstone of a successful HTC environment. Other compute cluster resource management systems attach properties to the job queues themselves, resulting in user confusion over which queue to use as well as administrative hassle in constantly adding and editing queue properties to satisfy user demands. Condor instead implements ClassAds, a clean design that simplifies the user's submission of jobs (Condor Team, 2008).

The basic idea of matchmaking is simple: entities which provide or require a service advertise their characteristics and requirements in classified advertisements (ClassAds). A designated matchmaking service (the matchmaker) matches ClassAds (Raman R., Livny M. and Solomon M., 2000).

All machines in the Condor pool advertise their resource properties, both static and dynamic, such as available RAM, CPU type, CPU speed, virtual memory size, physical location, and current load average, in a resource offer ad. A user specifies a resource request ad when submitting a job. The request defines both the required and the desired properties of the resource that is to run the job. Condor acts as a broker by matching and ranking resource offer ads with resource request ads, making certain that all requirements in both ads are satisfied. During this matchmaking process, Condor also considers several layers of priority values: the priority the user assigned to the resource request ad, the priority of the user who submitted the ad, and the desire of machines in the pool to accept certain types of ads over others (Condor Team, 2008).

Figure 1: ClassAd describing a workstation

Figure 2: ClassAd describing a submitted job (Raman R., Livny M. and Solomon M., 2000)
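Since the original figures are not reproduced here, the following is a minimal sketch of what such a pair of ClassAds looks like, closely following the examples in the Condor User Manual; all machine names and attribute values are illustrative, not taken from the original figures.

    Resource offer ad (workstation):

        MyType       = "Machine"
        TargetType   = "Job"
        Machine      = "lab-pc-07.example.edu"
        Arch         = "INTEL"
        OpSys        = "LINUX"
        Memory       = 512
        KeyboardIdle = 1193
        LoadAvg      = 0.04
        Requirements = LoadAvg <= 0.3 && KeyboardIdle > 15 * 60

    Resource request ad (submitted job):

        MyType       = "Job"
        TargetType   = "Machine"
        Owner        = "student"
        Cmd          = "/home/student/eeg_filter"
        Requirements = Arch == "INTEL" && OpSys == "LINUX" && Memory >= 64
        Rank         = Memory

The matchmaker pairs two such ads only if each ad's Requirements expression evaluates to true against the other ad; the job's Rank expression then prefers, among all matching machines, the one with the most memory.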

1.2.2 Pool Architecture:

Every machine in a Condor pool can serve a variety of roles, and most machines serve more than one role simultaneously. Certain roles can only be performed by a single machine in the pool. These roles are:



Central Manager
There is only one central manager in a Condor pool. This machine collects information and acts as the negotiator between resources and resource requests, performing the match between them. It should be a reliable machine: if it crashes, no further matchmaking can be performed within the Condor system.


Execute
An execute machine is responsible for executing jobs. Any machine in the Condor pool (including the central manager) can be configured to execute Condor jobs. Being an execute machine may require disk space; however, if there isn't much disk space, Condor will simply limit the size of the core file that a remote job may drop. In general, the more resources a machine has (swap space, real memory, CPU speed, etc.), the larger the resource requests it can serve.


Submit
Any machine (including the central manager) can be configured to allow jobs to be submitted into the Condor pool. Every job submitted into the pool generates another process on the submit machine, so if lots of jobs are running, the submit machine will need a fair amount of swap space and/or real memory. A machine can be both a submit and an execute machine.


Checkpoint Server
There is only one checkpoint server machine in a Condor pool. The checkpoint server is a centralized machine that stores all the checkpoint files for the jobs submitted in the pool. This machine should have lots of disk space and a good network connection to the rest of the pool, as the traffic can be quite heavy. (Condor Team, 2008)


Figure xx: Condor Pool Architecture

1.2.3 The Condor Daemons

Condor's daemons implement the functions of the machine roles described above. A daemon is a computer program that runs in the background. The major daemons in Condor are:


Condor Master
This daemon runs on each machine in the Condor pool and is responsible for keeping all the other Condor daemons running. If a daemon crashes or needs to be updated, the master will restart it.


Condor Startd
This daemon must run on any machine that executes jobs. It represents a given machine to the Condor pool and advertises attributes about the machine it is running on.


Condor Starter
This daemon is started by the startd every time a job needs to be executed. It sets up the execution environment and monitors the job. Once the job is completed, the daemon sends status information back to the submitting computer and exits.


Condor Schedd
This daemon is responsible for submitting jobs to Condor and must run on any machine that submits jobs. It manages the job queue (each submit machine has one!).



Condor Shadow
This daemon runs on the machine where a given request was submitted and acts as the resource manager for the request. In addition, the shadow is responsible for making decisions about the request, such as where checkpoint files should be stored and how certain files should be accessed.


Condor Collector
This daemon runs only on the Condor server (the central manager) and is responsible for collecting all the information about the status of a Condor pool. All other daemons periodically send updates to the collector.


Condor Negotiator
This daemon is responsible for all the matchmaking within the Condor system and runs only on the Condor server. At a configurable interval, the negotiator begins a negotiation cycle: it gathers all the information about resources from the collector, and then obtains from each schedd information about jobs that need to be processed. The negotiator then matches resources with requests while considering user priorities. The more resources a given user has already claimed, the lower that user's priority to acquire more. If a user with a better priority has jobs waiting to run while resources are claimed by a user with a worse priority, the negotiator can preempt such a resource and match it with the user with the better priority.

Figure xx: Typical Condor pool with the daemons running


1.2.4 Steps in running a job using Condor


Code Preparation.
A job run under Condor must be able to run as a background batch job: Condor runs the program unattended and in the background. Condor can redirect console output (stdout and stderr) and keyboard input (stdin) to and from files for you.

The Condor Universe.
Condor has several runtime environments (called universes) from which to choose. Of these, two are likely choices when learning to submit a job to Condor: the standard universe and the vanilla universe.

The standard universe allows a job running under Condor to handle system calls by returning them to the machine where the job was submitted; it provides checkpointing and remote system calls. To use the standard universe, it is necessary to relink the program with the Condor library using the condor_compile command (illustrated after this list).

The vanilla universe provides a way to run jobs that cannot be relinked. There is no way to take a checkpoint or migrate a job executed under the vanilla universe. For access to input and output files, jobs must either use a shared file system or use Condor's file transfer mechanism.

There is also the scheduler universe, which allows a Condor job to be submitted and executed under different assumptions about the execution conditions of the job. The job does not wait to be matched with a machine; instead it executes right away, on the machine where it was submitted, and is never preempted.


Submit description file.
The details of a job submission are controlled by a submit description file. The file contains information about the job such as which executable to run, the files to use for keyboard and screen data, the platform type required to run the program, and where to send e-mail when the job completes. You can also tell Condor how many times to run a program; it is simple to run the same program multiple times with multiple data sets. (A sample file is sketched after this list.)


Submit the Job.
Submit the program to Condor. When your program completes, Condor will tell you (by e-mail, if preferred) the exit status of your program and various statistics about its performance, including time used and I/O performed. If you are using a log file for the job (which is recommended), the exit status will be recorded in the log file.


(Condor Team, 2008)
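To make the last two steps concrete, here is a minimal sketch; the syntax follows the Condor User Manual, but the program name (eeg_filter) and data files (patient0.dat ... patient9.dat) are hypothetical. A program intended for the standard universe is first relinked with the Condor library:

    condor_compile cc -o eeg_filter eeg_filter.c

A job that cannot be relinked can instead be described for the vanilla universe in a submit description file such as:

    # eeg_job.sub -- run eeg_filter on ten data sets
    universe                = vanilla
    executable              = eeg_filter
    arguments               = patient$(Process).dat
    transfer_input_files    = patient$(Process).dat
    should_transfer_files   = YES
    when_to_transfer_output = ON_EXIT
    output                  = result.$(Process)
    error                   = err.$(Process)
    log                     = eeg_job.log
    queue 10

The job is then submitted with condor_submit eeg_job.sub; the $(Process) macro expands to 0 through 9, so one process is queued per data set, and each exit status is recorded in eeg_job.log.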

1.2.5 Problem Solvers

A problem solver is a higher-level structure built on top of the Condor agent. Two problem solvers are provided with Condor: Master-Worker and the Directed Acyclic Graph Manager. Each provides a unique programming model for managing large numbers of jobs.


Master-Worker
Master-Worker (MW) is a system for solving a problem of indeterminate size on a large and unreliable workforce. The MW model is well suited for problems such as parameter searches, where large portions of the problem space may be examined independently, yet the progress of the program is guided by intermediate results.

The master itself contains three components: a work list, a tracking module, and a steering module. The work list is simply a record of all outstanding work the master wishes to be done. The tracking module accounts for remote worker processes and assigns them uncompleted work. The steering module directs the computation by examining results, modifying the work list, and communicating with Condor to obtain a sufficient number of worker processes.


DAGMan
The Directed Acyclic Graph Manager (DAGMan) is a service for executing multiple jobs with dependencies in a declarative form. DAGMan accepts a declaration that lists the work to be done with constraints on the order. It does not depend on the file system to record a DAG's progress.

In DAGMan, a JOB statement associates an abstract name (A) with a file (a.condor) that describes a complete Condor job. A PARENT-CHILD statement describes the relationship between two or more jobs. In the script sketched below, jobs B and C may not run until A has completed, while jobs D and E may not run until C has completed. Jobs that are independent of each other may run in any order and possibly simultaneously (Thain D., Tannenbaum T. and Livny M., 2004).


Figure xx: Structure of a Master-Worker Program

Figure xx: A Directed Acyclic Graph
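To make the JOB and PARENT-CHILD notation concrete, a DAG input file expressing exactly the dependencies described above might look as follows; this is a minimal sketch in the syntax of the Condor User Manual, with illustrative submit file names (a.condor, etc.):

    # B and C wait for A; D and E wait for C;
    # B runs independently of C, D and E.
    JOB A a.condor
    JOB B b.condor
    JOB C c.condor
    JOB D d.condor
    JOB E e.condor
    PARENT A CHILD B C
    PARENT C CHILD D E

Such a file is submitted with condor_submit_dag; DAGMan itself then runs as a scheduler-universe job on the submit machine and submits each node's job once its parents have completed.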


1.2.6 Condor at Ben Gurion University

Condor was first installed at Ben Gurion University (BGU) in 1998 by Dr. Guy Tel-Tzur. The installation was done on the Electrical Engineering (EE) cluster which, at the time, was made up of 300 MHz processors. During 2006, the Condor environment grew from 60 to 124 computers. This was accomplished by adding computers from the departments of Industrial Engineering (IE), Nuclear Engineering, and Physics, with the majority of the computers coming from IE and EE.

Recently the IE labs were upgraded and therefore no longer have Condor installed on them. The Condor pool is used by a few researchers in the EE department; awareness of its existence in the IE department is very low.

1.2.7 Implementing Condor in a hospital


1.3 Measures for parallel processing




1.4 Introduction to EEG

Electroencephalography (EEG) is the recording of brain electrical activity (Misulis, 1993). This neural activity of the human brain starts between the 17th and 23rd week of prenatal development and continues throughout life. It is believed that the electrical signals generated by the brain represent brain function and the status of the rest of the body (Sanei and Chambers, 2007).


Figure xx: An early EEG recording done by Berger

According to Sanei and Chambers (citing Caton, 1875 and Walter, 1964), Carlo Matteucci (1811-1868) and Emil Du Bois-Reymond (1818-1896) were the first people to register the electrical signals emitted from muscle nerves using a galvanometer, and they established the concept of neurophysiology. Following their discovery, other scientists explored the EEG. The discoverer of the existence of human EEG signals was Hans Berger (1873-1941), who began his studies of human EEGs in 1920 (cited from Massimo, 2004). Berger's first report, of 1929, included the alpha rhythm as the major component of the EEG signals (cited from Grass and Gibbs, 1938).

EEG research has brought continual development of clinical, experimental and computational studies for the discovery, recognition, diagnosis, and treatment of a vast number of neurological and physiological abnormalities of the brain and the rest of the central nervous system (CNS) of human beings. Nowadays, EEG is recorded both invasively and noninvasively using fully computerized systems.

1.4.1 Source of EEG Activity

The EEG is generated by changes in the electrical charge of the membranes of cortical nerve cells. These nerve cells, like other nerve cells, have a resting potential, which is a difference in electrical potential between the interior of the cell and the extracellular space. The resting potential fluctuates as a result of impulses arriving from other neurons at contact points, or synapses, located on the cell body, which manages the overall cell function and maintenance, and on the dendrites, which receive signals from other neurons. Such impulses generate local postsynaptic potentials. These changes may reduce the membrane potential to a critical level at which the membrane loses its charge completely, generating an action potential of brief duration which is propagated along the axon, which in turn transmits the signal to another area. The fluctuations in the surface EEG are produced mainly by temporal and spatial summation of electrical currents caused by the relatively slow postsynaptic potentials, with little or no contribution from the brief action potentials (Fisch B.J., 1991).


Figure xx (right): Information flow between cells

Figure xx (above): Neurons beneath the cortex. Left: positive potential at the scalp resulting from excitatory inputs to cortical neurons, predominantly in layer 4 of the cortex. Right: negative potential at the scalp resulting from excitatory inputs from callosal neurons in the contralateral cortex, which terminate in the superficial cortical layers (Barlow, 1993, cited from Martin, 1991).

Further expansion and illustration on the aforementioned can be found in Appendix 1 - THE CELLULAR STRUCTURE OF THE VISUAL CORTEX.

1.4.2 Action Potential:

The information transmitted by a nerve is called an action potential (AP). An AP is a temporary change in the membrane potential that is transmitted along the axon. APs are caused by the exchange of ions across the neuron membrane, and the AP of most nerves lasts between 5 and 10 milliseconds.

The membrane potential depolarizes [phase 2 in Fig. xx], becoming more positive and producing a spike. After the spike reaches its peak, the membrane repolarizes [phase 3 in Fig. xx], becoming more negative. The potential then becomes more negative than the resting potential [phase 4 in Fig. xx], and finally returns to the normal resting state [phase 1 in Fig. xx] (Sanei and Chambers, 2007).

Figure xx: Membrane potential during an action potential

1.4.3 Clinical use:

According to Sanei and Chambers, EEG paves the way for the diagnosis of many neurological disorders and other abnormalities in the human body. It may be used to investigate the following clinical problems:

Monitoring alertness, coma, and brain death.

Locating areas of damage following head injury, stroke and tumor.

Controlling anesthesia depth.

Investigating epilepsy and locating seizure origin.

Investigating sleep disorders and physiology.

Investigating mental disorders.

This list is not complete, but it confirms the potential of, and motivates, EEG analysis to aid clinicians in their interpretation.

1.4.4 Brain Rhythms:

Figure xx: Characteristics of wave forms

To analyze an EEG, the reader needs to distinguish: 1. wave form; 2. repetition; 3. frequency; 4. amplitude; 5. distribution; 6. phase relation; 7. timing; 8. persistence; and 9. reactivity. These features allow the EEG reader to recognize different patterns, as shown in Fig. xx (Fisch B.J., 1991).


Wave forms:
Any change between two recording electrodes is called a wave, regardless of its form. Any wave or sequence of waves is called activity. Regular waves have a fairly uniform appearance [parts 1-2 in Fig. xx]. Irregular waves have uneven shape and duration [part 4 in Fig. xx].


Repetition
Repetition of waves may be rhythmical or arrhythmical. Rhythmical repetitive waves have similar intervals between individual waves [parts 1-3 in Fig. xx]. Arrhythmical repetitive waves are characterized by variable, irregular intervals between individual waves [part 4 in Fig. xx].


Frequency
Frequency refers to the number of times a repetitive wave recurs in one second. The frequency of a repetitive wave can be determined by measuring the duration of an individual wave, the wavelength [part 1 in Fig. xx], and calculating the reciprocal (a worked example follows this list). Single waves and complexes may repeat at intervals longer than the wavelength; such waves are called 'periodic', the period being the time interval between them [part 5 in Fig. xx].


Amplitude
EEG amplitude is measured in microvolts [µV]. It is determined by measuring and comparing the total vertical distance of a wave [part 1 in Fig. xx] to the height of a calibration signal recorded at the same gain and filter settings.


Distribution
Distribution refers to the occurrence of electrical activity recorded by electrodes positioned over different parts of the head.


Phase relation
Phase relation refers to the timing and polarity of the components of waves in one or more channels. Waves of the same frequency may occur in different channels such that the troughs and peaks occur at the same time; these waves are said to be in phase, otherwise they are said to be out of phase.


Timing
The timing of waves in different areas of the head may be similar or different. Waves which occur at the same time on both sides of the head are called 'bilaterally synchronous'. Waves which occur in different channels without a constant time relation to each other are called 'asynchronous'.


Persistence
Persistence describes how often a wave occurs during a recording.


Reactivity
Reactivity refers to changes which can be produced in some normal and abnormal patterns by various maneuvers. (Fisch B.J., 1991)
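As a worked example of the frequency feature above (the numbers are illustrative, not taken from the figure): the frequency f of a repetitive wave is the reciprocal of its measured wavelength T,

    f = 1 / T,   e.g.  T = 100 ms  =>  f = 1 / 0.1 s = 10 Hz,

which falls in the alpha band described in the next subsection.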

1.4.5 Major wave patterns:

This paper focuses briefly on five wave patterns: delta (δ), theta (θ), alpha (α), beta (β) and gamma (γ).


Delta (δ), < 4 Hz
Delta waves range from 0.5 to 3-4 Hz in frequency and 100 to 200 µV in amplitude. They are observed when individuals are in deep sleep or in a coma.


Theta (θ), 4-7 Hz
Theta waves have a frequency range from 3-4 to 7-8 Hz and an amplitude of 50 to 100 µV. They are associated with memory, emotions, and activity in the limbic system.


Alpha (α), 8-13 Hz
Alpha waves have a frequency range from 8 to 12 Hz and an amplitude of 30 to 50 µV. They are typically found in people who are awake but have their eyes closed and are relaxing or meditating.


Beta (β), 13-22 Hz
Beta waves have a frequency range from 13-15 to 22 Hz and an amplitude of up to 20 µV. Beta waves are the ones registered on an EEG when the subject is awake, alert, and actively processing information; they are normally associated with wakefulness and active concentration.


Gamma (γ), 22-30+ Hz
Gamma waves have a frequency range from 22 Hz to 30+ Hz and an amplitude of up to 2 µV. They are usually associated with higher mental activity: perception, problem solving, fear, and finger movements. (Misulis, 1993)


Figure xx: Typical EEG patterns

1.4.6 EEG recording and measurement

According to Teplan, an EEG measurement and recording system has four components:

Electrodes with conductive media

A set of differential amplifiers (one for each channel) with filters

An A/D converter

A recording device

The electrodes read the signal from the head surface, the amplifiers bring the microvolt signals into a range where they can be digitized accurately, the converter changes the signals from analog to digital form, and a personal computer (or other relevant device) stores and displays the obtained data.

A simple calculation shows that for a one-hour recording of 128-electrode EEG signals sampled at 500 samples/s, a memory size of 128 x 60 x 60 x 500 x 16 ≈ 3.68 Gbits ≈ 0.45 Gbyte is required (the arithmetic is spelled out below). So for more patients and more recording time there should be sufficient storage facilities. EEG recording formats are easily convertible to spreadsheets readable by most signal processing software packages such as MATLAB (Sanei and Chambers, 2007).
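Spelling out the arithmetic above (16 bits per sample is the assumption made in the cited example):

    128 channels x 3600 s x 500 samples/s x 16 bits/sample
        = 3.6864 x 10^9 bits
        = 4.608 x 10^8 bytes  ≈  0.43-0.46 Gbyte,

depending on whether binary (2^30) or decimal (10^9) gigabytes are used, consistent with the ≈ 0.45 Gbyte quoted above.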


1.5 MATLAB





2. References

2.1 Part 1.1:

Livny M., Basney J., Raman R. and Tannenbaum T. (1997). Mechanisms for High Throughput Computing. Department of Computer Science, University of Wisconsin-Madison.

Baker M., Buyya R. and Laforenza D. (2000). The Grid: International Efforts in Global Computing. Intl. Conference on Advances in Infrastructure for Electronic Business, Science, and Education on the Internet (SSGRR'2000), Italy.

Buyya R., Giddy J. and Abramson D. (2000). An Evaluation of Economy-based Resource Trading and Scheduling on Computational Power Grids for Parameter Sweep Applications. School of Computer Science and Software Engineering and CRC for Enterprise Distributed Systems Technology, Monash University, Caulfield Campus, Melbourne, Australia.


Bote-Lorenzo M.L., Dimitriadis Y.A. and Sánchez E.G. (2004). Grid Characteristics and Uses: a Grid Definition. In Lecture Notes in Computer Science, Grid Computing, Springer, Berlin, Volume 2970/2004, 291-298.

Frey J., Tannenbaum T., Livny M., Foster I. and Tuecke S. (2002). Condor-G: A Computation Management Agent for Multi-Institutional Grids. In Cluster Computing, Springer, Netherlands, Volume 5, Number 3, 237-246.

Condor Team (2008). Condor User Manual, Version 7.0.5. Computer Sciences Department, University of Wisconsin, Madison. http://www.cs.wisc.edu/condor/manual/v7.0/ref.html

Raman R., Livny M. and Solomon M. (2000). Resource Management through Multilateral Matchmaking. University of Wisconsin, Madison.

Thain D., Tannenbaum T. and Livny M. (2004). Distributed Computing in Practice: The Condor Experience. University of Wisconsin, Madison.

2.2 Part 1.4:


Misulis K.E. (1993). Essentials of Clinical Neurophysiology. Boston, Mass.: Butterworth-Heinemann.

Sanei S. and Chambers J.A. (2007). EEG Signal Processing. Hoboken, NJ: John Wiley & Sons.

Caton R. (1875). The electric currents of the brain. Br. Med. J., 2, 278.

Walter W.G. (1964). Slow potential in human brain associated with expectancy, attention and decision. Arch. Psychiat. Nervenkr., 206, 309-322.

Massimo A. (2004). In memoriam Pierre Gloor (1923-2003): an appreciation. Epilepsia, 45(7), 882.

Grass A.M. and Gibbs F.A. (1938). A Fourier transform of the electroencephalogram. J. Neurophysiol., 1, 521-526.

Barlow J.S. (1993). The Electroencephalogram: Its Patterns and Origins. Cambridge, Mass.: MIT Press, p. 148.

Martin J.H. (1991). The collective electrical behavior of cortical neurons: the electroencephalogram and the mechanisms of epilepsy. In Kandel E.R., Schwartz J.H. and Jessell T.M. (Eds.), Principles of Neural Science, Prentice Hall International, London, pp. 777-791.

Fisch B.J. (1991). Spehlmann's EEG Primer. Amsterdam: Elsevier.

Teplan M. (2002). Fundamentals of EEG Measurement. Measurement Science Review, Volume 2, Section 2.



3. APPENDIX

3.1 APPENDIX 1: THE CELLULAR STRUCTURE OF THE VISUAL CORTEX


The primary visual cortex is the first relay in the visual pathways where information from the two eyes is combined. In other words, a single cell in this cortex may respond just as much to stimuli presented to one eye as to those presented to the other.

In the visual cortex, the cell bodies of the neurons are divided into six layers that typify the primate neocortex. In this thin envelope of grey matter, about 2 mm thick, the six layers are numbered from I to VI, in Roman numerals, starting from the outside (the layer in contact with the meninges). Each layer is distinguished both by the type of neurons that it contains and by the connections that it makes with other areas of the brain.

Layer IV, for example, contains numerous stellate cells, small neurons with dendrites that radiate out around the cell body and receive connections from the lateral geniculate nucleus. Thus this layer specializes largely in receiving information.

Pyramidal cells are found in several layers of the visual cortex and are the only type of neurons that project axons outside it. Each pyramidal cell has one large dendrite, called the apical dendrite, that branches upward into the higher layers of the cortex, and other dendrites that emerge from the base of the cell. Of course, each pyramidal cell also has an axon, which may be very long so as to reach distant areas of the brain. Layers III, V, and VI contain large numbers of pyramidal cells and consequently serve as output pathways for the visual cortex.

Layer I contains very few neurons; it is composed of axons and dendrites from cells in the other layers. With the development of improved staining methods, some of the six layers in the visual cortex have now been classified into sub-layers.

All cited from http://thebrain.mcgill.ca/flash/index_d.html -
http://thebrain.mcgill.ca/flash/d/d_02/d_02_cl/d_02_cl_vis/d_02_cl_vis.html