
OPERATING SYSTEMS


Text Book:

1. A. Silberschatz and P. B. Galvin, "Operating System Concepts," Fifth Edition.

Reference Books:

1. H. M. Deitel, "An Introduction to Operating Systems."
2. S. E. Madnick and J. J. Donovan, "Operating Systems."
3. A. S. Tanenbaum, "Modern Operating Systems."
4. C. J. Date, "An Introduction to Database Systems," Vol. II.



Chapter 1 Introduction


Operating System


We view an operating system as the program, implemented in either software or
firmware, that acts as an intermediary between the user and the hardware: it
makes the hardware usable and manages the system's resources.

a. The primary goal is thus to make the computer system convenient to use.

b. A secondary goal is to use the computer hardware in an efficient manner.


Device Driver


A special subroutine written for each I/O device to perform its I/O. The driver
knows how the buffers, flags, registers, control bits, and status bits of that
particular device should be used.
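
As a rough illustration (a hypothetical sketch in C, not taken from any real
kernel; the dev_ops struct and the uart_* names are made up), a driver can be
viewed as a table of routines that the OS calls for a particular device, and
only these routines know how the device is operated:

    #include <stddef.h>
    #include <stdio.h>

    /* Hypothetical per-device operations table. */
    struct dev_ops {
        int  (*open)(int minor);
        long (*read)(int minor, void *buf, size_t len);
        long (*write)(int minor, const void *buf, size_t len);
    };

    /* Stub "UART" driver: a real one would poke device registers and test
     * status/control bits; here standard I/O stands in for the hardware. */
    static int uart_open(int minor) { (void)minor; return 0; }
    static long uart_read(int minor, void *buf, size_t len) {
        (void)minor; return (long)fread(buf, 1, len, stdin);
    }
    static long uart_write(int minor, const void *buf, size_t len) {
        (void)minor; return (long)fwrite(buf, 1, len, stdout);
    }

    static const struct dev_ops uart_driver = {
        uart_open, uart_read, uart_write
    };

    int main(void) {
        if (uart_driver.open(0) == 0)
            uart_driver.write(0, "hello from the driver table\n", 28);
        return 0;
    }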


1. Batch Processing
Jobs were gathered into groups, or batches, and each batch was then run as a
unit.


2. Turnaround Time
The interval between job submission and job completion.


3. On Line
The device is connected to the CPU, and the CPU controls it directly.


4. Off Line
The device is not connected to the CPU.




5. Multiprogramming
Several user programs are kept in main memory at once (simultaneously), and
the processor is switched rapidly between the jobs.


6. Throughput
Work processed per unit time (the number of processes that are completed per
time unit).


7. Spooling (Simultaneous Peripheral Operation On Line)
Spooling essentially uses the disk as a very large buffer: it reads as far
ahead as possible on input devices and stores output files until the output
devices are able to accept them.


8. Time Sharing
Uses CPU scheduling and multiprogramming to provide each user with a small
portion of the CPU time.


9. Response Time
The time from the submission of a request until the first response is
produced.


10. Real Time
The system must respond quickly and return the correct result within defined
time constraints.

a. Hard real time: guarantees that critical tasks complete on time.

b. Soft real time: a critical task gets priority over other tasks, and retains
that priority until it completes.


11. Multiprocessor Systems (Multiprocessing)
Several processors are used in a single computer system to increase its
processing power.

- Such systems are referred to as Tightly Coupled Systems.

- Graceful Degradation: the failure of one processor will not halt the system,
  but only slow it down.

- Fail Soft: a system that is designed for graceful degradation.

a. Symmetric Multiprocessing
Each processor runs an identical copy of the OS, and these copies communicate
with one another as needed.

b. Asymmetric Multiprocessing
Each processor is assigned a specific task. A master processor controls the
system; the other processors either look to the master for instruction or have
predefined tasks.


12. Distributed Systems (Computer Networks)
Users gain access to networks of geographically dispersed computers through
various types of communication lines.

- Such systems are referred to as Loosely Coupled Systems.



Chapter 2 Computer System Structures


1. Modern Computer System
The CPU and the device controllers can execute concurrently, competing for
memory cycles. A memory controller is provided to synchronize access to
memory.


2. Bootstrap Program
A program that loads the OS and starts it executing.


3. Interrupt
An interrupt is an event that alters the sequence in which a processor
executes instructions.


4. Polling
One master unit circuit checks the status of one or several slave units.


5. Interrupt Priority
A higher-priority interrupt will be taken even if a lower-priority interrupt
is active.


6. Trap
A trap is a software-generated interrupt caused either by an error or by a
specific request from a user program.


7. Synchronous I/O
After I/O is started, control is returned to the user process only when the
I/O completes.


8. Asynchronous I/O
Control is returned to the user program without waiting for the I/O to
complete.




9. DMA (Direct Memory Access)
The device controller transfers an entire block of data directly between its
own buffer and memory. The CPU's state is not saved; the CPU is merely delayed
(left idle for some memory cycles) while the DMA transfer completes.


10. Cycle Stealing
When an I/O device requests a memory access, it may cause a CPU request to be
delayed by one memory cycle.


11. Buffering
A buffer is an area of memory for holding data during I/O transfers.


12. Address Space
The set of all addresses available to a program.



Chapter 3 Operating System Structures


Process Management

a. The creation, deletion, suspension, and resumption of processes.

b. Provide mechanisms for:

- process synchronization

- process communication

- deadlock handling


1. Program
A sequence of instructions (a passive entity).


2. Process
A program in execution (an active entity).


3. Job
The collection of activities needed to do the work required.


Memory Management

a. Keep track of memory: which parts are in use and by whom.

b. Decide which process gets memory, when it gets it, and how much.

c. Allocate and deallocate memory.


Device Management

a. Keep track of the devices and control units.

b. Decide on an efficient way to allocate each device.

c. Allocate and deallocate devices.

- buffer-caching system

- device-driver interface

- drivers for specific hardware devices (device drivers)


Information Management

a. Creation and deletion of file systems.

b. Keep track of the information (location, usage, status).

c. Support of primitives for manipulating files and file systems.

d. Mapping of files onto secondary storage.


Networking


A distributed system is a collection of processors that do not share memory or
a clock.


4. Command Interpreter (Control Card Interpreter, Command Line Interpreter)
The high-level interface between the user and the OS.


5. System Call
The low-level interface between a user program and the OS.

a. Three general methods are used to pass parameters to a system call:

- in registers

- in a block of memory whose address is passed in a register

- on the stack

b. System calls can be grouped into five categories:

- process control

- file manipulation

- device manipulation

- information maintenance

- communication

A minimal example of the file-manipulation category appears below.
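
The sketch below (POSIX C on a Unix-like system; the file name is only an
example) issues the open, read, and close system calls through their C library
wrappers; the kernel receives the parameters in registers, in a memory block,
or on the stack, as listed above:

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        char buf[64];

        /* open(2): a file-manipulation system call; returns a descriptor. */
        int fd = open("/etc/hostname", O_RDONLY);
        if (fd < 0) {
            perror("open");
            return 1;
        }

        /* read(2): the parameters (fd, buffer address, count) are handed to
         * the kernel using one of the conventions listed above. */
        ssize_t n = read(fd, buf, sizeof buf - 1);
        if (n >= 0) {
            buf[n] = '\0';
            printf("read %ld bytes: %s", (long)n, buf);
        }

        close(fd);   /* release the descriptor */
        return 0;
    }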


6. Virtual Machine
The layered approach is taken to its logical conclusion in the concept of a
virtual machine. Each user program executes on what appears to be its own
processor (CPU scheduling is managed by the OS) with its own virtual memory,
and the system supports a spooling facility.

7. Mechanism
Determines how to do something.


8. Policies
Decide what will be done.


9. System Generation (Installation)
The system is configured for each specific computer site.




Chapter 4 Processes


1. Process
A program in execution. A process needs certain resources, such as CPU time,
memory, files, and I/O devices, to accomplish its task.
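
A minimal POSIX C sketch of process creation (assuming a Unix-like system):
fork creates a new process that executes the same program, and the child may
then switch to a different program with exec:

    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        pid_t pid = fork();          /* create a new process */
        if (pid == 0) {
            /* child: its own copy of the parent's address space */
            printf("child  pid=%d\n", (int)getpid());
            execlp("echo", "echo", "child runs a new program", (char *)NULL);
            _exit(1);                /* reached only if exec fails */
        } else if (pid > 0) {
            waitpid(pid, NULL, 0);   /* parent waits for the child */
            printf("parent pid=%d done\n", (int)getpid());
        }
        return 0;
    }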


2. Dispatch
Assign the CPU to a process.


3. Long Term Scheduler (Job Scheduler)
Selects processes from the disk pool and loads them into memory for execution.


4. Short Term Scheduler (CPU Scheduler)
Selects from among the processes that are ready to execute, and dispatches one
of them.


5. Medium Term Scheduler
Removes processes from memory.


6. Context Switch
Switching the CPU to another process. When an interrupt occurs, the OS saves
the state of the interrupted process, routes control to the appropriate
interrupt handler, and later loads the saved state of the next process to
execute.


7. I/O Bound Process
Spends more of its time doing I/O than computation.


8. CPU Bound Process
Spends more of its time doing computation than I/O.


9. Cascading Termination
If a process terminates, then all its children must also be terminated.


10. Cooperating Process
A process that can affect or be affected by the other processes executing in
the system.


11. Thread
A thread, or lightweight process (LWP), is a basic unit of CPU utilization and
consists of a program counter, a register set, and a stack space. It shares
its code section, data section, and operating-system resources with its peer
threads.
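
A minimal sketch using POSIX threads (compile with -pthread): the two threads
share the same data section, which is exactly what distinguishes them from
separate processes. The unsynchronized counter is deliberate, to make the
sharing (and the resulting race) visible:

    #include <pthread.h>
    #include <stdio.h>

    /* Both threads share this global variable (data section). */
    static int shared_counter = 0;

    static void *worker(void *arg) {
        (void)arg;
        for (int i = 0; i < 1000; i++)
            shared_counter++;        /* unsynchronized: peers share the data */
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %d\n", shared_counter);  /* may be < 2000: a race */
        return 0;
    }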


12. Interprocess Communication
IPC provides a mechanism that allows processes to communicate and to
synchronize their actions.
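
A small POSIX C sketch of one IPC mechanism, a pipe, used here between a
parent and a child process (assuming a Unix-like system):

    #include <stdio.h>
    #include <string.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        int fds[2];
        pipe(fds);                       /* fds[0]: read end, fds[1]: write end */

        if (fork() == 0) {               /* child: sends a message */
            close(fds[0]);
            const char *msg = "hello via IPC";
            write(fds[1], msg, strlen(msg) + 1);
            _exit(0);
        }

        close(fds[1]);                   /* parent: receives the message */
        char buf[32];
        ssize_t n = read(fds[0], buf, sizeof buf);
        if (n > 0)
            printf("parent received: %s\n", buf);
        wait(NULL);
        return 0;
    }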



Chapter 5 CPU Scheduling


1. Non-preemptive Process
After interrupt processing is complete, the interrupted process gets the CPU
again.


2. Preemptive Process
After interrupt processing is complete, the CPU may be switched to another
process.


3. Dispatch Latency
The time it takes for the dispatcher to stop one process and start another
running.


4. Waiting Time
The sum of the periods spent waiting in the ready queue.




5. Shortest Job First (SJF)
The job with the smallest estimated run time to completion is run next.

- Either non-preemptive or preemptive.

- Minimizes the average waiting time (an optimal algorithm).

- Difficult to implement, because there is no way to know the length of the
  next CPU burst; it can only be predicted.
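
A small worked example (the burst times are illustrative): for jobs that all
arrive at time 0, running them shortest-first gives a lower average waiting
time than running them in submission order:

    #include <stdio.h>
    #include <stdlib.h>

    static int cmp(const void *a, const void *b) {
        return *(const int *)a - *(const int *)b;
    }

    /* Average waiting time when jobs (all arriving at time 0) run in the
     * given order: each job waits for the sum of the bursts before it. */
    static double avg_wait(const int *burst, int n) {
        int elapsed = 0, total_wait = 0;
        for (int i = 0; i < n; i++) {
            total_wait += elapsed;
            elapsed += burst[i];
        }
        return (double)total_wait / n;
    }

    int main(void) {
        int fcfs[] = {6, 8, 7, 3};                 /* submission order */
        int sjf[]  = {6, 8, 7, 3};
        int n = 4;

        qsort(sjf, n, sizeof sjf[0], cmp);         /* SJF: shortest first */
        printf("FCFS average wait = %.2f\n", avg_wait(fcfs, n));   /* 10.25 */
        printf("SJF  average wait = %.2f\n", avg_wait(sjf, n));    /*  7.00 */
        return 0;
    }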


6. Preemptive SJF (Shortest Remaining Time First, SRTF)
The running process may be preempted by a new process with a shorter estimated
remaining run time.


7. Priority
The CPU is allocated to the process with the highest priority. Equal-priority
processes are scheduled in FCFS order.


8. Aging
Gradually increasing the priority of processes that wait in the system for a
long time.


9. Round Robin
Processes are dispatched FCFS but are given only a limited amount of CPU time
(a time quantum).

- If the time quantum is very large, RR behaves the same as FCFS.

- If the time quantum is very small, RR is called processor sharing: each of n
  processes appears to have its own processor running at 1/n the speed of the
  real processor.


10. Multilevel Queue
Partitions the ready queue into several separate queues.


11. Multilevel Feedback Queue
Like multilevel queue scheduling, but it additionally allows a process to move
between queues.


12. Multiple Processors
Homogeneous processors: Tightly Coupled System.
Heterogeneous processors: Loosely Coupled System.


13. Deterministic Model
Takes a particular predetermined workload and defines the performance of each
algorithm for that workload.



Chapter 6 Process Synchronization


1. Concurrent Processes
Processes that execute concurrently.


2. Asynchronous Concurrent Processes
Multiple processes are allowed to proceed at their own pace, independent of
one another.


3. Race Condition
Several processes access and manipulate shared resources concurrently, and the
outcome of the execution depends on the particular order in which the accesses
take place.


4. Mutual Exclusion
When a process manipulates shared resources, other processes must be excluded
from doing so simultaneously.


5. Critical Section
The procedural code that changes a set of shared resources is called the
critical section. When one process is executing in its critical section, no
other process is allowed to execute in its critical section.


6. Semaphore
A semaphore is a protected variable that can be operated upon only by the
synchronizing primitives P and V.
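
A minimal sketch using POSIX semaphores (sem_wait corresponds to P and
sem_post to V; compile with -pthread): the semaphore, initialized to 1,
enforces mutual exclusion on the shared counter:

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    static sem_t mutex;          /* binary semaphore protecting the counter */
    static int counter = 0;

    static void *worker(void *arg) {
        (void)arg;
        for (int i = 0; i < 100000; i++) {
            sem_wait(&mutex);    /* P: enter the critical section */
            counter++;
            sem_post(&mutex);    /* V: leave the critical section */
        }
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        sem_init(&mutex, 0, 1);  /* initial value 1 => mutual exclusion */
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %d\n", counter);   /* always 200000 */
        sem_destroy(&mutex);
        return 0;
    }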


7. Spin Lock
A lock that busy-waits; the busy waiting wastes CPU cycles.


8. Deadlock
Two or more processes are waiting indefinitely for an event that will never
occur.


9. Monitor
A monitor is an operating-system construct that contains both the data and the
procedures for handling process synchronization. It guarantees mutually
exclusive access to shared critical data, and it provides a convenient
mechanism for blocking and waking up processes. Many processes may want to
enter the monitor at various times, but mutual exclusion is rigidly enforced
at the monitor boundary.
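
Classic monitors are a language construct; the sketch below only approximates
one in C with a POSIX mutex and condition variable (compile with -pthread).
The mutex enforces exclusion at the "boundary" of the deposit/withdraw
procedures, and the condition variable blocks and wakes threads:

    #include <pthread.h>
    #include <stdio.h>

    /* Monitor-like structure: the data and the procedures that touch it are
     * grouped together and always entered under the same lock. */
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  not_empty = PTHREAD_COND_INITIALIZER;
    static int slot, full = 0;

    static void deposit(int v) {
        pthread_mutex_lock(&lock);
        slot = v;
        full = 1;
        pthread_cond_signal(&not_empty);   /* wake a waiting consumer */
        pthread_mutex_unlock(&lock);
    }

    static int withdraw(void) {
        pthread_mutex_lock(&lock);
        while (!full)
            pthread_cond_wait(&not_empty, &lock);  /* block inside the monitor */
        full = 0;
        int v = slot;
        pthread_mutex_unlock(&lock);
        return v;
    }

    static void *producer(void *arg) { (void)arg; deposit(42); return NULL; }

    int main(void) {
        pthread_t p;
        pthread_create(&p, NULL, producer, NULL);
        printf("withdrew %d\n", withdraw());
        pthread_join(p, NULL);
        return 0;
    }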


10. Transaction (Atomic Transaction)
A collection of instructions (operations) that performs a single logical
function.


11. Commit
Signals successful termination.


12. Abort
Signals unsuccessful termination.


13. Rollback
An aborted transaction must be restored to the state it was in just before it
started executing.


14. Write-Ahead Logging
The system maintains, on stable storage, a data structure called the log.
Every time a change is made to the data, a record containing the old and new
values of the changed item is written to the log. When a failure occurs, the
log is used to REDO or UNDO operations.


15. UNDO
Restores the values of all data updated by a transaction to the old values.


16. REDO
Sets the values of all data updated by a transaction to the new values.


17. Checkpoint
During execution, the system maintains the write-ahead log. In addition, it
periodically performs checkpoints:

Step 1: Output all log records currently residing in volatile storage onto
stable storage.

Step 2: Output all modified data residing in volatile storage onto stable
storage.

Step 3: Output a log record <checkpoint> onto stable storage.




18. Serial Schedule
Any execution of the transactions one at a time, in any order.


19. Serializability (Serializable Schedule)
The concurrent execution of transactions must be equivalent to executing those
transactions serially in some arbitrary order. Exclusive locking can be used
to enforce serializability, but it is too restrictive.


20. Conflict Serializable
If a schedule S can be transformed into a serial schedule T by a series of
swaps of nonconflicting operations, we say that S is conflict serializable.


21. Shared Lock
If a transaction has obtained a shared-mode lock on an item, the transaction
can read the locked item but cannot write it.


22. Exclusive Lock
If a transaction has obtained an exclusive-mode lock on an item, the
transaction can both read and write it.


23. Two-Phase Locking

a. Growing phase: a transaction may obtain locks, but may not release any
lock.

b. Shrinking phase: a transaction may release locks, but may not obtain any
new locks.

- The two-phase locking protocol ensures conflict serializability.

- It does not ensure freedom from deadlock.


24. Timestamp Ordering Scheme
Each transaction is assigned a unique identifier (timestamp), which can be
thought of as the transaction's start time.

a. W-timestamp(Q): the largest timestamp of any transaction that executed
write(Q) successfully.

b. R-timestamp(Q): the largest timestamp of any transaction that executed
read(Q) successfully.

- The timestamp-ordering protocol ensures conflict serializability.

- It ensures freedom from deadlock, because no transaction ever waits.





Chapter 7 Deadlock


1. Deadlock
The waiting processes will never again change state, because the resources
they have requested are held by other waiting processes. (Each process in the
set is waiting for an event that can be caused only by another process in the
set.)


2. Four necessary conditions for deadlock

a. Mutual Exclusion

b. Hold and Wait

c. No Preemption

d. Circular Wait


3. Methods for handling deadlocks

a. Deadlock prevention and deadlock avoidance: use a protocol to ensure that
the system will never enter a deadlocked state.

b. Deadlock detection and deadlock recovery: allow the system to enter a
deadlocked state and then recover.

c. Pretend that deadlocks never occur.


4. Deadlock Prevention
By ensuring that at least one of the four necessary conditions cannot hold, we
can prevent the occurrence of a deadlock.

a. Mutual Exclusion
This condition must hold for non-sharable resources. A process never needs to
wait for a sharable resource.

b. Hold and Wait
Require each process to request and be allocated all its resources before it
begins execution; alternatively, before it can request any additional
resources, it must release all the resources it currently holds.
Disadvantages:

- Resource utilization may be low, since many of the resources may be
  allocated but unused for a long period.

- Starvation is possible.

c. No Preemption
If a process that is holding some resources requests another resource that
cannot be immediately allocated to it, then all resources it currently holds
are preempted (released).

- This cannot generally be applied to resources such as printers and tapes
  (CPU registers and memory space are fine).

- Starvation is possible.

d. Circular Wait
Assign each resource type a unique integer number; a process may request
resources only in an increasing order of enumeration.


5. Deadlock Avoidance
A deadlock-avoidance algorithm dynamically examines the resource-allocation
state to ensure that a circular-wait condition can never arise.


6. Safe State
A state is safe if the system can allocate resources to each process in some
order and still avoid a deadlock. (Not all unsafe states are deadlocks.)


7. Banker's Algorithm
When a new process enters the system, it must declare the maximum number of
instances of each resource type that it may need.
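
A sketch of the safety check at the heart of the banker's algorithm (the
matrices below are illustrative sample data): it tries to find an order in
which every process can obtain its remaining need and finish:

    #include <stdbool.h>
    #include <stdio.h>

    #define P 5   /* processes      */
    #define R 3   /* resource types */

    static bool is_safe(int avail[R], int alloc[P][R], int need[P][R]) {
        int work[R];
        bool finished[P] = {false};
        for (int r = 0; r < R; r++) work[r] = avail[r];

        for (int done = 0; done < P; ) {
            bool progress = false;
            for (int p = 0; p < P; p++) {
                if (finished[p]) continue;
                bool can_run = true;
                for (int r = 0; r < R; r++)
                    if (need[p][r] > work[r]) { can_run = false; break; }
                if (can_run) {                       /* p can finish ...        */
                    for (int r = 0; r < R; r++)
                        work[r] += alloc[p][r];      /* ... and returns its resources */
                    finished[p] = true;
                    progress = true;
                    done++;
                }
            }
            if (!progress) return false;             /* no process can finish */
        }
        return true;
    }

    int main(void) {
        int avail[R]    = {3, 3, 2};
        int alloc[P][R] = {{0,1,0},{2,0,0},{3,0,2},{2,1,1},{0,0,2}};
        int need[P][R]  = {{7,4,3},{1,2,2},{6,0,0},{0,1,1},{4,3,1}};
        printf("state is %s\n", is_safe(avail, alloc, need) ? "safe" : "unsafe");
        return 0;
    }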


8. Deadlock Detection
Allow deadlocks to occur, determine whether a circular wait exists, and if so,
recover from the deadlock.

- For a single instance of each resource type: a deadlock exists in the system
  if and only if the wait-for graph contains a cycle.

- For several instances of a resource type.

- When should we invoke the deadlock-detection algorithm?

a. When a request for allocation cannot be granted immediately.

b. At less frequent intervals.

c. When CPU utilization drops.


9. Recovery from deadlock

- Process termination

a. Abort all deadlocked processes.

b. Abort one process at a time until the deadlock cycle is eliminated.

- Resource preemption
Preempt some resources from processes and give these resources to other
processes until the deadlock cycle is broken. Issues:

c. Selecting a victim.

d. Rollback.

e. Starvation.



Chapter 8 Memory Management


1. Address Binding

a. A compiler will typically bind symbolic addresses to relocatable addresses.

b. The loader will in turn bind relocatable addresses to absolute addresses.

c. Dynamic Loading
A routine is not loaded until it is called, so the program size may be larger
than the available memory.

d. Dynamic Linking
Linking is postponed until execution time (see the sketch below).

e. Overlay
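
A minimal sketch of dynamic linking driven from user code (assuming a
Linux-style system where the math library is installed as libm.so.6; link with
-ldl): the library is bound at execution time with dlopen and dlsym rather
than at link time:

    #include <dlfcn.h>
    #include <stdio.h>

    int main(void) {
        /* Bind to the math library at execution time rather than link time. */
        void *lib = dlopen("libm.so.6", RTLD_LAZY);
        if (!lib) {
            fprintf(stderr, "dlopen: %s\n", dlerror());
            return 1;
        }

        double (*cosine)(double) = (double (*)(double))dlsym(lib, "cos");
        if (cosine)
            printf("cos(0.0) = %f\n", cosine(0.0));

        dlclose(lib);
        return 0;
    }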


2. Swap Out
When insufficient main memory is available, one or more jobs may be removed
from main memory and swapped out to secondary storage.


3. Swap In
When a job has completed, a swapped-out job may be reloaded into memory.


4. Memory Management Strategies

a. Fetch strategies

- demand fetch

- anticipatory fetch

b. Placement strategies

c. Replacement strategies


5. Contiguous Storage Allocation
Each program must occupy a single contiguous block of storage locations.


6. Noncontiguous Storage Allocation
A program is divided into several blocks or segments that may be placed
throughout main memory in pieces that are not necessarily adjacent to one
another.


7. Single Partition Management
Disadvantages:

- Some memory is not used at all.

- The CPU is idle while the single process waits for I/O.

- The user program is limited to the size of main memory.



8. Fixed Partition (Absolute Load)
Main memory is divided into a number of fixed-size partitions. Jobs are
translated with an absolute translator to run only in a specific partition.


9. Variable Partition (Without Compaction)
Partitions are created during job processing so as to match partition sizes to
job sizes.

a. Coalescing holes
Merge adjacent holes to form one larger hole.

b. Fragmentation problem
Total free memory is fragmented into small pieces, none of which is contiguous
enough to satisfy a memory request.

c. External fragmentation
Enough total memory space exists to satisfy a request, but it is not
contiguous; storage is fragmented into a large number of small holes.

d. Internal fragmentation
Memory that is internal to a partition but is not being used.

e. How to solve the external fragmentation problem

- Relocatable partitions (variable partitions with compaction)

- Paging

f. Storage placement strategies
The set of holes is searched to determine which hole is best to allocate:

- First fit

- Best fit

- Worst fit

Simulations have shown that both first fit and best fit are better than worst
fit in both time and storage utilization; first fit is generally faster.

g. Disadvantages of variable partitions (without compaction)

- Fragmentation problem.

- A single free area may not be large enough for a partition.

- Memory may contain information that is never used.

10. Relocatable Partition (Variable Partition with Compaction)
Move all occupied areas of storage to one end of main storage, collecting the
free storage into a single large hole instead of leaving it fragmented.
Advantages:

- Eliminates fragmentation.

- Increases memory and processor utilization.

Disadvantages:

- Hardware cost.

- Compaction slows the system down.

- Memory may contain information that is never used.


11. Paging

- No external fragmentation.

- May have some internal fragmentation.

- Average internal fragmentation is one-half page per process.

- Pages typically are either 2 or 4 KB.
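
A small worked example (4 KB pages assumed): a logical address is split into a
page number and an offset within the page; on average only the last page of a
process is partly unused, hence the half-page internal fragmentation:

    #include <stdio.h>

    #define PAGE_SIZE 4096u                 /* 4 KB pages */

    int main(void) {
        unsigned logical = 20543;           /* example logical address */

        unsigned page   = logical / PAGE_SIZE;   /* page number */
        unsigned offset = logical % PAGE_SIZE;   /* page offset */

        /* 20543 = 5 * 4096 + 63, so page 5, offset 63. */
        printf("address %u -> page %u, offset %u\n", logical, page, offset);
        return 0;
    }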


12. Associative Mapping
A special, small, fast lookup hardware cache is used, variously called
associative registers or a translation look-aside buffer (TLB). Each register
consists of two parts: a key and a value. When the associative registers are
presented with an item, it is compared with all keys simultaneously.


13. Hit Ratio
The percentage of times that a page number is found in the associative
registers.
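
A small worked example with assumed timings (20 ns associative lookup, 100 ns
memory access, 80% hit ratio): the effective access time weights the hit and
miss paths by the hit ratio:

    #include <stdio.h>

    int main(void) {
        double tlb_ns = 20.0, mem_ns = 100.0;   /* assumed timings */
        double hit    = 0.80;                   /* assumed hit ratio */

        /* Hit: one TLB lookup + one memory access.
         * Miss: TLB lookup + page-table access + memory access. */
        double eat = hit * (tlb_ns + mem_ns)
                   + (1.0 - hit) * (tlb_ns + 2.0 * mem_ns);

        printf("effective access time = %.1f ns\n", eat);   /* 140.0 ns */
        return 0;
    }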


14. Multilevel Paging
A very large logical address space needs a large page table. If we do not want
to allocate the page table contiguously in main memory, we use multilevel
paging.


15. Inverted Page Table
There is only one page table in the system, and it has only one entry for each
page (frame) of physical memory.


16. Pure Code Program
Non-self-modifying code. If the code is reentrant, it never changes during
execution, so it can be shared.


17. Segment
The program is divided according to its logical functions into segments of
different sizes (variable length).



Chapter 9 Virtual Memory


1. Virtual Memory
Virtual memory is a technique that allows the execution of processes that may
not be completely in memory. A program can be larger than main memory.


2. Pager
Swaps a page into memory when that page will be needed.


3. Page Fault
A page reference for which the needed page is not in main memory.


4. Pure Demand Paging
Never bring a page into memory until it is required.


5. Chunk Fragmentation
The computer will rarely reference all of the instructions or data brought
into real storage.


6. Optimal Page Replacement
Replace the page that will not be used for the longest period of time.
Unfortunately, optimal page replacement is difficult to implement, because it
requires future knowledge of the reference string.


7. LRU (Least Recently Used) Page Replacement
Replace the page that has not been used for the longest period of time. This
strategy is the optimal algorithm looking backward in time rather than
forward.
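
A compact simulation sketch of LRU replacement (the reference string and frame
count are illustrative): each frame remembers when it was last used, and the
least recently used frame is the victim:

    #include <stdio.h>

    #define FRAMES 3

    int main(void) {
        int refs[] = {7, 0, 1, 2, 0, 3, 0, 4};      /* reference string */
        int n = sizeof refs / sizeof refs[0];
        int frame[FRAMES], last_use[FRAMES];
        int faults = 0, loaded = 0;

        for (int t = 0; t < n; t++) {
            int page = refs[t], hit = -1;
            for (int i = 0; i < loaded; i++)
                if (frame[i] == page) hit = i;

            if (hit >= 0) {
                last_use[hit] = t;                   /* refresh recency */
            } else {
                faults++;
                int victim = 0;
                if (loaded < FRAMES) {
                    victim = loaded++;               /* a free frame exists */
                } else {
                    for (int i = 1; i < FRAMES; i++) /* least recently used */
                        if (last_use[i] < last_use[victim]) victim = i;
                }
                frame[victim] = page;
                last_use[victim] = t;
            }
        }
        printf("page faults = %d\n", faults);        /* 6 for this string */
        return 0;
    }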


8. LFU (Least Frequently Used) Page Replacement
Keep a counter of the number of references that have been made to each page,
and replace the page with the smallest count.


9. MFU (Most Frequently Used) Page Replacement
Replace the page with the largest count, on the argument that the page with
the smallest count was probably just brought in and has yet to be used.


10. Page Buffering Algorithm
When a page fault occurs, a victim frame is chosen as before; however, the
desired page is read into a frame from the free-frame pool before the victim
is written out. When the victim is later written out, its frame is added back
to the free-frame pool.


11. Thrashing
If a process does not have enough memory for its working set, it will thrash:
it replaces a page that will be needed again right away, so it very quickly
faults again, and again, and again. This very high paging activity is called
thrashing.

- A process is thrashing if it is spending more time paging than executing.

- If the degree of multiprogramming is increased too far, thrashing sets in
  and CPU utilization drops sharply. At this point, to increase CPU
  utilization and stop the thrashing, we must decrease the degree of
  multiprogramming.

- To prevent thrashing, we must give each process as many frames as it needs.


12. Locality
Processes tend to reference storage in non-uniform, highly localized patterns.
* If we allocate fewer frames than the size of the current locality, the
process will thrash.


13. Temporal Locality
Storage locations referenced recently are likely to be referenced again in the
near future.


14. Spatial Locality
Storage references tend to be clustered, so that once a location is
referenced, it is highly likely that nearby locations will be referenced.


15. Working Set
The collection of pages a process is actively referencing.

- The working set is an approximation of the program's locality.

- If the working-set window is too small, it will not encompass the entire
  locality; if it is too large, it may overlap several localities.

- If the total demand for pages is greater than the total number of available
  frames, thrashing will occur.

- The working-set strategy prevents thrashing while keeping the degree of
  multiprogramming as high as possible; thus it optimizes CPU utilization.

- If the actual page-fault rate exceeds an upper limit, we allocate the
  process more frames; if it falls below a lower limit, we remove frames from
  the process.



Chapter 10 File System Interface


1. Open File
Copies a file's file control block into main memory. The OS keeps an open-file
table containing information (file attributes, file pointer, ...) about all
open files in main memory, for faster searching.


2. Close File
Removes the file's entry from the open-file table.


3. Block (Physical Record)
A physical record, or block, is the unit of information actually read from or
written to a device.


4. Logical Record
A logical record is a collection of data treated as a unit from the user's
standpoint.


5. Sequential Access
Information in the file is processed in order, one record after the other.


6. Direct Access
The file is made up of fixed-length logical records that allow programs to
read and write records rapidly in no particular order.


7. Volume (Partition, Minidisk)
Volume refers to the recording medium of a particular auxiliary storage
device.


8. Protection
Files must be protected from both physical damage (reliability) and improper
access (protection). Reliability is generally provided by keeping duplicate
copies of files (backups).


9. Access Lists
Specify, for each file, the user names and the types of access allowed for
each user.



Chapter 11 File System Implementation


1. I/O Control
Device drivers and interrupt handlers transfer information between memory and
the disk.


2. Basic File System
Issues generic commands to the appropriate device driver to read and write
physical blocks on the disk.


3. File Organization Module
Translates logical block addresses to physical block addresses for the basic
file system to transfer.


4. Logical File System
Uses the directory structure to provide the file-organization module with the
information it needs, given a symbolic file name.


5. Consistency Checker
When the computer crashes, the open-file table is lost, and with it any
changes in the directories of opened files. This event can leave the file
system in an inconsistent state, so a special program is run at reboot time to
check for and correct disk inconsistencies.


6. Backup
Copying data from the disk to another storage device.


7. Restore
Restoring the data from a backup.





Chapter 12 Secondary Storage Structure


1. Dedicated Device
A device allocated to a job for the job's entire duration.


2. Shared Device
A device that may be shared concurrently by several processes.


3. Virtual Device
Some devices that would normally have to be dedicated may be converted into
shared devices through techniques such as spooling.


4. Seek Time
The time for the boom (disk arm) to move the heads to the appropriate
cylinder.


5. Latency Time
The time for the desired data to rotate from its current position to a
position adjacent to the read/write head.


6. Transfer Time
The time for the actual transfer of data between the disk and main memory.


7. SSTF (Shortest Seek Time First)
Selects the request with the minimum seek time from the current head position.


8. SCAN (Elevator Algorithm)
The read/write head starts at one end of the disk and moves toward the other
end, servicing requests as it reaches each track, until it gets to the other
end of the disk; there the direction of head movement is reversed and
servicing continues.
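
A small worked example (illustrative request queue; the head is assumed to
start at cylinder 53 moving toward higher cylinders on a 0-199 cylinder disk):
under SCAN, the total head movement is the distance to the far end plus the
distance back to the lowest outstanding request:

    #include <stdio.h>
    #include <stdlib.h>

    static int cmp(const void *a, const void *b) {
        return *(const int *)a - *(const int *)b;
    }

    int main(void) {
        int req[]  = {98, 183, 37, 122, 14, 124, 65, 67};   /* cylinder queue */
        int n      = sizeof req / sizeof req[0];
        int head   = 53;          /* current head position     */
        int max_cy = 199;         /* last cylinder on the disk */

        qsort(req, n, sizeof req[0], cmp);

        /* SCAN moving toward higher cylinders: service everything above the
         * head on the way out, then sweep back for the lower requests. */
        int low = 0;
        while (low < n && req[low] < head) low++;

        int movement = (max_cy - head)                      /* out to the far end     */
                     + (low > 0 ? max_cy - req[0] : 0);     /* back to lowest request */
        printf("total head movement = %d cylinders\n", movement);   /* 331 */
        return 0;
    }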


9. C-SCAN (Circular SCAN)
A variant of SCAN: when the head reaches the other end, it immediately returns
to the beginning of the disk without servicing any requests on the return
trip.


10. LOOK
Like SCAN, except that the head moves only as far as the last request in each
direction.


11. Logical Formatting
Writes an initial blank directory onto the disk, and may install a FAT,
i-nodes, free-space lists, or other bookkeeping information.


12. Bad Blocks
Unreadable or unwritable blocks. The format command finds bad blocks when the
disk is logically formatted and writes a special value into the corresponding
FAT entries to tell the allocation routines not to use those blocks.


13. Disk Striping (Disk Interleaving)
A group of disks is treated as one storage unit, with each block broken into
several subblocks, each stored on a separate disk. The time required to
transfer one block into memory decreases drastically, since the disks transfer
their subblocks in parallel, fully utilizing their I/O bandwidth (transfer
capacity).


14. RAID (Redundant Array of Inexpensive Disks)

- Improves performance, especially the price/performance ratio.

- Mirroring or shadowing: keeping a duplicate copy of the data to improve
  reliability.

- Block-interleaved parity: provides an extra block of parity data.