operating system-ansx

jumentousklipitiklop, Software & software construction

30 Oct 2013

5 marks

1. Systems with virtual memory

Virtual memory is a method of decoupling the memory organization from the physical hardware. Applications operate on memory via virtual addresses. Each time an attempt to access stored data is made, the virtual memory system translates the virtual address to a physical address. In this way, virtual memory enables granular control over memory systems and methods of access.
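As a rough sketch of this translation (the page size, table contents, and function names below are illustrative, not taken from any particular system):

```python
PAGE_SIZE = 4096  # illustrative 4 KiB pages

# Hypothetical page table: virtual page number -> physical frame number
page_table = {0: 5, 1: 9, 2: 3}

def translate(virtual_address):
    """Split a virtual address into a page number and an offset, then
    look up the physical frame that holds the page."""
    vpn = virtual_address // PAGE_SIZE
    offset = virtual_address % PAGE_SIZE
    if vpn not in page_table:
        raise MemoryError(f"page fault: virtual page {vpn} is not mapped")
    return page_table[vpn] * PAGE_SIZE + offset

print(translate(4100))  # page 1, offset 4 -> frame 9 -> 9*4096 + 4 = 36868
```

Real MMUs perform this lookup in hardware, with the operating system filling in the page table.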

Protection

Main article: Memory protection

In virtual memory systems the operating system limits how a process can access the memory. This feature can be used to prevent a process from reading or writing to memory that is not allocated to it, preventing malicious or malfunctioning code in one program from interfering with the operation of another.
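One way to picture this protection check (a toy model; a real system enforces permission bits in the MMU on every access):

```python
# Hypothetical page table entry: virtual page number -> (frame, writable flag)
page_table = {0: (5, False), 1: (9, True)}

def access(vpn, write=False):
    """Refuse accesses the page table does not permit, as the MMU would."""
    if vpn not in page_table:
        raise MemoryError("fault: page not allocated to this process")
    frame, writable = page_table[vpn]
    if write and not writable:
        raise PermissionError("protection fault: page is read-only")
    return frame

access(0)              # reading a mapped page succeeds
access(1, write=True)  # writing a writable page succeeds
```

A write to page 0 here raises PermissionError, the moral equivalent of the OS delivering a protection fault to the offending process.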

Sharing

Main article: Shared memory

Even though the memory allocated for specific processes is normally isolated, processes sometimes need to be able to share information. Shared memory is one of the fastest techniques for inter-process communication.
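Python's standard library exposes this mechanism directly, which makes for a compact illustration (both handles below live in one script, but the second could just as well be opened by a different process using the segment's name):

```python
from multiprocessing import shared_memory

# One process creates a named shared segment and writes into it...
shm = shared_memory.SharedMemory(create=True, size=64)
shm.buf[:5] = b"hello"

# ...and another process can attach to the same segment by name and read it,
# with no copying through the kernel as a pipe or socket would require.
other = shared_memory.SharedMemory(name=shm.name)
message = bytes(other.buf[:5])
print(message)  # b'hello'

other.close()
shm.close()
shm.unlink()  # release the segment once every user is done with it
```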

Physical organization

Memory is usually classed by access rate into primary storage and secondary storage. Memory management systems handle moving information between these two levels of memory.

2. Forms of Auxiliary Memory



Flash memory: An electronic non-volatile computer storage device that can be electrically erased and reprogrammed, and works without any moving parts. A version of this is implemented in many Apple notebooks.



Optical disc: A storage medium from which data is read and to which it is written by lasers. Optical discs can store much more data (up to 6 gigabytes, or 6 billion bytes) than most portable magnetic media, such as floppies. There are three basic types of optical discs: CD-ROM (read-only), WORM (write-once read-many) and EO (erasable optical discs).



Magnetic disk: A magnetic disk is a circular plate constructed of metal or plastic coated with magnetized material. Both sides of the disk are used, and several disks may be stacked on one spindle with read/write heads available on each surface. Bits are stored on the magnetized surface in spots along concentric circles called tracks. Tracks are commonly divided into sections called sectors. Disks that are permanently attached and cannot be removed by the occasional user are called hard disks. A disk drive with removable disks is called a floppy disk drive.



Magnetic tapes: A magnetic tape transport consists of electrical, mechanical and electronic components that provide the parts and control mechanism for a magnetic tape unit. The tape itself is a strip of plastic coated with a magnetic recording medium. Bits are recorded as magnetic spots on the tape along several tracks. Seven or nine bits are recorded together with a parity bit to form a character. R/W heads are mounted on each track so that data can be recorded and read as a sequence of characters.

3. Time to access data

The factors that limit the time to access the data on an HDD are mostly related to the mechanical nature of the rotating disks and moving heads. Seek time is a measure of how long it takes the head assembly to travel to the track of the disk that contains data. Rotational latency is incurred because the desired disk sector may not be directly under the head when data transfer is requested. These two delays are on the order of milliseconds each. The bit rate or data transfer rate (once the head is in the right position) creates delay which is a function of the number of blocks transferred; it is typically relatively small, but can be quite long with the transfer of large contiguous files. Delay may also occur if the drive disks are stopped to save energy.

An HDD's average access time is its average seek time, which technically is the time to do all possible seeks divided by the number of all possible seeks, but in practice it is determined by statistical methods or simply approximated as the time of a seek over one-third of the number of tracks.[81]

Defragmentation is a procedure used to minimize delay in retrieving data by moving related items to physically proximate areas on the disk.[82] Some computer operating systems perform defragmentation automatically. Although automatic defragmentation is intended to reduce access delays, performance will be temporarily reduced while the procedure is in progress.[83]

Time to access data can be improved by increasing rotational speed (thus reducing latency) and/or by reducing the time spent seeking. Increasing areal density increases throughput by increasing the data rate and by increasing the amount of data under a set of heads, thereby potentially reducing seek activity for a given amount of data. Based on historic trends, analysts predict a future growth in HDD areal density (and therefore capacity) of about 40% per year.[84] The time to access data has not kept up with throughput increases, which themselves have not kept up with growth in storage capacity.
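The millisecond-scale delays above follow from simple arithmetic. As a sketch (the 7200 RPM spindle speed and 9 ms seek figure are just example numbers):

```python
def avg_rotational_latency_ms(rpm):
    """On average the desired sector is half a rotation away from the head."""
    return (60_000 / rpm) / 2  # 60,000 ms per minute

def avg_access_time_ms(avg_seek_ms, rpm):
    """Average access time is roughly average seek plus rotational latency."""
    return avg_seek_ms + avg_rotational_latency_ms(rpm)

print(round(avg_rotational_latency_ms(7200), 2))  # 4.17 ms at 7200 RPM
print(round(avg_access_time_ms(9.0, 7200), 2))    # 13.17 ms with a 9 ms seek
```

Doubling the spindle speed halves the rotational term, which is why 15,000 RPM enterprise drives access data noticeably faster.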


20 marks

4. Dynamic memory allocation


Details

The task of fulfilling an allocation request consists of locating a block of unused memory of sufficient size. Memory requests are satisfied by allocating portions from a large pool of memory called the heap. At any given time, some parts of the heap are in use, while some are "free" (unused) and thus available for future allocations. Several issues complicate implementation, such as internal and external fragmentation, which arises when many small gaps appear between allocated memory blocks, invalidating their use for an allocation request. The allocator's metadata can also inflate the size of (individually) small allocations. This is often managed by chunking. The memory management system must track outstanding allocations to ensure that they do not overlap and that no memory is ever "lost" as a memory leak.
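The bookkeeping described above can be sketched with a toy first-fit allocator (offsets stand in for real pointers; all names are illustrative):

```python
class FirstFitHeap:
    """Toy heap: tracks free (offset, size) ranges plus outstanding allocations."""

    def __init__(self, size):
        self.free = [(0, size)]  # sorted list of free ranges
        self.allocated = {}      # offset -> size: the allocator's metadata

    def malloc(self, size):
        for i, (off, free_size) in enumerate(self.free):
            if free_size >= size:
                # First fit: carve the request off the front of the block.
                # Any leftover tail stays behind as a smaller free range,
                # which is how external fragmentation accumulates.
                if free_size == size:
                    self.free.pop(i)
                else:
                    self.free[i] = (off + size, free_size - size)
                self.allocated[off] = size
                return off
        raise MemoryError("out of memory")

    def free_block(self, off):
        size = self.allocated.pop(off)  # forgetting this entry would be a leak
        self.free.append((off, size))
        self.free.sort()

heap = FirstFitHeap(1024)
a = heap.malloc(100)  # offset 0
b = heap.malloc(200)  # offset 100
heap.free_block(a)    # leaves a 100-byte gap that only small requests can reuse
```

A production allocator would also coalesce adjacent free ranges; this sketch leaves that out for brevity.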

Efficiency

The specific dynamic memory allocation algorithm implemented can impact performance significantly. A study conducted in 1994 by Digital Equipment Corporation illustrates the overheads involved for a variety of allocators. The lowest average instruction path length required to allocate a single memory slot was 52 (as measured with an instruction-level profiler on a variety of software).[1]

Implementations

Since the precise location of the allocation is not known in advance, the memory is accessed indirectly, usually through a pointer reference. The specific algorithm used to organize the memory area and allocate and deallocate chunks is interlinked with the kernel, and may use any of the following methods.

Fixed-size-blocks allocation

Main article: Memory pool

Fixed-size-blocks allocation, also called memory pool allocation, uses a free list of fixed-size blocks of memory (often all of the same size). This works well for simple embedded systems where no large objects need to be allocated, but suffers from fragmentation, especially with long memory addresses. However, due to the significantly reduced overhead, this method can substantially improve performance for objects that need frequent allocation/de-allocation, and is often used in video games.
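A minimal sketch of such a pool (illustrative names; a real implementation would hand out pointers rather than offsets):

```python
class MemoryPool:
    """Fixed-size-block allocator: a free list of equally sized blocks."""

    def __init__(self, block_size, num_blocks):
        self.block_size = block_size
        # Every block offset is precomputed, so allocation is a constant-time pop.
        self.free_list = [i * block_size for i in range(num_blocks)]

    def alloc(self):
        if not self.free_list:
            raise MemoryError("pool exhausted")
        return self.free_list.pop()

    def release(self, offset):
        self.free_list.append(offset)  # freeing is constant-time as well

pool = MemoryPool(block_size=64, num_blocks=4)
blocks = [pool.alloc() for _ in range(4)]  # grabs all four blocks
pool.release(blocks[0])
reused = pool.alloc()  # the freed block is handed straight back
```

There is no searching and no per-block size bookkeeping, which is exactly the reduced overhead the text credits for this method's use in games and embedded systems.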

Buddy blocks

For more details on this topic, see Buddy memory allocation.

In this system, memory is allocated into several pools of memory instead of just one, where each pool represents blocks of memory of a certain power of two in size. All blocks of a particular size are kept in a sorted linked list or tree, and all new blocks that are formed during allocation are added to their respective memory pools for later use. If a smaller size is requested than is available, the smallest available size is selected and halved. One of the resulting halves is selected, and the process repeats until the request is complete. When a block is allocated, the allocator will start with the smallest sufficiently large block to avoid needlessly breaking blocks. When a block is freed, it is compared to its buddy. If they are both free, they are combined and placed in the next-largest-size buddy-block list.
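The split-on-allocate and merge-on-free logic can be sketched compactly, because a block's buddy address differs from its own in exactly one bit (the region size here is illustrative):

```python
class BuddyAllocator:
    """Toy buddy allocator over a 2**max_order byte region."""

    def __init__(self, max_order):
        self.max_order = max_order
        # One free set per power-of-two order; start with one maximal block.
        self.free = {k: set() for k in range(max_order + 1)}
        self.free[max_order].add(0)

    def alloc(self, order):
        # Find the smallest sufficiently large free block...
        k = order
        while k <= self.max_order and not self.free[k]:
            k += 1
        if k > self.max_order:
            raise MemoryError("no block large enough")
        addr = self.free[k].pop()
        # ...then halve it until it matches the requested size,
        # returning one half to the free lists at each step.
        while k > order:
            k -= 1
            self.free[k].add(addr + (1 << k))
        return addr

    def free_block(self, addr, order):
        # Merge with the buddy (address differs only in bit `order`) while it is free.
        while order < self.max_order:
            buddy = addr ^ (1 << order)
            if buddy not in self.free[order]:
                break
            self.free[order].remove(buddy)
            addr = min(addr, buddy)
            order += 1
        self.free[order].add(addr)

buddy = BuddyAllocator(max_order=4)  # a 16-byte region
a = buddy.alloc(2)                   # two 4-byte blocks force two splits
b = buddy.alloc(2)
buddy.free_block(a, 2)
buddy.free_block(b, 2)               # the halves recombine into the 16-byte block
```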

5. Logical block addressing

Logical block addressing (LBA) is a common scheme used for specifying the location of blocks of data stored on computer storage devices, generally secondary storage systems such as hard disks.

LBA is a particularly simple linear addressing scheme; blocks are located by an integer index, with the first block being LBA 0, the second LBA 1, and so on.

The IDE standard included 22-bit LBA as an option, which was further extended to 28-bit with the release of ATA-1 (1994) and to 48-bit with the release of ATA-6 (2003). Most hard drives released after 1996 implement logical block addressing.

CHS conversion

CHS (cylinder/head/sector) tuples can be mapped to an LBA address with the following formula:

LBA = (C × HPC + H) × SPT + (S − 1)

where

C, H and S are the cylinder number, the head number, and the sector number,

LBA is the logical block address,

HPC is the maximum number of heads per cylinder (reported by the disk drive, typically 16 for 28-bit LBA), and

SPT is the maximum number of sectors per track (reported by the disk drive, typically 63 for 28-bit LBA).

LBA addresses can be mapped to CHS tuples with the following formulas:

C = LBA ÷ (HPC × SPT)
H = (LBA ÷ SPT) mod HPC
S = (LBA mod SPT) + 1

where

mod is the modulo operation, i.e. the remainder, and

÷ is integer division, i.e. the quotient of the division.
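Both formulas translate directly into code. A sketch using the typical 28-bit LBA geometry from above (HPC = 16, SPT = 63):

```python
def chs_to_lba(c, h, s, hpc=16, spt=63):
    """LBA = (C * HPC + H) * SPT + (S - 1); sectors are numbered from 1."""
    return (c * hpc + h) * spt + (s - 1)

def lba_to_chs(lba, hpc=16, spt=63):
    """Invert the mapping with integer division and remainders."""
    c = lba // (hpc * spt)
    h = (lba // spt) % hpc
    s = (lba % spt) + 1
    return c, h, s

print(chs_to_lba(0, 0, 1))               # the first sector of the disk is LBA 0
print(lba_to_chs(chs_to_lba(2, 5, 10)))  # round-trips to (2, 5, 10)
```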

According to the ATA specifications, "If the content of words (61:60) is greater than or equal to 16,514,064 then the content of word 1 [the number of logical cylinders] shall be equal to 16,383."[1] Therefore, for LBA 16450559, an ATA drive may actually respond with the CHS tuple (16319, 15, 63), and the number of cylinders in this scheme must be much larger than the 1024 allowed by INT 13H.[3]

OS dependencies

Operating systems that are sensitive to BIOS-reported drive geometry include Solaris, DOS and the Windows NT family, where NTLDR (NT, 2000, XP, Server 2003) or WINLOAD (Vista, Server 2008, Windows 7 and Server 2008 R2) use the Master Boot Record, which addresses the disk using CHS; x86-64 and Itanium versions of Windows can partition the drive with a GUID Partition Table, which uses LBA addressing.

Some operating systems do not require any translation because they do not use the geometry reported by the BIOS in their boot loaders. Among these operating systems are BSD, Linux, Mac OS X, OS/2 and ReactOS.

6. Types of file systems

File system types can be classified into disk/tape file systems, network file systems and special-purpose file systems.

Disk file systems

A disk file system takes advantage of the ability of disk storage media to randomly address data in a short amount of time. Additional considerations include the speed of accessing data following that initially requested, and the anticipation that the following data may also be requested. This permits multiple users (or processes) access to various data on the disk without regard to the sequential location of the data. Examples include FAT (FAT12, FAT16, FAT32), exFAT, NTFS, HFS and HFS+, HPFS, UFS, ext2, ext3, ext4, XFS, btrfs, ISO 9660, Files-11, Veritas File System, VMFS, ZFS, ReiserFS and UDF. Some disk file systems are journaling file systems or versioning file systems.

Optical discs

ISO 9660 and Universal Disk Format (UDF) are two common formats that target Compact Discs, DVDs and Blu-ray discs. Mount Rainier is an extension to UDF supported by the Linux 2.6 series and Windows Vista that facilitates rewriting to DVDs.

Flash file systems

Main article: Flash file system

A flash file system considers the special abilities, performance and restrictions of flash memory devices. Frequently, a disk file system can use a flash memory device as the underlying storage media, but it is much better to use a file system specifically designed for a flash device.

Tape file systems

A tape file system is a file system and tape format designed to store files on tape in a self-describing form. Magnetic tapes are sequential storage media with significantly longer random data access times than disks, posing challenges to the creation and efficient management of a general-purpose file system.

In a disk file system there is typically a master file directory, and a map of used and free data regions. Any file additions, changes, or removals require updating the directory and the used/free maps. Random access to data regions is measured in milliseconds, so this system works well for disks.

Tape requires linear motion to wind and unwind potentially very long reels of media. This tape motion may take several seconds to several minutes to move the read/write head from one end of the tape to the other.

Consequently, a master file directory and usage map can be extremely slow and inefficient with tape. Writing typically involves reading the block usage map to find free blocks for writing, updating the usage map and directory to add the data, and then advancing the tape to write the data in the correct spot. Each additional file write requires updating the map and directory and writing the data, which may take several seconds to occur for each file.

Tape file systems instead typically allow for the file directory to be spread across the tape intermixed with the data, referred to as streaming, so that time-consuming and repeated tape motions are not required to write new data.

However, a side effect of this design is that reading the file directory of a tape usually requires scanning the entire tape to read all the scattered directory entries. Most data archiving software that works with tape storage will store a local copy of the tape catalog on a disk file system, so that adding files to a tape can be done quickly without having to rescan the tape media. The local tape catalog copy is usually discarded if not used for a specified period of time, at which point the tape must be re-scanned if it is to be used in the future.

IBM has developed a file system for tape called the Linear Tape File System. The IBM implementation of this file system has been released as the open-source IBM Linear Tape File System, Single Drive Edition (LTFS SDE) product. The Linear Tape File System uses a separate partition on the tape to record the index metadata, thereby avoiding the problems associated with scattering directory entries across the entire tape.

Tape formatting

Writing data to a tape is often a significantly time-consuming process that may take several hours. Similarly, completely erasing or formatting a tape can also take several hours. With many data tape technologies it is not necessary to format the tape before over-writing new data to the tape. This is due to the inherently destructive nature of overwriting data on sequential media.

Because of the time it can take to format a tape, typically tapes are pre-formatted so that the tape user does not need to spend time preparing each new tape for use. All that is usually necessary is to write an identifying media label to the tape before use, and even this can be automatically written by software when a new tape is used for the first time.

Database file systems

Another concept for file management is the idea of a database-based file system. Instead of, or in addition to, hierarchical structured management, files are identified by their characteristics, like type of file, topic, author, or similar rich metadata.[2]

IBM DB2 for i[3] (formerly known as DB2/400 and DB2 for i5/OS) is a database file system that is part of the object-based IBM i[4] operating system (formerly known as OS/400 and i5/OS), incorporating a single-level store and running on IBM Power Systems (formerly known as AS/400 and iSeries), designed by Frank G. Soltis, IBM's former chief scientist for IBM i. Around 1978 to 1988, Frank G. Soltis and his team at IBM Rochester successfully designed and applied technologies like the database file system where others like Microsoft later failed to accomplish.[5] These technologies are informally known as 'Fortress Rochester' and were in a few basic aspects extended from early mainframe technologies, but in many ways more advanced from a technology perspective.

Some other projects that aren't "pure" database file systems but that use some aspects of a database file system:

A lot of Web CMSs use a relational DBMS to store and retrieve files. For example, XHTML files are stored as XML or text fields, and image files are stored as blob fields; SQL SELECT statements (with optional XPath) retrieve the files, and allow the use of sophisticated logic and richer information associations than "usual file systems".

Very large file systems, embodied by applications like Apache Hadoop and Google File System, use some database file system concepts.
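The Web-CMS pattern above is easy to demonstrate with an in-memory SQLite database (the table and column names here are made up for the example):

```python
import sqlite3

# An in-memory database standing in for a CMS's file store.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE files (name TEXT PRIMARY KEY, topic TEXT, body BLOB)")
db.execute("INSERT INTO files VALUES (?, ?, ?)",
           ("index.xhtml", "homepage", b"<html><body>hi</body></html>"))

# Files are retrieved by metadata (here, topic) rather than by directory path.
row = db.execute("SELECT body FROM files WHERE topic = ?",
                 ("homepage",)).fetchone()
print(row[0])
```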

Transactional file systems

Some programs need to update multiple files "all at once". For example, a software installation may write program binaries, libraries, and configuration files. If the software installation fails, the program may be unusable. If the installation is upgrading a key system utility, such as the command shell, the entire system may be left in an unusable state.

Transaction processing introduces the isolation guarantee, which states that operations within a transaction are hidden from other threads on the system until the transaction commits, and that interfering operations on the system will be properly serialized with the transaction. Transactions also provide the atomicity guarantee: operations inside a transaction are either all committed, or the transaction can be aborted and the system discards all of its partial results. This means that if there is a crash or power failure, after recovery the stored state will be consistent. Either the software will be completely installed or the failed installation will be completely rolled back, but an unusable partial install will not be left on the system.
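Outside a transactional file system, programs approximate atomicity for a single file with the write-then-rename idiom, since a POSIX-style rename either fully happens or does not happen at all. A sketch (the file name and contents are illustrative):

```python
import os
import tempfile

def atomic_write(path, data):
    """Write to a temporary file, then rename it over the target, so readers
    see either the old contents or the new, never a half-written file."""
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())  # push the data to disk before committing
        os.replace(tmp, path)     # the atomic commit point
    except BaseException:
        os.unlink(tmp)
        raise

atomic_write("config.ini", b"[core]\nversion = 2\n")
```

This covers one file only; coordinating several files this way is exactly the multi-file consistency problem the research systems below try to solve.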

Windows, beginning with Vista, added transaction support to NTFS, in a feature called Transactional NTFS, but its use is now discouraged.[6] There are a number of research prototypes of transactional file systems for UNIX systems, including the Valor file system,[7] Amino,[8] LFS,[9] and a transactional ext3 file system on the TxOS kernel,[10] as well as transactional file systems targeting embedded systems, such as TFFS.[11]

Ensuring consistency across multiple file system operations is difficult, if not impossible, without file system transactions. File locking can be used as a concurrency control mechanism for individual files, but it typically does not protect the directory structure or file metadata. For instance, file locking cannot prevent TOCTTOU race conditions on symbolic links. File locking also cannot automatically roll back a failed operation, such as a software upgrade; this requires atomicity.

Journaling file systems are one technique used to introduce transaction-level consistency to file system structures. Journal transactions are not exposed to programs as part of the OS API; they are only used internally to ensure consistency at the granularity of a single system call.

Data backup systems typically do not provide support for direct backup of data stored in a transactional manner, which makes recovery of reliable and consistent data sets difficult. Most backup software simply notes what files have changed since a certain time, regardless of the transactional state shared across multiple files in the overall dataset. As a workaround, some database systems simply produce an archived state file containing all data up to that point, and the backup software only backs that up and does not interact directly with the active transactional databases at all. Recovery requires separate recreation of the database from the state file, after the file has been restored by the backup software.

Network file systems

Main article: Distributed file system

A network file system is a file system that acts as a client for a remote file access protocol, providing access to files on a server. Examples of network file systems include clients for the NFS, AFS and SMB protocols, and file-system-like clients for FTP and WebDAV.

Shared disk file systems

Main article: Shared disk file system

A shared disk file system is one in which a number of machines (usually servers) all have access to the same external disk subsystem (usually a SAN). The file system arbitrates access to that subsystem, preventing write collisions. Examples include GFS2 from Red Hat, GPFS from IBM, SFS from DataPlow, CXFS from SGI and StorNext from Quantum Corporation.

Special file systems

A special file system presents non-file elements of an operating system as files so they can be acted on using file system APIs. This is most commonly done in Unix-like operating systems, but devices are given file names in some non-Unix-like operating systems as well.

Device file systems

A device file system represents I/O devices and pseudo-devices as files, called device files. Examples in Unix-like systems include devfs and, in Linux 2.6 systems, udev. In non-Unix-like systems, such as TOPS-10 and other operating systems influenced by it, where the full filename or pathname of a file can include a device prefix, devices other than those containing file systems are referred to by a device prefix specifying the device, without anything following it.

Other special file systems



In the Linux kernel, configfs and sysfs provide files that can be used to query the kernel for information and configure entities in the kernel.

procfs maps processes and, on Linux, other operating system structures into a filespace.
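Because procfs presents kernel state as plain text files, querying it needs nothing beyond ordinary string handling. A sketch that parses text in the format of Linux's /proc/meminfo (sample data is inlined so the example does not depend on running on Linux):

```python
# Three lines in the format of Linux's /proc/meminfo (sample data, not live).
sample = """MemTotal:       16316412 kB
MemFree:         8213304 kB
Buffers:          402188 kB"""

def parse_meminfo(text):
    """Turn 'Key:  value kB' lines into a dict of integers (values in kB)."""
    info = {}
    for line in text.splitlines():
        key, value = line.split(":", 1)
        info[key] = int(value.strip().split()[0])
    return info

mem = parse_meminfo(sample)
print(mem["MemFree"])  # 8213304
```

On a real Linux system the same function could be fed the contents of open("/proc/meminfo").read().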

7. Partitioning schemes

DOS, Windows, and OS/2

With DOS, Microsoft Windows, and OS/2, a common practice is to use one primary partition for the active file system that will contain the operating system, the page/swap file, all utilities, applications, and user data. On most Windows consumer computers, the drive letter C: is routinely assigned to this primary partition. Other partitions may exist on the HDD that may or may not be visible as drives, such as recovery partitions or partitions with diagnostic tools or data. (Microsoft drive letters do not correspond to partitions in a one-to-one fashion, so there may be more or fewer drive letters than partitions.)

Microsoft Windows 2000, XP, Vista, and Windows 7 include a 'Disk Management' program which allows for the creation, deletion and resizing of FAT and NTFS partitions. The Windows Disk Manager in Windows Vista and Windows 7 utilizes a new 1 MB partition alignment scheme which is fundamentally incompatible with Windows 2000, XP, OS/2 and DOS, as well as many other operating systems.

Unix-like systems

On Unix-based and Unix-like operating systems such as GNU/Linux, OS X, BSD, and Solaris, it is possible to use multiple partitions on a disk device. Each partition can be formatted with a file system or as a swap partition.

Multiple partitions allow directories such as /tmp, /usr, /var, or /home to be allocated their own filesystems. Such a scheme has a number of advantages:

If one file system gets corrupted, the data outside that filesystem/partition may stay intact, minimizing data loss.

Specific file systems can be mounted with different parameters, e.g. read-only, or with the execution of setuid files disabled.

A runaway program that uses up all available space on a non-system filesystem does not fill up critical filesystems.

A common default for GNU/Linux desktop systems is to use two partitions: one holding a file system mounted on "/" (the root directory) and a swap partition.[citation needed]

By default, OS X systems also use a single partition for the entire filesystem and use a swap file inside the file system (like Windows) rather than a swap partition.

In Solaris, partitions are sometimes known as slices. This is a conceptual reference to the slicing of a cake into several pieces.

The term "slice" is used in the FreeBSD operating system to refer to Master Boot Record partitions, to avoid confusion with FreeBSD's own disklabel-based partitioning scheme. However, GUID Partition Table partitions are referred to as "partitions" world-wide.

Multi-boot and mixed-boot systems

Multi-boot systems are computers where the user can boot into one of two or more distinct operating systems (OS) stored in separate storage devices or in separate partitions of the same storage device. In such systems a menu at startup gives a choice of which OS to boot/start (and only one OS at a time is loaded).

This is distinct from virtual operating systems, in which one operating system is run as a self-contained virtual "program" within another already-running operating system. (An example is a Windows OS "virtual machine" running from within a Linux OS.)