Er. Mukesh Azad
Information theory studies the processes of encoding and decoding messages, and treats the 'meaning' of a message (in the human sense) as irrelevant. Proposed together by the US mathematicians Claude Shannon (1916-2001) and Warren Weaver (1894-1978) in 1949, it focuses on how to transmit data most efficiently and economically, and how to detect errors in its transmission and reception.
Information, in its most restricted technical sense, is an ordered sequence of symbols. As a concept, however, information has many meanings. Moreover, the concept of information is closely related to notions of communication, data, knowledge and meaning.
Complex definitions of both "information" and "knowledge" make such semantic and logical analysis difficult, but the condition of "transformation" is an important point in the study of information as it relates to knowledge, especially in the business discipline of knowledge management. In this practice, tools and processes are used to assist a knowledge worker in performing research and making decisions, including steps such as:
reviewing information in order to effectively derive value and meaning
referencing metadata if any is available
establishing a relevant context, often selecting from many possible contexts
deriving new knowledge from the information
making decisions or recommendations from the resulting knowledge.
Need for Information
An information need is an individual or group's desire to locate and obtain information to satisfy a conscious or unconscious need. The 'information' and 'need' in 'information need' are an inseparable interconnection. Needs and interests call forth information. The objectives of studying information needs are:
The explanation of observed phenomena of information use or expressed need;
The prediction of instances of information uses;
The control and thereby improvement of the utilization of information through manipulation of essential conditions.
Information needs are related to, but distinct from, information requirements. An example is that a need is hunger and the requirement is food.
The concept of information needs was coined by the American information scientist Robert S. Taylor in his article "The Process of Asking Questions", published in American Documentation (now the Journal of the American Society for Information Science and Technology).
In this paper, Taylor attempted to describe how an inquirer obtains an answer from an information system, performing the process consciously or unconsciously; he also studied the reciprocal influence between the inquirer and a given system.
According to Taylor, information need has four levels:
The conscious and unconscious need for information not existing in the remembered experience of the investigator. In terms of the query range, this level might be called the 'ideal question': the question which would bring from the ideal system exactly what the inquirer needs, if he could state his need. It is the actual, but unexpressed, need for information (the visceral need).
The conscious mental description of an ill-defined area of indecision (the conscious need). In this level, the inquirer might talk to someone else in the field to get an answer.
A researcher forms a rational statement of his question (the formalized need). This statement is a rational and unambiguous description of the inquirer's doubts.
The question as presented to the information system (the compromised need).
Quality of Information
Information quality (IQ) is a term to describe the quality of the content of information systems. It is often pragmatically defined as: "The fitness for use of the information provided."
"Information quality" is a measure of the value which the information provides to the user of that information. "Quality" is often perceived as subjective, and the quality of information can then vary among users and among uses of the information. Nevertheless, a high degree of quality increases its objectivity, or at least the intersubjectivity. Accuracy can be seen as just one element of IQ but, depending upon how it is defined, can also be seen as encompassing many other dimensions of quality.
If not, it is perceived that often there is a trade-off between accuracy and other dimensions, aspects or elements of the information determining its suitability for any given tasks. A list of dimensions or elements used in assessing subjective information quality is: Accuracy, Objectivity, Believability, Reputation, Relevancy, Timeliness, Completeness, Amount of information, Interpretability, Ease of understanding, Concise representation, Consistent representation, Accessibility, Access security.
In a nutshell, the proposed quality metrics for any information are:
Authority and verifiability
Authority refers to the expertise or recognized official status of a source. Consider the reputation of the author and publisher. When working with legal or government information, consider whether the source is the official provider of the information. Verifiability refers to the ability of a reader to verify the validity of the information irrespective of how authoritative the source is. To verify the facts is part of the duty of care of journalistic deontology, as well as, where possible, to provide the sources of information so that they can be verified.
Scope of coverage
Scope of coverage refers to the extent to which a source explores a topic. Consider time periods,
geography or jurisdiction and coverage of related or narrower topics.
Composition and Organization
Composition and Organization has to do with the ability of the information source to present its particular message in a coherent, logically sequential manner.
Objectivity
Objectivity is the bias or opinion expressed when a writer interprets or analyzes facts. Consider the use of persuasive language, the source's presentation of other viewpoints, its reason for providing the information, and any advertising.
Integrity
1. adherence to moral and ethical principles; soundness of moral character 2. the state of being whole, entire, or undiminished
Comprehensiveness
1. of large scope; covering or involving much; inclusive: a comprehensive study. 2. comprehending mentally; having an extensive mental grasp. 3. Insurance: covering or providing broad protection against loss.
Validity
Validity of some information has to do with the degree of obvious truthfulness which the information carries.
Uniqueness
As much as the 'uniqueness' of a given piece of information is intuitive in meaning, it also significantly implies not only the originating point of the information but also the manner in which it is presented and thus the perception which it conjures. The essence of any piece of information we process consists to a large extent of those two elements.
Timeliness
Timeliness refers to information that is current at the time of publication. Consider publication, creation and revision dates. Beware of Web site scripting that automatically reflects the current day's date on a page.
Reproducibility (utilized primarily when referring to instructive information)
Information Systems
An information system (IS) is any combination of information technology and people's activities using that technology to support operations, management, and decision making. In a very broad sense, the term information system is frequently used to refer to the interaction between people, processes, data and technology. In this sense, the term is used to refer not only to the information and communication technology (ICT) an organization uses, but also to the way in which people interact with this technology in support of business processes.
Some make a clear distinction between information systems, ICT and business processes. Information systems are distinct from information technology in that an information system is typically seen as having an ICT component. Information systems are also different from business processes: information systems help to control the performance of business processes.
Alter argues for an information system as a special type of work system. A work system is a system in which humans and/or machines perform work using resources (including ICT) to produce specific products and/or services for customers. An information system is a work system whose activities are devoted to processing (capturing, transmitting, storing, retrieving, manipulating and displaying) information.
Part of the difficulty in defining the term information system is due to vagueness in the definition of related terms such as system and information. Davies argues for a clearer terminology. He defines an information system as an example of a system concerned with the manipulation of signs. An information system is a type of socio-technical system. An information system is a mediating construct between actions and technology.
As such, information systems inter-relate with data systems on the one hand and activity systems on the other. An information system is a form of communication system in which data represent and are processed as a form of social memory. An information system can also be considered a semi-formal language which supports human decision making, action and communication.
Value of Information
Value of information (VOI or VoI) is the amount a decision maker would be willing to pay for information prior to making a decision.
There are two extremely important characteristics of VoI that always hold for any decision situation:
Value of information can never be less than zero, since the decision maker can always ignore the additional information and make the decision as if such information were not available.
No other information gathering/sharing activities can be more valuable than that quantified by the value of clairvoyance (VoC).
VoC is derived strictly following its definition as the monetary amount that is big enough to just offset the additional benefit of getting more information. In other words, VoC is found by solving:
"value of decision situation with perfect information while paying VoC" = "value of current decision situation"
A special case is when the decision maker is risk neutral, where VoC can be simply computed as:
VoC = "value of decision situation with perfect information" - "value of current decision situation"
This special case is how expected value of perfect information (EVPI) and expected value of sample information (EVSI) are calculated, where risk neutrality is implicitly assumed. For cases where the decision maker is not risk neutral, this simple calculation does not necessarily yield the correct result, and iterative calculation is the only way to ensure correctness.
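The risk-neutral special case above can be sketched in Python. The numbers here (a hypothetical product-launch decision with two market states) are purely illustrative and not part of the original text:

```python
# Hypothetical decision: launch a product or skip, with uncertain demand.
p_good = 0.4                      # probability that demand is good
payoff = {                        # payoff[action][state]
    "launch": {"good": 100.0, "bad": -40.0},
    "skip":   {"good":   0.0, "bad":   0.0},
}

def expected(action):
    # Expected payoff of one action under current (imperfect) information.
    return p_good * payoff[action]["good"] + (1 - p_good) * payoff[action]["bad"]

# Value of the current decision situation: choose the best action now.
value_now = max(expected(a) for a in payoff)

# Value with perfect information: choose the best action for each state.
value_perfect = (p_good       * max(payoff[a]["good"] for a in payoff) +
                 (1 - p_good) * max(payoff[a]["bad"]  for a in payoff))

# Risk-neutral VoC, i.e. the expected value of perfect information (EVPI).
evpi = value_perfect - value_now
print(evpi)  # 24.0
```

With these numbers, launching is worth 16 today, while a clairvoyant who launches only in the good state expects 40, so no information-gathering activity could be worth more than 24.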
Decision trees and influence diagrams are most commonly used in representing and solving a decision situation as well as the associated VoC calculation. The influence diagram, in particular, is structured to accommodate team decision situations, where incomplete sharing of information among team members can be represented and solved very efficiently. While the decision tree is not designed to accommodate team decision situations, it can do so by augmenting it with the notion of information sets, widely used in game trees.
Categories and Levels of Information in Business Organizations
When developing an information management strategy within an organisation, it is useful to consider information needs on three levels:
corporate
team, division, business unit, etc.
individual
The needs of each of these three levels must be met if a coordinated and effective solution is to be maintained in the long term.
Failure to address any one of the levels will lead to areas of the business or individuals finding their own solution, which may not fit well within the strategic goals of the organisation.
These are not new ideas, but they will be explored in the context of intranets and other corporate information systems.
Corporate
At the top is the corporate information that is useful for the whole organisation. This 'global' information is generally fairly well addressed by the corporate intranet (even if the intranet itself needs improvement).
Examples of corporate information include policies and procedures, HR information, online forms, phone directory, etc.
Interestingly, there may be a limited amount of truly global information, and it may not deliver the greatest (measurable) business benefits.
Team, division, business unit
The middle level is perhaps the most interesting, as it covers all the information shared within teams, divisions, business units, etc. This information may be critical to the day-to-day operations of the group, but of little interest to the rest of the organisation.
Examples include project documentation, business unit specific content, meeting minutes, etc.
This level is generally poorly served within organisations, although collaboration tools are increasingly being used to address team information needs. It is also being recognised that it is this 'local' information that may be the most valuable, in terms of driving the day-to-day activity of the organisation.
Individual
At the lowest level are the personal information needs of staff throughout the organisation. Examples include correspondence (both internal and external), reports and spreadsheets.
In most organisations, staff must struggle with using e-mail to meet their information management needs. While staff generally recognise the inadequacy of e-mail, they have few other approaches or technologies at their disposal.
Note that some organisations (such as consulting firms) are heavily dependent on personal information management amongst their staff.
Managing the levels
When managing the information within each of the three levels, consider the following:
An information management solution must be provided for staff at each of the three levels.
If corporate solutions aren't provided, then staff will find their own solutions. This is the source of poor-quality intranet sub-sites, and other undesirable approaches.
A clear policy must be developed, outlining when each of the three levels applies, and how information should be managed within each level.
Processes must be put in place to 'bubble up' or 'promote' information from lower levels up to higher levels. For example, some team-generated information will be critical for the whole organisation.
As much as possible, a seamless information management environment should be
delivered that covers all three levels.
Data concepts and Data Processing
Data processing is any process that uses a computer program to enter data and summarise, analyse or otherwise convert data into usable information. The process may be automated and run on a computer. It involves recording, analysing, sorting, summarising, calculating, disseminating and storing data. Because data is most useful when well-presented and actually informative, data-processing systems are often referred to as information systems. Nevertheless, the terms are roughly synonymous, performing similar conversions: data-processing systems typically manipulate raw data into information, and likewise information systems typically take raw data as input to produce information as output.
Data processing may or may not be distinguished from data conversion, when the process is merely to convert data to another format, and does not involve any data manipulation.
Elements of data processing
In order to be processed by a computer, data needs first to be converted into a machine-readable format. Once data is in digital format, various procedures can be applied to the data to get useful information. Data processing may involve various processes, including validation, sorting, summarisation, aggregation, analysis, reporting and classification.
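The recording, sorting and summarising steps described above can be sketched in Python. The sales records here are hypothetical, used only to show raw data being turned into summarised information:

```python
# Raw recorded data: (item, amount) pairs, as they might be captured.
records = [("pen", 30), ("book", 120), ("pen", 45), ("book", 80)]

# Sorting: order the raw records by item name.
records.sort(key=lambda r: r[0])

# Calculating and summarising: a total amount per item.
totals = {}
for item, amount in records:
    totals[item] = totals.get(item, 0) + amount

print(totals)  # {'book': 200, 'pen': 75}
```

The output is information in the sense used above: the raw records have been converted into a summary that is directly useful to a reader.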
Data Processing System
A data processing system works on data that has been captured and converted into a form recognizable by the data processing system, or that has been created and stored by another unit of an information processing system.
A software program is an example of a software data processing system. The software data processing system makes use of a (general purpose) computer in order to complete its functions. A software data processing system is normally a standalone unit of software, in that its output can be directed to any number of other (not necessarily as yet identified) information processing (sub)systems.
Most modern computer systems do not represent numeric values using the decimal system.
Instead, they typically use a binary or two's complement numbering system. To understand the limitations of computer arithmetic, you must understand how computers represent numbers.
A Review of the Decimal System
You've been using the decimal (base 10) numbering system for so long that you probably take it for granted. When you see a number like "123" you don't think about the value 123; rather, you generate a mental image of how many items this value represents. In reality, however, the number 123 represents ("**" represents exponentiation):
1*10**2 + 2*10**1 + 3*10**0
Each digit appearing to the left of the decimal point represents a value between zero and nine times an increasing power of ten. Digits appearing to the right of the decimal point represent a value between zero and nine times an increasing negative power of ten. For example, the value 123.456 represents:
1*10**2 + 2*10**1 + 3*10**0 + 4*10**-1 + 5*10**-2 + 6*10**-3
or
100 + 20 + 3 + 0.4 + 0.05 + 0.006
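This digit-by-digit expansion can be evaluated directly in Python, confirming the sum above (Python itself is just an illustrative tool here, not part of the original text):

```python
# Each (digit, power) pair from the expansion of 123.456.
digits = [(1, 2), (2, 1), (3, 0), (4, -1), (5, -2), (6, -3)]

# Sum digit * 10**power over all positions.
value = sum(d * 10 ** p for d, p in digits)

# Floating-point arithmetic may leave tiny rounding noise, so round for display.
print(round(value, 3))  # 123.456
```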
The Binary Numbering System
Most modern computer systems (including the IBM PC) operate using binary logic. The computer represents values using two voltage levels (usually 0v and +5v). With two such levels we can represent exactly two different values. These could be any two different values, but by convention we use the values zero and one. These two values, coincidentally, correspond to the two digits used by the binary numbering system. Since there is a correspondence between the logic levels used by the 80x86 and the two digits used in the binary numbering system, it should come as no surprise that the IBM PC employs the binary numbering system.
The binary numbering system works just like the decimal numbering system, with two exceptions: binary only allows the digits 0 and 1 (rather than 0-9), and binary uses powers of two rather than powers of ten. Therefore, it is very easy to convert a binary number to decimal: for each "1" in the binary string, add in 2**n where "n" is the zero-based position of the binary digit. For example, the binary value 11001010 represents:
1*2**7 + 1*2**6 + 0*2**5 + 0*2**4 + 1*2**3 + 0*2**2 + 1*2**1 + 0*2**0
= 128 + 64 + 8 + 2
= 202 (base 10)
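The binary-to-decimal rule just stated can be written out in Python. This is a sketch of the rule itself, not anything the original text provides:

```python
def binary_to_decimal(bits: str) -> int:
    # For each "1" at zero-based position n (counting from the right),
    # add 2**n, exactly as described above.
    total = 0
    for n, bit in enumerate(reversed(bits)):
        if bit == "1":
            total += 2 ** n
    return total

print(binary_to_decimal("11001010"))  # 202
```

For instance, "11001010" contributes 2**7, 2**6, 2**3 and 2**1, giving 128 + 64 + 8 + 2 = 202.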
To convert decimal to binary is slightly more difficult. You must find those powers of two which, when added together, produce the decimal result. The easiest method is to work from the largest power of two down to 2**0. Consider the decimal value 1359:
2**10=1024, 2**11=2048. So 1024 is the largest power of two less than 1359. Subtract 1024 from 1359 and begin the binary value on the left with a "1" digit. Binary = "1". Decimal result is 1359 - 1024 = 335.
The next lower power of two (2**9 = 512) is greater than the result from above, so add a "0" to the end of the binary string. Binary = "10". Decimal result is still 335.
The next lower power of two is 256 (2**8). Subtract this from 335 and add a "1" digit to the end of the binary number. Binary = "101". Decimal result is 79.
128 (2**7) is greater than 79, so tack a "0" onto the end of the binary string. Binary = "1010". Decimal result remains 79.
The next lower power of two (2**6 = 64) is less than 79, so subtract 64 and append a "1" to the end of the binary string. Binary = "10101". Decimal result is 15.
15 is less than the next power of two (2**5 = 32), so simply add a "0" to the end of the binary string. Binary = "101010". Decimal result is still 15.
16 (2**4) is greater than the remainder so far, so append a "0" to the end of the binary string. Binary = "1010100". Decimal result is 15.
2**3 (eight) is less than 15, so stick another "1" digit on the end of the binary string. Binary = "10101001". Decimal result is 7.
2**2 is less than seven, so subtract four from seven and append another one to the binary string. Binary = "101010011". Decimal result is 3.
2**1 is less than three, so append a one to the end of the binary string and subtract two from the decimal value. Binary = "1010100111". Decimal result is now 1.
Finally, the decimal result is one, which is 2**0, so add a final "1" to the end of the binary string. The final binary result is "10101001111".
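The step-by-step subtraction method just traced can be sketched as a short Python function; it follows the text's procedure literally rather than using any library shortcut:

```python
def decimal_to_binary(value: int) -> str:
    # Follow the method described above: find the largest power of two
    # not exceeding the value, then walk downward to 2**0, emitting a "1"
    # (and subtracting) when the power fits, or a "0" when it does not.
    if value == 0:
        return "0"
    n = 0
    while 2 ** (n + 1) <= value:
        n += 1
    bits = ""
    for power in range(n, -1, -1):
        if 2 ** power <= value:
            bits += "1"
            value -= 2 ** power
        else:
            bits += "0"
    return bits

print(decimal_to_binary(1359))  # 10101001111
```

Running it on 1359 reproduces exactly the sequence of partial strings worked through above.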
Binary numbers, although they have little importance in high level languages, appear everywhere in assembly language programs.
In the purest sense, every binary number contains an infinite number of digits (or bits, which is short for binary digits). For example, we can represent the number five by:
00000101 0000000000101 ... 000000000000101
Any number of leading zero bits may precede the binary number without changing its value. We will adopt the convention of ignoring any leading zeros. For example, 101 (binary) represents the number five. Since the 80x86 works with groups of eight bits, we'll find it much easier to zero extend all binary numbers to some multiple of four or eight bits. Therefore, following this convention, we'd represent the number five as 0101 (binary) or 00000101 (binary).
In the United States most people separate every three digits with a comma to make larger numbers easier to read. For example, 1,023,435,208 is much easier to read and comprehend than 1023435208. We'll adopt a similar convention in this text for binary numbers: we will separate each group of four binary bits with a space. For example, the binary value 1010111110110010 will be written 1010 1111 1011 0010.
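Both conventions, zero-extending to a multiple of four bits and spacing the groups, can be captured in one small helper (an illustrative sketch, not part of the original text):

```python
def group_bits(bits: str, size: int = 4) -> str:
    # Zero-extend on the left so the length is a multiple of `size`,
    # then insert a space between each group of `size` bits.
    pad = (-len(bits)) % size
    bits = "0" * pad + bits
    return " ".join(bits[i:i + size] for i in range(0, len(bits), size))

print(group_bits("1010111110110010"))  # 1010 1111 1011 0010
print(group_bits("101"))               # 0101
```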
We often pack several values together into the same binary number. One form of the 80x86 MOV instruction (see appendix D) uses the binary encoding 1011 0rrr dddd dddd to pack three items into 16 bits: a five-bit operation code (10110), a three-bit register field (rrr) and an eight-bit immediate value (dddd dddd). For convenience, we'll assign a numeric value to each bit position. We'll number each bit as follows:
The rightmost bit in a binary number is bit position zero.
Each bit to the left is given the next successive bit number.
An eight-bit binary value uses bits zero through seven:
X7 X6 X5 X4 X3 X2 X1 X0
A sixteen-bit binary value uses bit positions zero through fifteen:
X15 X14 X13 X12 X11 X10 X9 X8 X7 X6 X5 X4 X3 X2 X1 X0
Bit zero is usually referred to as the low order (L.O.) bit. The leftmost bit is typically called the high order (H.O.) bit. We'll refer to the intermediate bits by their respective bit numbers.
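Using this bit numbering, the three fields packed into the 1011 0rrr dddd dddd layout quoted above can be pulled apart with shifts and masks. The sample word below is a made-up value chosen only to exercise the field layout:

```python
def unpack_mov(word: int):
    # Field layout from the text: bits 15..11 = opcode (10110),
    # bits 10..8 = rrr, bits 7..0 = immediate value.
    opcode    = (word >> 11) & 0b11111   # five-bit operation code
    register  = (word >> 8)  & 0b111     # three-bit register field
    immediate =  word        & 0xFF      # eight-bit immediate value
    return opcode, register, immediate

word = 0b1011_0011_0100_0010   # rrr = 011, immediate = 0100 0010
print(unpack_mov(word))        # (22, 3, 66)
```

Shifting right by a field's starting bit position and masking off its width is the standard way to recover a packed field from any such encoding.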
Definition of an Electronic Digital Computer
A computer is a programmable machine that receives input, stores and manipulates data, and provides output in a useful format.
Although mechanical examples of computers have existed through much of recorded human history, the first electronic computers were developed in the mid-20th century (1940-1945). These were the size of a large room, consuming as much power as several hundred modern personal computers (PCs). Modern computers based on integrated circuits are millions to billions of times more capable than the early machines, and occupy a fraction of the space. Simple computers are small enough to fit into small pocket devices, and can be powered by a small battery. Personal computers in their various forms are icons of the information age and are what most people think of as "computers". However, the embedded computers found in many devices, from MP3 players to fighter aircraft and from toys to industrial robots, are the most numerous.
The ability to store and execute lists of instructions called programs makes computers extremely versatile, distinguishing them from calculators. The Church-Turing thesis is a mathematical statement of this versatility: any computer with a certain minimum capability is, in principle, capable of performing the same tasks that any other computer can perform. Therefore, computers ranging from a netbook to a supercomputer are all able to perform the same computational tasks, given enough time and storage capacity.
The first use of the word "computer" was recorded in 1613, referring to a person who carried out calculations, or computations, and the word continued to be used in that sense until the middle of the 20th century. From the end of the 19th century onwards, though, the word began to take on its more familiar meaning, describing a machine that carries out computations.
The history of the modern computer begins with two separate technologies, automated calculation and programmability, but no single device can be identified as the earliest computer, partly because of the inconsistent application of that term. Examples of early mechanical calculating devices include the abacus, the slide rule and arguably the astrolabe and the Antikythera mechanism (which dates from about 150-100 BC). Hero of Alexandria (c. 10-70 AD) built a mechanical theater which performed a play lasting 10 minutes and was operated by a complex system of ropes and drums that might be considered to be a means of deciding which parts of the mechanism performed which actions and when. This is the essence of programmability.
The "castle clock", an astronomical clock invented by Al-Jazari in 1206, is considered to be the earliest programmable analog computer. It displayed the zodiac, the solar and lunar orbits, a crescent moon-shaped pointer travelling across a gateway causing automatic doors to open every hour, and five robotic musicians who played music when struck by levers operated by a camshaft attached to a water wheel. The length of day and night could be re-programmed to compensate for the changing lengths of day and night throughout the year.
The end of the Middle Ages saw a re-invigoration of European mathematics and engineering. Wilhelm Schickard's 1623 device was the first of a number of mechanical calculators constructed by European engineers, but none fit the modern definition of a computer, because they could not be programmed.
In 1801, Joseph Marie Jacquard made an improvement to the textile loom by introducing a series of punched paper cards as a template which allowed his loom to weave intricate patterns automatically. The resulting Jacquard loom was an important step in the development of computers because the use of punched cards to define woven patterns can be viewed as an early, albeit limited, form of programmability.
It was the fusion of automatic calculation with programmability that produced the first
recognizable computers. In 1837, Charles Babbage was the first to conceptualize and design a fully programmable mechanical computer, his Analytical Engine. Limited finances and Babbage's inability to resist tinkering with the design meant that the device was never completed.
In the late 1880s, Herman Hollerith invented the recording of data on a machine-readable medium. Prior uses of machine-readable media, above, had been for control, not data. "After some initial trials with paper tape, he settled on punched cards." To process these punched cards he invented the tabulator and the keypunch machines. These three inventions were the foundation of the modern information processing industry. Large-scale automated data processing of punched cards was performed for the 1890 United States Census by Hollerith's company, which later became the core of IBM. By the end of the 19th century a number of technologies that would later prove useful in the realization of practical computers had begun to appear: the punched card, Boolean algebra, the vacuum tube (thermionic valve) and the teleprinter.
During the first half of the 20th century, many scientific computing needs were met by increasingly sophisticated analog computers, which used a direct mechanical or electrical model of the problem as a basis for computation. However, these were not programmable and generally lacked the versatility and accuracy of modern digital computers.
Alan Turing is widely regarded to be the father of modern computer science. In 1936 Turing provided an influential formalisation of the concept of the algorithm and computation with the Turing machine. Of his role in the modern computer, Time magazine, in naming Turing one of the 100 most influential people of the 20th century, states: "The fact remains that everyone who taps at a keyboard, opening a spreadsheet or a word-processing program, is working on an incarnation of a Turing machine".
The inventor of the program-controlled computer was Konrad Zuse, who built the first working computer in 1941 and later, in 1955, the first computer based on magnetic storage.
George Stibitz is internationally recognized as a father of the modern digital computer. While working at Bell Labs in November 1937, Stibitz invented and built a relay-based calculator he dubbed the "Model K" (for "kitchen table", on which he had assembled it), which was the first to use binary circuits to perform an arithmetic operation. Later models added greater sophistication, including complex arithmetic and programmability.
Generations of Computers
The history of computer development is often described in terms of the different generations of computing devices. Each generation of computer is characterized by a major technological development that fundamentally changed the way computers operate, resulting in increasingly smaller, cheaper, more powerful and more efficient and reliable devices.
First Generation (1940-1956) Vacuum Tubes
The first computers used vacuum tubes for circuitry and magnetic drums for memory, and were often enormous, taking up entire rooms. They were very expensive to operate, and in addition to using a great deal of electricity, they generated a lot of heat, which was often the cause of malfunctions.
First generation computers relied on machine language, the lowest-level programming language understood by computers, to perform operations, and they could only solve one problem at a time. Input was based on punched cards and paper tape, and output was displayed on printouts. The UNIVAC and ENIAC computers are examples of first-generation computing devices. The UNIVAC was the first commercial computer delivered to a business client, the U.S. Census Bureau in 1951.
Second Generation (1956-1963) Transistors
Transistors replaced vacuum tubes and ushered in the second generation of computers. The transistor was invented in 1947 but did not see widespread use in computers until the late 1950s. The transistor was far superior to the vacuum tube, allowing computers to become smaller, faster, cheaper, more energy-efficient and more reliable than their first-generation predecessors.
Though the transistor still generated a great deal of heat that subjected the computer to damage, it was a vast improvement over the vacuum tube. Second-generation computers still relied on punched cards for input and printouts for output.
Second-generation computers moved from cryptic binary machine language to symbolic, or assembly, languages, which allowed programmers to specify instructions in words. High-level programming languages were also being developed at this time, such as early versions of COBOL and FORTRAN. These were also the first computers that stored their instructions in their memory, which moved from a magnetic drum to magnetic core technology.
The first computers of this generation were developed for the atomic energy industry.
Third Generation (1964-1971) Integrated Circuits
The development of the integrated circuit was the hallmark of the third generation of computers. Transistors were miniaturized and placed on silicon chips, called semiconductors, which drastically increased the speed and efficiency of computers.
Instead of punched cards and printouts, users interacted with third generation computers through keyboards and monitors, and interfaced with an operating system, which allowed the device to run many different applications at one time with a central program that monitored the memory. Computers for the first time became accessible to a mass audience because they were smaller and cheaper than their predecessors.
Fourth Generation (1971-Present) Microprocessors
The microprocessor brought the fourth generation of computers, as thousands of integrated circuits were built onto a single silicon chip. What in the first generation filled an entire room could now fit in the palm of the hand. The Intel 4004 chip, developed in 1971, located all the components of the computer, from the central processing unit and memory to input/output controls, on a single chip.
In 1981 IBM introduced its first computer for the home user, and in 1984 Apple introduced the Macintosh. Microprocessors also moved out of the realm of desktop computers and into many areas of life as more and more everyday products began to use microprocessors.
As these small computers became more powerful, they could be linked together to form networks, which eventually led to the development of the Internet. Fourth generation computers also saw the development of GUIs, the mouse and handheld devices.
Fifth Generation (Present and Beyond) Artificial Intelligence
Fifth generation computing devices, based on artificial intelligence, are still in development, though there are some applications, such as voice recognition, that are being used today. The use of parallel processing and superconductors is helping to make artificial intelligence a reality. Quantum computation and molecular and nanotechnology will radically change the face of computers in years to come. The goal of fifth-generation computing is to develop devices that respond to natural language input and are capable of learning and self-organization.
Characteristics of computer
Nowadays the computer plays a main role in everyday life; it has become a need of people, just like the television, telephone or other electronic devices at home. It solves human problems very quickly as well as accurately. The important characteristics of a computer are described below.
1. Speed
The computer is a very high-speed electronic device. The operations on the data inside the computer are performed through electronic circuits according to the given instructions. The data and instructions flow along these circuits at a speed that is close to the speed of light. A computer can perform millions or billions of operations on the data in one second. The computer generates signals during the operation process, therefore the speed of a computer is usually measured in megahertz (MHz) or gigahertz (GHz); the unit of frequency is the hertz, and one megahertz is a million cycles per second. Different computers have different speeds.
2. Arithmetical and Logical Operations
A computer can perform arithmetical and logical operations. In arithmetic operations, it performs addition, subtraction, multiplication and division on numeric data. In logical operations, it compares numerical data as well as alphabetical data.
3. Accuracy
In addition to being very fast, the computer is also a very accurate device. It gives accurate output provided that the correct input data and set of instructions are given to it. This means that the output totally depends on the given instructions and input data. If the input data is incorrect then the resulting output will be incorrect. In computer terminology this is known as GIGO (garbage in, garbage out).
4. Reliability
The electronic components in a modern computer have a very low failure rate. A modern computer can perform very complicated calculations without creating any problem and produces consistent (reliable) results. In general, computers are very reliable. Many personal computers have never needed a service call. Communications are also very reliable and generally available.
5. Storage
A computer has internal storage (memory) as well as external or secondary storage. In secondary storage, a large amount of data and programs (sets of instructions) can be stored for future use. The stored data and programs are available at any time for processing. Similarly, information downloaded from the internet can be saved on the storage media.
6. Retrieving data and programs
The data and programs stored on the storage media can be retrieved very quickly for further processing. This is also a very important feature of a computer.
7. Automation
A computer can perform operations automatically, without interrupting the user during the operations. It automatically controls the different devices attached to it and executes the program instructions one by one.
8. Versatility
Versatile means flexible. A modern computer can perform different kinds of tasks one by one or simultaneously. This is one of the most important features of a computer. At one moment you are playing a game on the computer; the next moment you are composing and sending emails. In colleges and universities, computers are used to deliver lectures to students. The talent of a computer depends on its software.
9. Communication
Today the computer is mostly used to exchange messages or data through computer networks all over the world. For example, information can be sent or received through the internet with the help of a computer. This is one of the most important features of modern information technology.
10. Diligence
A computer can work continually for hours without making any error. It does not get tired: after hours of work it performs the operations with the same accuracy and speed as the first one.
11. No Feelings
The computer is an electronic machine. It has no feelings. It detects objects on the basis of the instructions given to it. Based on our feelings, taste, knowledge and experience, we can make certain decisions and judgments in our daily life. On the other hand, computers cannot make such judgments on their own. Their judgments are totally based on the instructions given to them.
12. Consistency
People often find it difficult to repeat their instructions again and again. For example, a lecturer finds it difficult to deliver the same lecture in a classroom again and again. A computer can repeat actions consistently (again and again) without losing its concentration:
To run a spell checker (built into a word processor) for checking spellings in a document.
To play multimedia animations for training purposes.
To deliver a lecture through a computer in a classroom, etc.
A computer will carry out the activity in the same way every time. You can listen to a lecture or perform any action again and again.
13. Precision
Computers are not only fast and consistent, but they also perform operations very accurately and precisely. In manual calculations, for example, rounding fractional values (values with a decimal point) can change the actual result. In a computer, however, you can keep the accuracy and precision up to the level you desire. Lengthy calculations remain accurate.
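The point about choosing your own level of precision can be demonstrated with Python's `decimal` module, which lets you set the precision explicitly; the fraction 0.1 is used here only as an example value.

```python
from decimal import Decimal, getcontext

# Adding a fraction repeatedly: binary floats accumulate rounding error,
# while Decimal keeps the precision we ask for.
total_float = sum([0.1] * 10)                # not exactly 1.0
getcontext().prec = 28                       # the precision level we desire
total_decimal = sum([Decimal("0.1")] * 10)   # exactly 1.0

print(total_float)
print(total_decimal)
```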
APPLICATIONS OF COMPUTERS
The dawn of the new age, the Computer Era, glows before us with the promise of new and improved ways of thinking, living and working. The amount of information in the world is said to be doubling every six to seven years. The only way to keep up with these increased amounts of data and information is to understand how computers work and to be able to control them for a particular purpose.
A computer can be defined as an electronic data processing device, capable of accepting data, applying a prescribed set of instructions to the data, and displaying the result in some manner or form. Any configuration of devices that are interconnected and programmed to operate as a computer forms a computer system. The computer is said to have literally revolutionised the way one person does his job or a whole multinational organisation operates its business. For this reason and more, computers are considered more than just an essential piece of fancy equipment.
Whether or not people know anything about it, they invoke computers in their everyday lives when they make a bank withdrawal, buy groceries at the supermarket and even when they drive a car. Today, millions of people are purchasing fully functional personal computers for individual use.
Elements of a Computer Processing System: Hardware
CPU
The Central Processing Unit (CPU), or the processor, is the portion of a computer system that carries out the instructions of a computer program, and is the primary element carrying out the computer's functions. This term has been in use in the computer industry at least since the early 1960s. The form, design and implementation of CPUs have changed dramatically since the earliest examples, but their fundamental operation remains much the same.
Early CPUs were custom-designed as part of a larger, sometimes one-of-a-kind, computer. However, this costly method of designing custom CPUs for a particular application has largely given way to the development of mass-produced processors that are made for one or many purposes. This standardization trend generally began in the era of discrete transistor mainframes and minicomputers, and has rapidly accelerated with the popularization of the integrated circuit (IC). The IC has allowed increasingly complex CPUs to be designed and manufactured to very small tolerances. Both the miniaturization and standardization of CPUs have increased the presence of these digital devices in modern life far beyond the limited application of dedicated computing machines. Modern microprocessors appear in everything from automobiles to cell phones and children's toys.
Basic elements: In a generally used computer, one can find four basic elements. The following are the elements:
(1) ALU (Arithmetic Logic Unit): The ALU is the digital circuit that is able to perform different types of functions, such as addition, subtraction and multiplication.
(2) Control Unit: The control unit is the part of the CPU (central processing unit) or other devices that directs their operations. The control unit works like a finite state machine that has some finite states, and the transition from one state to another is called an action.
(3) Memory: Memory is another very important element of the computer; without memory the computer cannot operate. In today's modern age it is called RAM (Random Access Memory). Whenever we give some instruction to the computer, it passes through RAM to the processor; the processor processes it and sends it back. The main reason for this memory is that it is faster in access than other memory devices such as the hard disk.
(4) Input/output devices: The input and output devices are another important element. For example, the keyboard is an input device that sends data to the processor, which sends its results to the monitor for output.
There is another very important point: the basic elements are often considered to be the CPU (Central Processing Unit), I/O devices, and memory; actually the CPU is itself a composition of other elements such as the ALU, the control unit, and registers, which are another type of memory.
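The interplay of these elements can be sketched as a toy model: memory holds the data, the control unit steps through the instructions, and the ALU does the actual computation. All instruction names and values here are invented for illustration; a real CPU works on binary registers, not Python dictionaries.

```python
memory = {"x": 6, "y": 7, "result": 0}   # RAM: holds data

def alu(op, a, b):
    """ALU: performs arithmetic and logic on two operands."""
    if op == "ADD":
        return a + b
    if op == "MUL":
        return a * b
    if op == "CMP":
        return a > b
    raise ValueError(op)

program = [("MUL", "x", "y", "result")]  # instructions held in memory

# Control unit: fetches each instruction in turn and directs the ALU
for op, src1, src2, dest in program:
    memory[dest] = alu(op, memory[src1], memory[src2])

print(memory["result"])   # 42, handed to an output device
```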
A peripheral is a device attached to a host computer but not part of it, and is more or less dependent on the host. It expands the host's capabilities, but does not form part of the core computer architecture. Whether something is a peripheral or part of a computer is not always clearly defined: a video capture card inside a computer case is not part of the core computer but is contained in the case. However, whether something can be considered a peripheral or not is a trivial matter of nomenclature, and is not a significant issue. Examples include graphical output devices and data storage devices.
A data storage device is a device for recording (storing) information (data).
Recording can be done using virtually any form of energy, spanning from manual muscle power in handwriting, to acoustic vibrations in phonographic recording, to electromagnetic energy modulating magnetic tape and optical discs.
A storage device may hold information, process information, or both. A device that only holds information is a recording medium. Devices that process information (data storage equipment) may either access a separate portable (removable) recording medium or a permanent component to store and retrieve information.
Electronic data storage is storage which requires electrical power to store and retrieve data. Most storage devices that do not require vision and a brain to read data fall into this category. Electromagnetic data may be stored in either an analog or digital format on a variety of media. This type of data is considered to be electronically encoded data, whether or not it is electronically stored, for it is certain that a semiconductor device was used to record it on its medium. Most electronically processed data storage media (including some forms of computer data storage) are considered permanent (non-volatile) storage; that is, the data will remain stored when power is removed from the device. In contrast, most electronically stored information within most types of semiconductor chips is volatile memory, for it vanishes if power is removed.
With the exception of barcodes and OCR data, electronic data storage is easier to revise and may be more cost-effective than alternative methods due to smaller physical space requirements and the ease of replacing (rewriting) data on the same medium. However, the durability of methods such as printed data is still superior to that of most electronic storage media. The durability limitations may be overcome with the ease of duplicating (backing up) electronic data.
Data storage equipment
Equipment may be considered data storage equipment if it writes to and reads from a data storage medium. Data storage equipment uses either removable media, fixed media requiring mechanical disassembly tools and/or opening a chassis, or volatile media, meaning loss of memory if disconnected from the unit.
Software is a general term for the various kinds of programs used to operate computers and related devices. (The term hardware describes the physical aspects of computers and related devices.) Software can be thought of as the variable part of a computer, and hardware the invariable part.
Software is often divided into application software (programs that do work users are directly interested in) and system software (which includes operating systems and any program that supports application software). The term middleware is sometimes used to describe programming that mediates between application and system software, or between two different kinds of application software (for example, sending a remote work request from an application in a computer that has one kind of operating system to an application in a computer with a different operating system).
An additional and difficult-to-classify category of software is the utility, which is a small useful program with limited capability. Some utilities come with operating systems. Like applications, utilities tend to be separately installable and capable of being used independently from the rest of the operating system.
Applets are small applications that sometimes come with the operating system as "accessories". They can also be created independently using Java or other programming languages.
Software can be purchased or acquired as shareware (usually intended for sale after a trial period), liteware (shareware with some capabilities disabled), freeware (free software but with copyright restrictions), public domain software (free with no restrictions), and open source (software where the source code is furnished and users agree not to limit the distribution of improvements).
Software is often packaged on CD-ROMs or diskettes. Today, much purchased software, shareware, and freeware is downloaded over the Internet. A new trend is software that is made available for use at another site, known as an application service provider (ASP).
Some general kinds of application software include:
Productivity software, which includes word processors, spreadsheets, and tools for use by most computer users
Graphics software for graphic designers
Specialized scientific applications
Industry-specific software (for example, for banking, insurance, or retail)
Firmware is programming that is loaded into a special area on a hardware device on a one-time or infrequent basis, so that thereafter it seems to be part of the hardware.
Software Role and Categories
Software can be organized into categories based on common function, type, or field of use. There are three broad classifications:
(1) Application software is the general designation of computer programs for performing user tasks. Application software may be general purpose (word processing, web browsers, ...) or have a specific purpose (accounting, truck scheduling, ...). Application software contrasts with (2) system software, a generic term referring to the computer programs used to start and run computer systems and networks; and (3) software development tools, such as compilers and linkers, used to translate and combine computer program source code and libraries into executable programs (programs that will belong to one of the three said categories).
Examples of software categories include:
Free application software
Application software suites
Computer-aided manufacturing software
Free system software
Computer security software
Data compression software
File comparison tools
Graphical user interfaces
Identity management systems
Firmware is a combination of software and hardware. Computer chips that have data or programs recorded on them are firmware. These chips commonly include the following:
PROMs (programmable read-only memory)
EPROMs (erasable programmable read-only memory)
Firmware in PROM or EPROM is designed to be updated if necessary through a software update.
Firmware updates for many Apple products are available for download; you can search for a particular product's update by entering the product's name and the word "firmware" in the search box. Some Apple products that have had firmware updates include optical drives (including CD and DVD-ROM drives and the SuperDrive).
iPod also has firmware. However, updates for iPod, whether they include firmware, software, or both, are included in the iPod Software Updater.
Not every product has a firmware update. Sometimes an update is installed at the factory. Don't worry if you don't see an update for the product you are looking for. If you are upgrading from Mac OS 9 to Mac OS X 10.2 or Mac OS X 10.3 on a computer that otherwise meets the system requirements, make sure that the computer's firmware has been updated.
Sometimes you may be asked what version of firmware (also called BootROM) you have on your computer; there are support articles explaining how to determine which firmware version your computer has.
Open Firmware is the generic name of firmware complying with IEEE 1275. Among Open Firmware's many features, it provides a computer-independent device interface for expansion cards. This enables expansion card manufacturers to easily support several different computer designs without supplying different firmware for each.
Generally you won't interact with Open Firmware. An example of when you might need to is if you see this message on the screen: "To continue booting, type 'mac-boot' and press return."
As of 2010, the concept of "firmware" has evolved to mean almost any programmable content of a hardware device: not only machine code for a processor, but also configurations and data for programmable logic devices.
Need for Data Transmission Over Distances
Data transmission is the physical transfer of data (a digital bit stream) over a point-to-point or point-to-multipoint communication channel. Examples of such channels are copper wires, optical fibers, wireless communication channels and storage media. The data is often represented as an electromagnetic signal, such as an electrical voltage.
While analog communication is the transfer of a continuously varying information signal, digital communication is the transfer of discrete messages. The messages are either represented by a sequence of pulses by means of a line code (baseband transmission), or by a limited set of continuously varying wave forms (passband transmission), using a digital modulation method. According to the most common definition of digital signal, both baseband and passband signals representing bit streams are considered digital transmission, while an alternative definition considers only the baseband signal as digital, and passband transmission as a form of digital-to-analog conversion.
Data transmitted may be digital messages originating from a data source, for example a computer or a keyboard. It may also be an analog signal such as a phone call or a video signal, digitized into a bit stream, for example using pulse-code modulation (PCM) or more advanced source coding (data compression) schemes. This source coding and decoding is carried out by codec equipment.
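A heavily simplified PCM sketch makes the sampling-and-quantization step concrete. The sample count and bit depth below are arbitrary illustration values, nothing like a real codec's parameters.

```python
import math

# Simplified PCM: sample one cycle of a sine wave 8 times, quantize each
# sample to 4 bits (16 levels), and emit the resulting bit stream.
SAMPLES, BITS = 8, 4
LEVELS = 2 ** BITS

bitstream = ""
for n in range(SAMPLES):
    sample = math.sin(2 * math.pi * n / SAMPLES)    # analog value in [-1, 1]
    level = round((sample + 1) / 2 * (LEVELS - 1))  # quantize to 0..15
    bitstream += format(level, "04b")               # 4 bits per sample

print(bitstream)        # 8 samples x 4 bits = 32 bits on the wire
print(len(bitstream))
```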
Serial and parallel transmission
Serial transmission is the sequential transmission of the signal elements of a group representing a character or other entity of data. Digital serial transmissions are bits sent over a single wire, frequency or optical path, sequentially. Because it requires less signal processing and has fewer chances for error than parallel transmission, the transfer rate of each individual path may be faster. It can be used over longer distances, and a check digit or parity bit can be sent along easily.
Parallel transmission is the simultaneous transmission of the signal elements of a character or other entity of data. In digital communications, parallel transmission is the simultaneous transmission of related signal elements over two or more separate paths. Multiple electrical wires are used which can transmit multiple bits simultaneously, which allows for higher data transfer rates than can be achieved with serial transmission. This method is used internally within the computer, for example on the internal buses, and sometimes externally for such things as printers. The major issue with this is "skewing": because the wires in parallel data transmission have slightly different properties (not intentionally), some bits may arrive before others, which may corrupt the message. A parity bit can help to reduce this. However, electrical-wire parallel data transmission is therefore unreliable for long distances, because corrupt transmissions are far more likely.
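The parity bit mentioned above works the same way on both serial and parallel links: the sender appends one bit so the count of 1s is even, and the receiver rechecks that count. A minimal sketch:

```python
def add_even_parity(bits: str) -> str:
    """Sender side: append a parity bit so the total number of 1s is even."""
    return bits + str(bits.count("1") % 2)

def check_even_parity(frame: str) -> bool:
    """Receiver side: the frame looks intact if the count of 1s is even."""
    return frame.count("1") % 2 == 0

frame = add_even_parity("01000001")      # ASCII 'A' has two 1s -> parity 0
print(frame)                             # 010000010
print(check_even_parity(frame))          # True

corrupted = "110000010"                  # first bit flipped in transit
print(check_even_parity(corrupted))      # False: single-bit error detected
```

Note that parity only detects an odd number of flipped bits; two flips cancel out, which is why longer links use stronger check codes.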
Asynchronous transmission uses start and stop bits to signify the beginning and end of each character. An 8-bit ASCII character would actually be transmitted using 10 bits; e.g., "0100 0001" would become "1 0100 0001 0". The extra bits at the start and end of the transmission tell the receiver first that a character is coming and secondly that the character has ended. This method of transmission is used when data is sent intermittently, as opposed to in a solid stream. In the example, the start and stop bits are the first and last digits. The start and stop bits must be of opposite polarity. This allows the receiver to recognize when the second packet of information is being sent.
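The framing step can be sketched directly, following the convention of the example above (a 1 marks the start and a 0 marks the end; real UART hardware conventionally uses the opposite polarity):

```python
def frame(data_bits: str) -> str:
    """Wrap 8 data bits in a start bit and a stop bit: 10 bits on the wire."""
    return "1" + data_bits + "0"

def deframe(framed: str) -> str:
    """Receiver: strip the start/stop bits, checking they are present."""
    assert framed[0] == "1" and framed[-1] == "0", "bad start/stop bit"
    return framed[1:-1]

ascii_A = "01000001"
on_wire = frame(ascii_A)
print(on_wire)             # 1010000010
print(deframe(on_wire))    # 01000001
print(len(on_wire))        # 10 bits carry 8 data bits: 25% framing overhead
```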
Synchronous transmission uses no start and stop bits, but instead synchronizes transmission speeds at both the receiving and sending ends of the transmission using clock signal(s) built into each component. A continual stream of data is then sent between the two nodes. Due to there being no start and stop bits, the data transfer rate is quicker, although more errors will occur, as the clocks will eventually get out of sync and the receiving device would sample the line at the wrong times, so some bytes could become corrupted (by losing bits). Ways to get around this problem include re-synchronization of the clocks and the use of check digits to ensure the data is correctly interpreted and received.
Categories of Transmission Media
Data transmission is the way or method to transfer any kind of information from one space to another. The categories of transmission media are: A. Guided Media and B. Unguided Media.
A. Guided Media:
Guided transmission media uses a "cabling" system that guides the data signals along a specific path. The data signals are bound by the "cabling" system; guided media is also known as bound media. There are 4 basic types of guided media: 1. Open Wire, 2. Twisted Pair, 3. Coaxial Cable, 4. Optical Fiber.
Bandwidth comparison:
Cable Type      Bandwidth
Open Wire       0-5 MHz
Twisted Pair    0-100 MHz
Coaxial Cable   0-600 MHz
Optical Fiber   far higher than any of the copper media above
1. Open Wire
2. Twisted Pair:
Each pair consists of a wire used for the +ve data signal and a wire used for the -ve data signal. Any noise that appears on one wire of the pair also occurs on the other wire. Because the wires carry opposite polarities, they are 180 degrees out of phase. Numbers of pairs are bundled together; twisting decreases crosstalk. When the noise appears on both wires, it cancels or nulls itself out at the receiving end. Twisted pair is mainly used for the telephone system and for subscriber loops and LANs. The degree of reduction in noise interference is determined by the number of twists per foot; increasing the number of twists per foot reduces the crosstalk.
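The cancellation effect can be shown numerically: if identical noise lands on both wires and the receiver reads the difference between them, the noise subtracts out. The signal levels and noise range below are invented for illustration.

```python
import random

random.seed(0)
data = [1, 0, 1, 1, 0]

received = []
for bit in data:
    # +ve and -ve signals of the pair, 180 degrees out of phase
    plus, minus = (1.0, -1.0) if bit else (-1.0, 1.0)
    noise = random.uniform(-0.4, 0.4)   # the same noise couples onto BOTH wires
    plus += noise
    minus += noise
    # Receiver reads the difference: the common noise nulls itself out
    received.append(1 if (plus - minus) > 0 else 0)

print(received == data)   # True: the interference cancelled at the receiver
```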
Shielded Twisted Pair:
Shielded twisted pair adds a metallic shield around the pairs to further reduce interference.
Advantages: 1. Easy to string, 2. Cheap.
Disadvantages: Subject to interference such as static and garble. In comparison to the others (coax and optical fiber), twisted pair is limited in bandwidth, distance and data rate; if used for digital systems it requires a repeater after roughly every 2 km. It is susceptible to noise whether shielded or unshielded.
3. Coaxial Cable:
Coaxial cable consists of 2 conductors. The inner conductor is held inside an insulator, with the other conductor woven around it providing a shield. An insulating protective coating called a jacket covers the outer conductor. The outer shield protects the inner conductor from outside electrical signals. The distance between the outer conductor (shield) and the inner conductor, plus the type of material used for insulating the inner conductor, determine the cable properties, or impedance.
Advantages: Not susceptible to interference; transmits faster; with FDM it can carry 10,000 voice channels.
Disadvantages: Heavy and bulky; needs boosters over distance: repeaters or amplifiers are needed every few km.
4. Optical Fiber:
Optical fiber consists of thin glass fibers that can carry information at frequencies in the visible light spectrum and beyond. The typical optical fiber consists of a very narrow strand of glass called the core. Around the core is a concentric layer of glass called the cladding. A typical core diameter is 62.5 microns (1 micron = 10^-6 meters). Typically the cladding has a diameter of 125 microns. Covering the cladding is a protective coating consisting of plastic; it is called the jacket.
Principle of working of Optical Fiber:
The key characteristic of fiber optics is refraction. The core refracts the light and guides the light along its path. The cladding reflects any light back into the core and stops light from escaping through it.
Advantages: Noise immunity: RFI and EMI immune (RFI: Radio Frequency Interference; EMI: Electro-Magnetic Interference). Security: the cable cannot be tapped into. Large capacity due to bandwidth. No corrosion. Longer distances than copper wire. Smaller and lighter than copper wire. Faster transmission rate.
Disadvantages: Physical vibration will show up as signal noise. Limited physical arc of the cable: bend it too much and it will break. Difficult to splice.
Copper: It has low resistance to electrical current, so the signal travels farther. As a guided medium, its bandwidth depends on the thickness of the wire and the distance traveled. Interference: it is susceptible to electromagnetic waves generated by neighbouring wires. Two main types: twisted pair and coaxial cable (coax).
The Advantages of Fiber over Copper: Interference: fiber does not cause interference and is not susceptible to interference. Bandwidth: it handles much higher bandwidth than copper. Low attenuation: it requires fewer repeaters and amplifiers (every 30 km vs. 5 km). It is immune to power surges, failures, and other electromagnetic interference. It is thin and lightweight. Fibers don't leak light and are tough to tap (secure).
Networking of Computers
Introduction to LAN & WAN
One way to categorize the different types of computer network designs is by their scope or scale. For historical reasons, the networking industry refers to nearly every type of design as some kind of area network. Common examples of area network types are:
LAN: Local Area Network
WLAN: Wireless Local Area Network
WAN: Wide Area Network
MAN: Metropolitan Area Network
SAN: Storage Area Network, System Area Network, Server Area Network, or sometimes Small Area Network
CAN: Campus Area Network, Controller Area Network, or sometimes Cluster Area Network
PAN: Personal Area Network
DAN: Desk Area Network
LAN and WAN were the original categories of area networks, while the others have gradually emerged over many years of technology evolution. Note that these network types are a separate concept from network topologies such as bus, ring and star.
Local Area Network
A LAN connects network devices over a relatively short distance. A networked office building, school, or home usually contains a single LAN, though sometimes one building will contain a few small LANs (perhaps one per room), and occasionally a LAN will span a group of nearby buildings. In TCP/IP networking, a LAN is often but not always implemented as a single IP subnet.
In addition to operating in a limited space, LANs are also typically owned, controlled, and managed by a single person or organization. They also tend to use certain connectivity technologies, primarily Ethernet and Token Ring.
Wide Area Network
As the term implies, a
spans a large physical distance. The Internet is the largest WAN,
spanning the Earth.
A WAN is a geographically
dispersed collection of LANs. A network device called a
onnects LANs to a WAN. In IP networking, the router maintains both a LAN address and a
A WAN differs from a LAN in several important ways. Most WANs (like the Internet) are not
owned by any one organization but rather exist under collective o
r distributed ownership and
management. WANs tend to use technology like
over the longer distances.
LAN, WAN and Home Networking
Residences typically employ one LAN and connect to the Internet WAN via an Internet Service Provider (ISP) using a broadband modem. The ISP provides a WAN IP address to the modem, and all of the computers on the home network use LAN (so-called private) IP addresses. All computers on the home LAN can communicate directly with each other but must go through a central gateway, typically a broadband router, to reach the ISP.
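The private/public split described above can be checked programmatically with Python's `ipaddress` module; the two addresses below are just examples (a typical home LAN address and a well-known public DNS address).

```python
import ipaddress

lan_host = ipaddress.ip_address("192.168.1.10")  # typical home-LAN address
wan_side = ipaddress.ip_address("8.8.8.8")       # a well-known public address

print(lan_host.is_private)   # True: only routable inside the home LAN
print(wan_side.is_private)   # False: routable on the Internet
```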
Other Types of Area Networks
While LAN and WAN are by far the most popular network types mentioned, you may also
commonly see references to these others:
Wireless Local Area Network
a LAN based on
wireless network technology
Metropolitan Area Network
a network spanning a physical area larger than a LAN but
smaller than a WAN, such as a city. A MAN is typically owned an operated by a single
entity such as a government body or large corporation.
Campus Area Network
a network spanning multiple LANs but smaller than a MAN,
such as on a university or local business campus.
Storage Area Network
connects servers to data storage devices through a technology
System Area Network
performance computers with high
in a cluster configuration. Also known as Cluster Area Network.
Client-server computing is a distributed application structure that partitions tasks or workloads between service providers, called servers, and service requesters, called clients. Often clients and servers communicate over a computer network on separate hardware, but both client and server may reside in the same system. A server machine is a host that is running one or more server programs which share their resources with clients. A client does not share any of its resources, but requests a server's content or service function. Clients therefore initiate communication sessions with servers, which await (listen for) incoming requests.
The client-server characteristic describes the relationship of cooperating programs in an application. The server component provides a function or service to one or many clients, which initiate requests for such services.
Functions such as email exchange, web access and database access are built on the client-server model. For example, a web browser is a client program running on a user's computer that may access information stored on a web server on the Internet. Users accessing banking services from their computer use a web browser client to send a request to a web server at a bank. That program may in turn forward the request to its own database client program, which sends a request to a database server at another bank computer to retrieve the account information. The balance is returned to the bank database client, which in turn serves it back to the web browser client, displaying the results to the user.
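The listen/request/respond cycle can be sketched with Python's standard `socket` module. Both ends run on one machine here, and the "balance" reply is entirely made up; the same pattern works across a real network.

```python
import socket
import threading

def server(sock):
    """Server: awaits an incoming request, then shares its content."""
    conn, _ = sock.accept()
    with conn:
        request = conn.recv(1024).decode()
        conn.sendall(f"balance for {request}: 100".encode())

listener = socket.socket()
listener.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
listener.listen(1)
port = listener.getsockname()[1]
threading.Thread(target=server, args=(listener,), daemon=True).start()

# Client: initiates the communication session
client = socket.socket()
client.connect(("127.0.0.1", port))
client.sendall(b"account-42")     # hypothetical account identifier
reply = client.recv(1024).decode()
client.close()
print(reply)
```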
Comparison to peer-to-peer architecture
In peer-to-peer architectures, each host or instance of the program can simultaneously act as both a client and a server, and each has equivalent responsibilities and status. Both client-server and peer-to-peer architectures are in wide usage today.
Comparison of Centralized (Client-Server) and Decentralized (Peer-to-Peer) Networking
In most cases, a client-server architecture enables the roles and responsibilities of a computing system to be distributed among several independent computers that are known to each other only through a network. This creates an additional advantage of this architecture: greater ease of maintenance. For example, it is possible to replace, repair, upgrade, or even relocate a server while its clients remain both unaware of and unaffected by that change.
All data is stored on the servers, which generally have far greater security controls than most clients. Servers can better control access and resources, to guarantee that only those clients with the appropriate permissions may access and change data.
Since data storage is centralized, updates to that data are far easier to administer in comparison to a P2P paradigm. In the latter, data updates may need to be distributed and applied to each peer in the network, which is both time-consuming and error-prone, as there can be thousands or even millions of peers.
Many mature client-server technologies are already available which were designed to ensure security, friendliness of the user interface, and ease of use. The model also functions with multiple clients of different capabilities.
As the number of simultaneous client requests to a given server increases, however, the server can become overloaded. Contrast that to a P2P network, whose aggregated bandwidth actually increases as nodes are added, since the P2P network's overall bandwidth can be roughly computed as the sum of the bandwidths of every node in that network.
The client-server paradigm also lacks the robustness of a good P2P network. Under client-server, should a critical server fail, clients' requests cannot be fulfilled. In P2P networks, resources are usually distributed among many nodes. Even if one or more nodes depart and abandon a downloading file, for example, the remaining nodes should still have the data needed to complete the download.
Introduction to 2-tier architecture
2-tier architecture is used to describe client/server systems where the client requests resources and the server responds directly to the request, using its own resources. This means that the server does not call on another application in order to provide part of the service.
Introduction to 3-tier architecture
In 3-tier architecture, there is an intermediary level, meaning the architecture is generally split up between:
A client, i.e. the computer which requests the resources, equipped with a user interface;
The application server (also called middleware), whose task it is to provide the requested resources, but by calling on another server;
The data server, which provides the application server with the data it requires.
Comparing both types of architecture
2-tier architecture is therefore a client-server architecture where the server is versatile, i.e. it is capable of directly responding to all of the client's resource requests. In 3-tier architecture, however, the server-level applications are remote from one another, i.e. each server is specialised with a certain task (for example: web server/database server). 3-tier architecture provides:
A greater degree of flexibility
Increased security, as security can be defined for each service, and at each level
Increased performance, as tasks are shared between servers
In 3-tier architecture, each server (tier 2 and 3) performs a specialised task (a service). A server can therefore use services from other servers in order to provide its own service. As a result, 3-tier architecture is potentially an n-tier architecture.
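The three tiers can be sketched as three layers of functions, each calling only the layer below it. All names and data here are illustrative; in a real deployment each tier would run on its own machine and communicate over the network.

```python
# Tier 3 - data server: only knows how to store and fetch data
DATABASE = {"alice": 250, "bob": 120}

def data_server(account):
    return DATABASE[account]

# Tier 2 - application server (middleware): business logic, calls tier 3
def application_server(account):
    balance = data_server(account)
    return f"{account}: {balance} units"

# Tier 1 - client: user interface only, calls tier 2
def client(account):
    reply = application_server(account)
    print(reply)

client("alice")
```

Because each tier only depends on the one below it, the data server could be replaced (say, by a real database) without touching the client, which is the flexibility advantage noted above.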
Programming languages have evolved tremendously since the early 1950s, and this evolution has resulted in hundreds of different languages being invented and used in the industry. This revolution is needed as we can now instruct computers more easily and faster than ever before, due to technological advancement in hardware with fast processors like the 200 MHz Pentium Pro developed by Intel®. The increase in the quantity and speed of powerful computers being produced, more capable of handling complex code from languages of this generation like AppWare and PROLOG, will prompt language designers to design more efficient code for various applications. This article goes down memory lane to look at the past five generations of languages and at how they revolutionised the computer industry.
Generation of Languages
We start out with the first and second generation languages during the period of 1950 to 1960, which many experienced programmers will say are the machine and assembly languages. Programming language history really began with the work of Charles Babbage in the early nineteenth century, who developed automated calculation for mathematical functions. Further developments in the early 1950s brought us machine language without interpreters and compilers to translate languages. Machine code is an example of a first generation language, residing in the CPU, written for doing multiplication or division. Computers then were programmed in binary notation, which was very prone to errors. A simple algorithm resulted in lengthy code. This was then improved with mnemonic codes to represent operations.
Symbolic assembly codes came next in the mid-1950s: the second generation of programming languages, like AUTOCODER, SAP and SPS. Symbolic addresses allowed programmers to represent memory locations, variables and instructions with names. Programmers now had the flexibility of not changing the addresses for new locations of variables whenever they were modified. This kind of programming is still considered fast, but programming in machine language required deep knowledge of the CPU and the machine's instruction set. This also meant high hardware dependency and lack of portability: assembly or machine code could not run on different machines. For example, code written for the Intel® processor family would look very different from code written for the Motorola 68X00 series. To convert would mean rewriting a whole length of code.
The period from the early 1960s until 1980 saw the emergence of the third generation programming languages. Languages like ALGOL 58, 60 and 68, COBOL, FORTRAN IV, ADA and C are examples of these and were considered high level languages. Most of these languages had compilers, and the advantage of this was speed. Independence was another factor: these languages were machine independent and could run on different machines. The advantages of high level languages include the support for ideas of abstraction, so that programmers can concentrate on finding the solution to the problem rapidly rather than on low-level details of data representation. The comparative ease of use and learning, improved portability and simplified debugging, modification and maintenance led to reliability and lower software costs.
These languages were mostly created following von Neumann constructs, which had sequential procedural operations and code executed using branches and loops. Although the syntax of these languages differed, they shared similar constructs and were more readable by programmers and users compared to assembly languages. Some languages were improved over time and some were influenced by previous languages, taking the desired features thought to be good and discarding unwanted ones. New features were also added to make the language more powerful.
COBOL (COmmon Business Oriented Language), a business data processing language, is an example of a language constantly improving over the decades. It started out with a language called FLOWMATIC in 1955, and this language influenced the birth of COBOL 60 in 1959. Over the years, improvements were made to this language and COBOL 61, 65, 68 and 70 were developed, the language being recognised as a standard in 1961. Now the new COBOL 97 has included new features like Object Oriented Programming to keep up with current languages. One good possible reason for this is the fact that existing code is important, and to develop a totally new language from scratch would be a lengthy process. This was also the rationalisation behind the developments of C and C++.
Then there were languages that evolved from other languages, like LISP 1, developed in 1959 for artificial intelligence work, which evolved into the 1.5 version and strongly influenced languages like MATHLAB, LPL and PL/I. A language like BALM had the combined influence of ALGOL 60 and LISP 1.5. These third generation languages are less processor dependent than lower level languages. An advantage of languages like C++ is that they give the programmer a lot of control over how things are done in creating applications. This control, however, calls for more in-depth knowledge of how the operating system and computer work. Many of the real programmers now still prefer to use these languages, despite the fact that the programmer has to devote a substantial professional effort to the learning of a new complicated syntax which sometimes has little relation to human-language syntax, even if it is in English.
Third generation languages often followed procedural code, meaning the language performs functions defined in specific procedures detailing how something is done. In comparison, most fourth generation languages are nonprocedural. A disadvantage of fourth generation languages was that they were slow compared to compiled languages, and they also lacked control. Powerful languages of the future will combine procedural code and nonprocedural statements, together with the flexibility of interactive screen applications, a powerful way of developing applications.
Nonprocedural languages specify what is to be accomplished but not how, and are not concerned with the detailed procedures needed to achieve the target, as in graphics packages, applications and report generators. The need for this kind of language is in line with the minimum-work-and-skill, point-and-click concept, for commercial end users of software applications that were designed, unseen by them, using third generation languages. Programmers whose primary interests are programming and computing use third generation languages, while programmers who use computers and programs to solve problems from other domains are the main users of the fourth generation languages.
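The procedural/nonprocedural distinction can be illustrated in miniature in Python (chosen here only for brevity); the data and variable names are invented for the example:

```python
orders = [120, 45, 300, 80]

# Procedural (3GL style): spell out *how* - initialise, loop, accumulate.
total = 0
for amount in orders:
    if amount > 100:
        total += amount

# Nonprocedural (4GL spirit): state *what* is wanted and let the
# language/runtime decide how to compute it.
declarative_total = sum(amount for amount in orders if amount > 100)

assert total == declarative_total == 420
```

A 4GL report generator or query language takes the second style much further, generating the equivalent looping and accumulating code behind the scenes.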
Features quite clearly evident in fourth generation languages are that they must be user friendly, portable and independent of operating systems, usable by non-programmers, have intelligent default options about what the user wants, and allow the user to obtain results fast, using minimum-requirement, bug-free code generated from high-level specifications (employing database and dictionary management which makes applications easy and quick to change), which was not possible using COBOL or PL/I. Standardisation, however, in early stages of evolution can inhibit creativity in developing powerful languages for the future. Examples of this generation of languages are IBM's ADRS2, APL, CSP and AS, Power Builder and Access.
The 1990s saw the development of fifth generation languages like PROLOG, referring to systems used in the field of artificial intelligence. This means computers can in the future have the ability to think for themselves and draw their own inferences using programmed information in large databases. Complex processes like understanding speech would appear to be trivial using these fast inferences and would make the software seem highly intelligent. In fact, these databases, programmed in a specialised area of study, would show a significant expertise greater than humans'. Also, improvements in the fourth generation languages now carried features where users did not need any programming knowledge. Little or no coding and computer-aided design with graphics provide an easy-to-use product that can generate new applications.
What does the next generation of languages hold for us? The sixth generation? That is pretty uncertain at the moment. Fast processors, as in fifth generation computers, with multiple processors operating in parallel to solve problems simultaneously, will probably ignite a whole new type of language design. The current trend of the Internet and the World Wide Web could cultivate a whole new breed of radical programmers for the future, now exploring new boundaries with languages like HTML and Java.
In computing, an interpreter normally means a computer program that executes, i.e. performs, instructions written in a programming language. An interpreter may be a program that 1) executes the source code directly, 2) translates source code into some efficient intermediate representation (code) and immediately executes this, or 3) explicitly executes stored precompiled code made by a compiler which is part of the interpreter system. Languages such as Perl and Python are examples of type 2, while UCSD Pascal is an example of type 3: source programs are compiled ahead of time and stored as machine independent code, which is then linked at run-time and executed by an interpreter and/or compiler (for JIT systems). Some systems, such as Smalltalk and others, may also combine 2 and 3.
While interpretation and compilation are the two principal means by which programming languages are implemented, these are not fully distinct categories, one of the reasons being that most interpreting systems also perform some translation work, just like compilers. The terms "interpreted language" or "compiled language" merely mean that the canonical implementation of that language is an interpreter or a compiler; a high level language is basically an abstraction which is (ideally) independent of particular implementations.
Advantages and disadvantages of using interpreters
Programmers usually write programs in high level code, which the CPU cannot execute. So this source code has to be converted into machine code. This conversion is done by a compiler or an interpreter. A compiler makes the conversion just once, while an interpreter typically converts it every time a program is executed (or, in some languages, such as early versions of BASIC, every time a single instruction is executed).
During program development the programmer makes frequent changes to source code. A compiler needs to compile the altered source files and link the whole binary code before the program can be executed. An interpreter usually just needs to translate to an intermediate representation, or not translate at all, thus requiring less time before the changes can be tested. This often makes interpreted languages generally easier to learn, and bugs easier to find and correct. Thus simple interpreted languages tend to offer a friendlier environment for beginners.
A compiler converts source code into binary instructions for a specific processor's architecture, thus making it less portable. This conversion is made just once, in the developer's environment, and after that the same binary can be distributed to the users' machines, where it can be executed without further translation. An interpreted program can be distributed as source code. It needs to be translated on each final machine, which takes more time but makes the program distribution independent of the machine's architecture.
An interpreter will make source translations during run-time. This means every line has to be converted each time the program runs. This process slows down program execution and is a major disadvantage of interpreters over compilers. Another main disadvantage of interpreters is that the interpreter must be present on the executing machine as additional software to run the program.
Abstract Syntax Tree interpreters
In the spectrum between interpreting and compiling, another approach is transforming the source code into an optimized Abstract Syntax Tree (AST), then executing the program following this tree structure. In this approach, each sentence needs to be parsed just once. As an advantage over bytecode, the AST keeps the global program structure and the relations between statements (which are lost in a bytecode representation), and provides a more compact representation.
Thus, the AST has been proposed as a better intermediate format for just-in-time compilers than bytecode. It also allows better analysis to be performed during runtime. An AST interpreter has been shown to be faster than a similar bytecode-based interpreter, due to the powerful optimizations allowed by having the complete structure of the program, as well as higher level typing, available during execution.
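As a rough sketch of the idea (a toy arithmetic language, not any production design), an AST interpreter in Python might parse once into node objects and then walk the tree on every run:

```python
# Toy AST interpreter: build the tree once, then evaluate by walking it.

class Num:
    def __init__(self, value):
        self.value = value
    def eval(self, env):
        return self.value

class Var:
    def __init__(self, name):
        self.name = name
    def eval(self, env):
        return env[self.name]

class Add:
    def __init__(self, left, right):
        self.left, self.right = left, right
    def eval(self, env):
        return self.left.eval(env) + self.right.eval(env)

class Mul:
    def __init__(self, left, right):
        self.left, self.right = left, right
    def eval(self, env):
        return self.left.eval(env) * self.right.eval(env)

# The tree for "x * (2 + 3)" is built (parsed) just once...
tree = Mul(Var("x"), Add(Num(2), Num(3)))

# ...and can then be executed many times without re-parsing.
print(tree.eval({"x": 4}))   # 20
print(tree.eval({"x": 10}))  # 50
```

Note how the tree keeps the program structure (a multiplication whose right operand is an addition) that a flat bytecode sequence would lose.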
Further blurring the distinction between interpreters, bytecode interpreters and compilers is just-in-time compilation (or JIT), a technique in which the intermediate representation is compiled to native machine code at runtime. This confers the efficiency of running native code, at the cost of startup time and increased memory use when the bytecode or AST is first compiled. Adaptive optimization is a complementary technique in which the interpreter profiles the running program and compiles its most frequently executed parts into native code. Both techniques are a few decades old, appearing in languages such as Smalltalk in the 1980s. Just-in-time compilation has gained mainstream attention amongst language implementers in recent years, with platforms such as Java and the .NET Framework now including JITs.
Punched card interpreter
The term "interpreter" often referred to a piece of unit record equipment that could read punched cards and print the characters in human-readable form on the card. The IBM 550 Numeric Interpreter and IBM 557 Alphabetic Interpreter are typical examples, from 1930 and 1954 respectively.
Assembly languages are a type of low-level language for programming computers, microprocessors, microcontrollers, and other (usually) integrated circuits. They implement a symbolic representation of the numeric machine codes and other constants needed to program a particular CPU architecture. This representation is usually defined by the hardware manufacturer, and is based on abbreviations (called mnemonics) that help the programmer remember individual instructions, registers, etc. An assembly language family is thus specific to a certain physical (or virtual) computer architecture. This is in contrast to most high-level languages, which are (ideally) portable.
A utility program called an assembler is used to translate assembly language statements into the target computer's machine code. The assembler performs a more or less one-to-one translation from mnemonic statements into machine instructions and data. This is in contrast with high-level languages, in which a single statement generally results in many machine instructions.
Many sophisticated assemblers offer additional mechanisms to facilitate program development, control the assembly process, and aid debugging. In particular, most modern assemblers include a macro facility (described below), and are called macro assemblers.
Typically a modern assembler creates object code by translating assembly instruction mnemonics into opcodes, and by resolving symbolic names for memory locations and other entities. The use of symbolic references is a key feature of assemblers, saving tedious address calculations and manual address updates after program modifications. Most assemblers also include macro facilities for performing textual substitution, e.g. to generate common short sequences of instructions inline, instead of as called subroutines, or even to generate entire programs or program suites.
Assemblers are generally simpler to write than compilers for high-level languages, and have been available since the 1950s. Modern assemblers, especially for RISC-based architectures such as MIPS, Sun SPARC and HP PA-RISC, as well as x86(-64), optimize instruction scheduling to exploit the CPU pipeline efficiently.
There are two types of assemblers, based on how many passes through the source are needed to produce the executable program.
One-pass assemblers go through the source code once and assume that all symbols will be defined before any instruction that references them.
Two-pass assemblers (and multi-pass assemblers) create a table with all unresolved symbols in the first pass, then use the second pass to resolve these addresses. The advantage of a one-pass assembler is speed, which is not as important as it once was with advances in computer speed and capabilities. The advantage of the two-pass assembler is that symbols can be defined anywhere in the program source. As a result, the program can be organized in a more logical and meaningful way. This makes two-pass assembler programs easier to read and maintain.
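The two-pass idea can be sketched in Python for a made-up toy instruction set; the mnemonics, opcode numbers and example program are all invented for illustration:

```python
# Toy two-pass assembler: pass 1 records label addresses,
# pass 2 resolves references and emits (opcode, operand) pairs.

OPCODES = {"LOAD": 1, "JMP": 2, "HALT": 3}  # invented opcode numbers

def assemble(lines):
    # Pass 1: build the symbol table; labels end with ':'.
    symbols, address = {}, 0
    for line in lines:
        if line.endswith(":"):
            symbols[line[:-1]] = address   # label points at next instruction
        else:
            address += 1
    # Pass 2: translate mnemonics, resolving symbolic operands.
    code = []
    for line in lines:
        if line.endswith(":"):
            continue
        parts = line.split()
        op, arg = parts[0], parts[1] if len(parts) > 1 else None
        operand = symbols.get(arg, arg)    # label -> address, else literal
        code.append((OPCODES[op], int(operand) if operand is not None else 0))
    return code

program = [
    "JMP end",    # forward reference: legal only because of the 2nd pass
    "start:",
    "LOAD 7",
    "end:",
    "HALT",
]
print(assemble(program))  # [(2, 2), (1, 7), (3, 0)]
```

The `JMP end` line references a symbol defined two instructions later; a strict one-pass assembler would have to reject it or patch addresses afterwards.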
High-level assemblers provide language abstractions such as:
Advanced control structures
High-level procedure/function declarations and invocations
High-level abstract data types, including structures/records, unions, classes, and sets
Sophisticated macro processing (although available on ordinary assemblers for some machines since the late 1950s)
Object-oriented features such as classes, objects, abstraction, polymorphism, and inheritance
Note that, in normal professional usage, the term assembler is often used ambiguously: it is frequently used to refer to an assembly language itself, rather than to the assembler utility. Thus: "CP/CMS was written in S/360 assembler" as opposed to "ASM-H was a widely-used S/370 assembler."
A program written in assembly language consists of a series of instructions, mnemonics that correspond to a stream of executable instructions, when translated by an assembler, that can be loaded into memory and executed.
For example, an x86/IA-32 processor can execute the following binary instruction as expressed in machine language:
Hexadecimal: B0 61 (Binary: 10110000 01100001)
The equivalent assembly language representation is easier to remember: mov al, 061h
This instruction means: Move (really a copy of) the hexadecimal value '61' into the processor register named "AL". (The h suffix means hexadecimal; 61 hexadecimal = 97 in decimal.)
The mnemonic "mov" represents the opcode that moves the value in the second operand into the register indicated by the first operand. The mnemonic was chosen by the designer of the instruction set to abbreviate "move", making it easier for the programmer to remember. Typical of an assembly language statement, a comma-separated list of arguments or parameters follows the opcode.
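For the specific `mov al, imm8` form above, the encoding can be checked mechanically. The sketch below hardcodes just this one x86 instruction form (opcode 0xB0 plus the register number, then the immediate byte) and is not a general assembler:

```python
# Encode the x86 "mov r8, imm8" form: byte (0xB0 + reg), then the immediate.
# Only four 8-bit registers are handled; everything else is out of scope.

REG8 = {"al": 0, "cl": 1, "dl": 2, "bl": 3}  # low 8-bit register numbers

def encode_mov_imm8(reg, value):
    opcode = 0b10110000 | REG8[reg]   # 0xB0 + register number
    return bytes([opcode, value])

machine_code = encode_mov_imm8("al", 0x61)
print(machine_code.hex())  # b061
```

Running this reproduces exactly the B0 61 byte sequence shown in the text, making the mnemonic-to-opcode mapping concrete.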
In practice many programmers drop the word mnemonic and, technically incorrectly, call "mov" an opcode. When they do this they are referring to the underlying binary code it represents. To put it another way, a mnemonic such as "mov" is not an opcode, but as it symbolizes an opcode, one might refer to "the opcode mov", for example when one intends to refer to the binary opcode it symbolizes rather than to the symbol itself. As few modern programmers need to be mindful of what the binary patterns actually are (the opcodes for specific instructions), the distinction has in practice become a bit blurred among programmers, but not among processor designers.
Transforming assembly into machine language is accomplished by an assembler, and the (partial) reverse by a disassembler. Unlike in high-level languages, there is usually a one-to-one correspondence between simple assembly statements and machine language instructions. However, in some cases, an assembler may provide pseudoinstructions which expand into several machine language instructions to provide commonly needed functionality. For example, for a machine that lacks a "branch if greater or equal" instruction, an assembler may provide a pseudoinstruction that expands to the machine's "set if less than" and "branch if zero (on the result of the set instruction)". Most full-featured assemblers also provide a rich macro language (discussed below) which is used by vendors and programmers to generate more complex code and data sequences.
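The "branch if greater or equal" example can be sketched as a simple expansion step in Python; the MIPS-flavoured mnemonics (`slt`, `beq`) and register names are used for illustration only:

```python
# Expand a "bge" pseudoinstruction into real instructions, in the spirit of
# MIPS assemblers: "branch if greater or equal" becomes "set if less than"
# followed by "branch if (the set result is) zero".

def expand(instr):
    op, *args = instr.split()
    if op == "bge":                     # bge rs, rt, label
        rs, rt, label = [a.rstrip(",") for a in args]
        return [
            f"slt $at, {rs}, {rt}",     # $at = 1 if rs < rt, else 0
            f"beq $at, $zero, {label}", # branch when rs >= rt
        ]
    return [instr]                      # real instructions pass through

print(expand("bge $t0, $t1, done"))
```

The programmer writes one statement, but the assembler emits two machine instructions, which is exactly where the one-to-one correspondence breaks down.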
Each computer architecture usually has its own machine language. On this level, each instruction is simple enough to be executed using a relatively small number of electronic circuits. Computers differ in the number and type of operations they support. For example, a new 64-bit machine would have different circuitry from a 32-bit machine. They may also have different sizes and numbers of registers, and different representations of data types in storage. While most general-purpose computers are able to carry out essentially the same functionality, the ways they do so differ; the corresponding assembly languages reflect these differences.
Multiple sets of assembly-language syntax may exist for a single instruction set, typically instantiated in different assembler programs. In these cases, the most popular one is usually that supplied by the manufacturer and used in its documentation.
A compiler is a computer program (or set of programs) that transforms source code written in a computer language (the source language) into another computer language (the target language, often having a binary form known as object code). The most common reason for wanting to transform source code is to create an executable program.
The name "compiler" is primarily used for programs that translate source code from a high-level programming language to a lower level language (e.g., assembly language or machine language). A program that translates from a low level language to a higher level one is a decompiler. A program that translates between high-level languages is usually called a language translator, source to source translator, or language converter. A language rewriter is usually a program that translates the form of expressions without a change of language.
A compiler is likely to perform many or all of the following operations: lexical analysis, preprocessing, parsing, semantic analysis, code generation, and code optimization.
Program faults caused by incorrect compiler behavior can be very difficult to track down and work around, and compiler implementors invest a lot of time ensuring the correctness of their software. The term compiler-compiler is sometimes used to refer to a parser generator, a tool often used to help create the lexer and parser.
Compilers enabled the development of programs that are machine-independent. Before the development of FORTRAN (FORmula TRANslator), the first higher-level language, in the 1950s, machine-dependent assembly language was widely used. While assembly language produces more reusable and relocatable programs than machine code on the same architecture, it has to be modified or rewritten if the program is to be executed on different hardware.
With the advance of the high-level programming languages that soon followed FORTRAN, such as COBOL, C and BASIC, programmers could write machine-independent source programs. A compiler translates the high-level source programs into target programs in machine languages for the specific architectures. Once the target program is generated, the user can execute the program.
Structure of compiler
Compilers bridge source programs in high-level languages with the underlying hardware. A compiler requires 1) to recognize legitimacy of programs, 2) to generate correct and efficient code, 3) run-time organization, and 4) to format output according to assembler or linker conventions. A compiler consists of three main parts: frontend, middle-end, and backend.
The frontend checks whether the program is correctly written in terms of the programming language syntax and semantics. Here legal and illegal programs are recognized. Errors are reported, if any, in a useful way. Type checking is also performed, by collecting type information. The frontend generates IR (intermediate representation) for the middle-end. Optimization of this part is almost complete, as much of it is already automated; there are efficient algorithms, typically in O(n) or O(n log n).
The middle-end is where the optimizations for performance take place. Typical transformations for optimization are 1) removal of useless or unreachable code, 2) discovering and propagating constant values, 3) relocation of computation to a less frequently executed place (e.g., out of a loop), and 4) specializing a computation based on the context. The middle-end generates IR for the following backend. Most optimization efforts are focused on this part.
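Two of the transformations just listed, discovering and propagating constants and removing useless code, can be sketched over a tiny made-up three-address IR in Python; the IR format and the `live_out` parameter are invented for the example:

```python
# Tiny IR: (dest, op, arg1, arg2); literals are ints, variables are strings.
# We propagate known constants and drop computations that fold away.

def optimize(ir, live_out):
    consts, optimized = {}, []
    for dest, op, a, b in ir:
        a = consts.get(a, a)                   # constant propagation
        b = consts.get(b, b)
        if op == "add" and isinstance(a, int) and isinstance(b, int):
            consts[dest] = a + b               # fold: dest is now a constant
        else:
            optimized.append((dest, op, a, b))
    # Useless-code removal: keep folded constants only if still needed.
    for var in live_out:
        if var in consts:
            optimized.append((var, "const", consts[var], None))
    return optimized

ir = [
    ("t1", "add", 2, 3),        # t1 = 5 (foldable)
    ("t2", "add", "t1", 4),     # t2 = 9 (foldable after propagation)
    ("t3", "add", "t2", "x"),   # depends on runtime x: kept
]
print(optimize(ir, live_out=["t3"]))  # [('t3', 'add', 9, 'x')]
```

Three instructions collapse into one: the two compile-time additions disappear, and only the instruction that depends on a runtime value survives.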
The backend is responsible for translation of the IR into the target assembly code. The target instruction(s) are chosen for each IR instruction, and registers are selected for variables. The backend utilizes the hardware by figuring out how to keep parallel functional units busy, filling delay slots, and so on. Although most algorithms for optimization are in NP, heuristic techniques are well-developed.
One classification of compilers is by the platform on which their generated code executes. This is known as the target platform. A native or hosted compiler is one whose output is intended to directly run on the same type of computer and operating system that the compiler itself runs on. The output of a cross compiler is designed to run on a different platform. Cross compilers are often used when developing software for embedded systems that are not intended to support a software development environment.
The output of a compiler that produces code for a virtual machine (VM) may or may not be executed on the same platform as the compiler that produced it. For this reason such compilers are not usually classified as native or cross compilers.
Compiled versus interpreted languages
Higher-level programming languages are generally divided for convenience into compiled languages and interpreted languages. However, in practice there is rarely anything about a language that requires it to be exclusively compiled or exclusively interpreted, although it is possible to design languages that rely on re-interpretation at run time. The categorization usually reflects the most popular or widespread implementations of a language: for instance, BASIC is sometimes called an interpreted language, and C a compiled one, despite the existence of BASIC compilers and C interpreters.
Modern trends toward just-in-time compilation and bytecode interpretation at times blur the traditional categorizations of compilers and interpreters.
Some language specifications spell out that implementations must include a compilation facility; for example, Common Lisp. However, there is nothing inherent in the definition of Common Lisp that stops it from being interpreted. Other languages have features that are very easy to implement in an interpreter, but make writing a compiler much harder; for example, SNOBOL4 and many scripting languages allow programs to construct arbitrary source code at runtime with regular string operations, and then execute that code by passing it to a special evaluation function. To implement these features in a compiled language, programs must usually be shipped with a runtime library that includes a version of the compiler itself.
The output of some compilers may target hardware at a very low level, for example a Field Programmable Gate Array (FPGA) or structured application-specific integrated circuit (ASIC). Such compilers are said to be hardware compilers or synthesis tools, because the programs they compile effectively control the final configuration of the hardware and how it operates; the output of the compilation is not instructions that are executed in sequence, but an interconnection of transistors or lookup tables. For example, XST is the Xilinx Synthesis Tool used for configuring FPGAs. Similar tools are available from Altera, Synplicity, Synopsys and other vendors.
In the early days, the approach taken to compiler design used to be directly affected by the complexity of the processing, the experience of the person(s) designing it, and the resources available. A compiler for a relatively simple language written by one person might be a single, monolithic piece of software. When the source language is large and complex, and high quality output is required, the design may be split into a number of relatively independent phases. Having separate phases means development can be parceled up into small parts and given to different people. It also becomes much easier to replace a single phase by an improved one, or to insert new phases later (e.g., additional optimizations).
The division of the compilation processes into phases was championed by the Production Quality Compiler-Compiler Project (PQCC) at Carnegie Mellon University. This project introduced the terms front end, middle end, and back end.
All but the smallest of compilers have more than two phases. However, these phases are usually regarded as being part of the front end or the back end. The point at which these two ends meet is always open to debate. The front end is generally considered to be where syntactic and semantic processing takes place, along with translation to a lower level of representation (than source code).
The middle end is usually designed to perform optimizations on a form other than the source code or machine code. This source code/machine code independence is intended to enable generic optimizations to be shared between versions of the compiler supporting different languages and target processors.
The back end takes the output from the middle end. It may perform more analysis, transformations and optimizations that are for a particular computer. Then, it generates code for a particular processor and OS.
This front-end/middle-end/back-end approach makes it possible to combine front ends for different languages with back ends for different CPUs. Practical examples of this approach are the GNU Compiler Collection, LLVM, and the Amsterdam Compiler Kit, which have multiple front-ends, shared analysis and multiple back-ends.
One-pass versus multi-pass compilers
Classifying compilers by number of passes has its background in the hardware resource limitations of computers. Compiling involves performing lots of work, and early computers did not have enough memory to contain one program that did all of this work. So compilers were split up into smaller programs which each made a pass over the source (or some representation of it), performing some of the required analysis and translations.
The ability to compile in a single pass has classically been seen as a benefit because it simplifies the job of writing a compiler, and one-pass compilers generally compile faster than multi-pass compilers. Thus, partly driven by the resource limitations of early systems, many early languages were specifically designed so that they could be compiled in a single pass (e.g., Pascal).
In some cases the design of a language feature may require a compiler to perform more than one pass over the source. For instance, consider a declaration appearing on line 20 of the source which affects the translation of a statement appearing on line 10. In this case, the first pass needs to gather information about declarations appearing after the statements that they affect, with the actual translation happening during a subsequent pass.
The disadvantage of compiling in a single pass is that it is not possible to perform many of the sophisticated optimizations needed to generate high quality code. It can be difficult to count exactly how many passes an optimizing compiler makes. For instance, different phases of optimization may analyse one expression many times but only analyse another expression once.
Splitting a compiler up into small programs is a technique used by researchers interested in producing provably correct compilers. Proving the correctness of a set of small programs often requires less effort than proving the correctness of a larger, single, equivalent program.
While the typical multi-pass compiler outputs machine code from its final pass, there are several other types:
A "source-to-source compiler" is a type of compiler that takes a high level language as its input and outputs a high level language. For example, an automatic parallelizing compiler will frequently take in a high level language program as an input and then transform the code and annotate it with parallel code annotations (e.g. OpenMP) or language constructs.
A stage compiler compiles to the assembly language of a theoretical machine, like some Prolog implementations. This Prolog machine is also known as the Warren Abstract Machine (or WAM). Bytecode compilers for Java, Python, and many more are also a subtype of this.
Just-in-time compilation, used by Smalltalk and Java systems, and also by Microsoft .NET's Common Intermediate Language (CIL): applications are delivered in bytecode, which is compiled to native machine code just prior to execution.
The front end analyzes the source code to build an internal representation of the program, called the intermediate representation. It also manages the symbol table, a data structure mapping each symbol in the source code to associated information such as location, type and scope. This is done over several phases, which include some of the following:
Line reconstruction. Languages which strop their keywords or allow arbitrary spaces within identifiers require a phase before parsing, which converts the input character sequence to a canonical form ready for the parser. The table-driven parsers used in the 1960s typically read the source one character at a time and did not require a separate tokenizing phase. Atlas Autocode and Imp (and some implementations of ALGOL and Coral 66) are examples of stropped languages whose compilers would have a Line Reconstruction phase.
Lexical analysis breaks the source code text into small pieces called tokens. Each token is a single atomic unit of the language, for instance a keyword, identifier or symbol name. The token syntax is typically a regular language, so a finite state automaton constructed from a regular expression can be used to recognize it. This phase is also called lexing or scanning, and the software doing lexical analysis is called a lexical analyzer or scanner.
Preprocessing. Some languages, e.g., C, require a preprocessing phase which supports macro substitution and conditional compilation. Typically the preprocessing phase occurs before syntactic or semantic analysis; e.g. in the case of C, the preprocessor manipulates lexical tokens rather than syntactic forms. However, some languages such as Scheme support macro substitutions based on syntactic forms.
Syntax analysis involves parsing the token sequence to identify the syntactic structure of the program. This phase typically builds a parse tree, which replaces the linear sequence of tokens with a tree structure built according to the rules of a formal grammar which define the language's syntax. The parse tree is often analyzed, augmented, and transformed by later phases in the compiler.
Semantic analysis is the phase in which the compiler adds semantic information to the parse tree and builds the symbol table. This phase performs semantic checks such as type checking (checking for type errors), object binding (associating variable and function references with their definitions), or definite assignment (requiring all local variables to be initialized before use), rejecting incorrect programs or issuing warnings. Semantic analysis usually requires a complete parse tree, meaning that this phase logically follows the parsing phase and logically precedes the code generation phase, though it is often possible to fold multiple phases into one pass over the code in a compiler implementation.
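The lexical-analysis phase described among the front-end phases can be sketched with Python's `re` module; the token names and the tiny token set are invented for the example:

```python
import re

# Token syntax is a regular language, so regular expressions suffice.
TOKEN_SPEC = [
    ("NUMBER", r"\d+"),
    ("IDENT",  r"[A-Za-z_]\w*"),
    ("OP",     r"[+\-*/=]"),
    ("SKIP",   r"\s+"),
]
MASTER = re.compile("|".join(f"(?P<{name}>{pat})" for name, pat in TOKEN_SPEC))

def tokenize(source):
    # Break the character stream into (kind, text) tokens.
    tokens = []
    for match in MASTER.finditer(source):
        if match.lastgroup != "SKIP":          # whitespace carries no meaning
            tokens.append((match.lastgroup, match.group()))
    return tokens

print(tokenize("total = total + 42"))
# [('IDENT', 'total'), ('OP', '='), ('IDENT', 'total'), ('OP', '+'), ('NUMBER', '42')]
```

The parser then consumes this flat token list and arranges it into a parse tree according to the grammar; the lexer itself knows nothing about structure.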
The term back end is sometimes confused with code generator because of the overlapped functionality of generating assembly code. Some literature uses middle end to distinguish the generic analysis and optimization phases in the back end from the machine-dependent code generators.
The main phases of the back end include the following:
Compiler analysis: this is the gathering of program information from the intermediate representation derived from the input. Typical analyses are data flow analysis, dependence analysis, alias analysis, pointer analysis, escape analysis, etc. Accurate analysis is the basis for any compiler optimization. The call graph and control flow graph are usually also built during the analysis phase.
Optimization: the intermediate language representation is transformed into functionally equivalent but faster (or smaller) forms. Popular optimizations are inline expansion, dead code elimination, constant propagation, loop transformation and register allocation.
Code generation: the transformed intermediate language is translated into the output language, usually the native machine language of the system. This involves resource and storage decisions, such as deciding which variables to fit into registers and memory, and the selection and scheduling of appropriate machine instructions along with their associated addressing modes.
Compiler analysis is the prerequisite for any compiler optimization, and the two work tightly together. For example, dependence analysis is crucial for loop transformation. In addition, the scope of compiler analysis and optimization varies greatly, from as small as a basic block to the procedure/function level, or even over the whole program (interprocedural optimization). Obviously, a compiler can potentially do a better job using a broader view. But that broad view is not free: large-scope analysis and optimizations are very costly in terms of compilation time and memory space; this is especially true for interprocedural analysis and optimizations. Due to the extra time and space needed for compiler analysis and optimizations, some compilers skip them by default, and users have to use compilation options to explicitly tell the compiler which optimizations should be enabled.
Introduction to 4GLs
A fourth-generation programming language (4GL) is a programming language or programming environment designed with a specific purpose in mind, such as the development of commercial business software. In the history of computer science, the 4GL followed the 3GL in an upward trend toward higher abstraction and statement power.
The 4GL was followed by efforts to define and use a 5GL. The natural-language, block-structured mode of the third-generation programming languages improved the process of software development. However, 3GL development methods can be slow and error-prone. It became clear that some applications could be developed more rapidly by adding a higher-level programming language and methodology which would generate the equivalent of very complicated 3GL instructions with fewer errors. In some senses, software engineering arose to handle 3GL development. 4GL and 5GL projects are more oriented toward problem solving and systems engineering.
All 4GLs are designed to reduce programming effort, the time it takes to develop software, and the cost of software development. They are not always successful in this task, sometimes resulting in inelegant and unmaintainable code. However, given the right problem, the use of an appropriate 4GL can be spectacularly successful, as was seen with Santa Fe's real-time tracking of their freight cars, where the productivity gains were estimated to be 8 times over COBOL. The usability improvements obtained by some 4GLs (and their environment) allowed better exploration for heuristic solutions than did the 3GL.
A quantitative definition of 4GL has been set by Capers Jones, as part of his work on function point analysis. Jones defines the various generations of programming languages in terms of developer productivity, measured in function points per staff-month (FP/SM). A 4GL is defined as a language that supports 12-20 FP/SM. This correlates with about 16-27 lines of code per function point implemented in a 4GL.
Fourth-generation languages have often been compared to domain-specific languages (DSLs). Some researchers state that 4GLs are a subset of DSLs. Given the persistence of assembly language even now in advanced development environments (MS Studio), one expects that a system ought to be a mixture of all the generations, with only very limited use of assembly.
A number of different types of 4GLs exist:
Table-driven (codeless) programming, usually running with a runtime framework and libraries. Instead of using code, the developer defines his logic by selecting an operation from a pre-defined list of memory or data table manipulation commands. In other words, instead of coding, the developer uses table-driven algorithm programming (see also control tables, which can be used for this purpose). These types of tools can be used for business application development, usually consisting in a package allowing for both business data manipulation and reporting; therefore they come with GUI screens and report editors. They usually offer integration with lower-level DLLs generated from a typical 3GL for when the need arises for more hardware/OS-specific operations.
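The table-driven style can be sketched in ordinary code: the "program" is just a list of rows naming operations from a pre-defined command list, and a small runtime interprets them. The command names and runtime below are invented for illustration, not drawn from any actual 4GL product:

```python
# Sketch of table-driven (codeless) programming: the developer supplies
# only rows of (command, args); control flow lives in the runtime.
def op_filter(rows, field, value):
    """Keep only the records whose field equals value."""
    return [r for r in rows if r[field] == value]

def op_select(rows, *fields):
    """Project each record onto the named fields."""
    return [{f: r[f] for f in fields} for r in rows]

COMMANDS = {"FILTER": op_filter, "SELECT": op_select}   # pre-defined list

def run(program, rows):
    for command, *args in program:      # each table row names one operation
        rows = COMMANDS[command](rows, *args)
    return rows

customers = [{"name": "Ann", "city": "Pune"}, {"name": "Raj", "city": "Mahe"}]
report = run([("FILTER", "city", "Mahe"), ("SELECT", "name")], customers)
```

The developer never writes loops or conditionals; adding a new capability to the tool means adding an entry to the command table, which is how such packages grow their pre-defined operation lists.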
Report-generator programming languages take a description of the data format and the report to generate, and from that they either generate the required report directly or generate a program to generate the report. Similarly, forms generators manage online interactions with the application system users or generate programs to do so.
More ambitious 4GLs (sometimes termed fourth-generation environments) attempt to automatically generate whole systems from the outputs of CASE tools, specifications of screens and reports, and possibly also the specification of some additional processing logic.
Data-management 4GLs such as SAS, SPSS and Stata provide sophisticated coding commands for data manipulation, file reshaping, case selection and documentation in the preparation of data for statistical analysis and reporting.
Some 4GLs have integrated tools which allow for the easy specification of all the required information: James Martin's own Information Engineering systems-development methodology was automated to allow the input of the results of system analysis and design in the form of data flow diagrams, entity relationship diagrams, entity life history diagrams etc., from which hundreds of thousands of lines of COBOL would be generated overnight. The Oracle Developer Suite products could likewise be integrated to produce database definitions and the forms and reports required.
Applications in Entertainment
Our robots are designed to suit interdisciplinary research by people who are not necessarily familiar with robotics (our users demonstrate the wide range of use of our products). This also opens up possibilities for other, non-research domains (e.g. Internet, arts, advertising) to use our robots for other creative purposes.
Entertainment is a new, growing domain for mobile robotics. This domain ranges from domestic toys to public installations or performances. K-Team has been very active in this domain and has participated in several international collaborations.
Entertainment Applications and User Examples
Our robots have already been used in several entertainment projects, thanks to the show that a robot can put on. The Khepera has already been used to animate a display at the train station in Lausanne.
Animation of Exhibitions
A Khepera acting with the correct behaviour in a well-designed environment can capture the attention of visitors. For instance, see the project in which a Khepera is installed in an environment that creates music based on the experiences and "emotions" of the robot as it explores its surroundings.
The project Robots+Avatars, dreaming with Virtual Illusions received an honorary mention for interactive arts at Ars Electronica 99.
The "pot" is a good example of a sculpture moving on the roof of a school. This creation has a Kameleon board as its core.
Automation is the use of control systems (such as numerical control, programmable logic control, and other industrial control systems), in concert with other applications of information technology (such as computer-aided technologies [CAD, CAM, CAx]), to control industrial machinery and processes, reducing the need for human intervention. In the scope of industrialization, automation is a step beyond mechanization. Whereas mechanization provided human operators with machinery to assist them with the muscular requirements of work, automation greatly reduces the need for human sensory and mental requirements as well.
Processes and systems can also be automated.
Automation plays an increasingly important role in the world economy and in daily experience.
Engineers strive to combine automated devices with mathematical and organizational tools to
create complex systems for a rapidly expanding range of applications and human activities.
Many roles for humans in industrial processes presently lie beyond the scope of automation. Human-level pattern recognition, language recognition, and language production are well beyond the capabilities of modern mechanical and computer systems. Tasks requiring subjective assessment or synthesis of complex sensory data, such as scents and sounds, as well as high-level tasks such as strategic planning, currently require human expertise. In many cases, the use of humans is more cost-effective than mechanical approaches even where automation of industrial tasks is possible.
Specialised hardened computers, referred to as programmable logic controllers (PLCs), are frequently used to synchronize the flow of inputs from (physical) sensors and events with the flow of outputs to actuators and events. This leads to precisely controlled actions that permit tight control of almost any industrial process.
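The input-to-output synchronization a PLC performs can be sketched as a repeated scan cycle: read the sensor inputs, evaluate the control logic, write the actuator outputs, and repeat. The heater example below is invented for illustration; real PLC programs are typically written in ladder logic or other IEC 61131-3 languages rather than Python:

```python
# Sketch of one pass of a PLC-style scan cycle: simple on/off control
# with hysteresis for an (illustrative) heater actuator.
def scan_cycle(temperature, heater_on, setpoint=60.0, hysteresis=2.0):
    """Return the heater output for one scan, given the sensor input."""
    if temperature < setpoint - hysteresis:
        return True           # too cold: energise the heater output
    if temperature > setpoint + hysteresis:
        return False          # too hot: de-energise it
    return heater_on          # inside the band: hold the last state

state = False
for reading in [55.0, 59.5, 63.0]:    # simulated sensor inputs, deg C
    state = scan_cycle(reading, state)
```

The hysteresis band is the design choice of interest: holding the previous output inside the band prevents the actuator from chattering on and off around the setpoint.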
Human-machine interfaces (HMI) or computer-human interfaces (CHI), formerly known as man-machine interfaces, are usually employed to communicate with PLCs and other computers, for tasks such as entering and monitoring temperatures or pressures for further automated control or emergency response. Service personnel who monitor and control these interfaces are often referred to as stationary engineers in boiler houses or central utilities departments. In most industrial process and manufacturing environments, these roles are called operators or variations on this.
Weather forecasting is the application of science and technology to predict the state of the atmosphere for a future time and a given location. Human beings have attempted to predict the weather informally for millennia, and formally since at least the nineteenth century. Weather forecasts are made by collecting quantitative data about the current state of the atmosphere and using scientific understanding of atmospheric processes to project how the atmosphere will evolve. Once an all-human endeavor based mainly upon changes in barometric pressure, current weather conditions, and sky condition, forecasting now relies on computer-based models to determine future conditions.
Human input is still required to pick the best possible forecast model to base the forecast upon, which involves pattern recognition skills, knowledge of teleconnections, knowledge of model performance, and knowledge of model biases. The chaotic nature of the atmosphere, the massive computational power required to solve the equations that describe the atmosphere, the error involved in measuring the initial conditions, and an incomplete understanding of atmospheric processes mean that forecasts become less accurate as the difference between the current time and the time for which the forecast is being made (the range of the forecast) increases. The use of ensembles and model consensus helps narrow the error and pick the most likely outcome.
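The ensemble idea can be sketched numerically: several model runs started from slightly perturbed initial conditions diverge, but their mean serves as a consensus forecast while their spread indicates how confident that consensus is. The numbers below are made up for illustration:

```python
# Sketch of an ensemble consensus: the mean of several perturbed model
# runs is taken as the most likely outcome; the spread measures confidence.
from statistics import mean, pstdev

ensemble_runs = [21.4, 22.1, 20.8, 23.0, 21.7]   # forecast temperatures, deg C
consensus = mean(ensemble_runs)                   # ensemble-mean forecast
spread = pstdev(ensemble_runs)                    # larger spread = less confidence
```

A forecaster would quote the consensus value but widen the stated uncertainty, or fall back on other guidance, when the spread between members grows large.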
There are a variety of end uses for weather forecasts. Weather warnings are important forecasts because they are used to protect life and property. Forecasts based on temperature and precipitation are important to agriculture, and therefore to traders within commodity markets. Temperature forecasts are used by utility companies to estimate demand over the coming days. On an everyday basis, people use weather forecasts to determine what to wear on a given day. Since outdoor activities are severely curtailed by heavy rain, snow and wind chill, forecasts can be used to plan activities around these events, and to plan ahead and survive them.
NIC, using state-of-the-art technology and keeping in view the wide geographic spread of the country, ranging from islands in the Indian Ocean to the highest Himalayan ranges, has built NICNET. The design of NICNET, which is among the largest VSAT networks of its kind in the world, ensures extremely cost-effective and reliable implementation. In Mahe, NICNET has been operational since 1994 and has become an important link between the Office of the Regional Administrator and a large number of Government Departments, providing various services. NICNET services include file transfer, electronic mail, data broadcast and EDI. The installation of Direct PC and FTDMA technology started the internet facility in NIC, Mahe, which helps the general public to access important information quickly, especially examination results and competitive exam results.
In this era of globalization and hyper-competition, the concept of teaching has undergone a sea change. Learning and dissemination of information are becoming more important. Internet-based education and e-learning are the trends of the day.
In what is described as the first effort of its kind in the country, the then Department of Electronics initiated a project, ERNET, with funding from UNDP. The objective was to create expertise in R&D and education in the area of networking and the Internet in the country. ERNET has been dedicated to this objective for the last 15 years.
Today ERNET is the largest nationwide terrestrial and satellite network, with points of presence located at the premier educational and research institutions in major cities of the country. The focus of ERNET is not limited to just providing connectivity, but extends to meeting the entire needs of the educational and research institutions by hosting and providing relevant information to their users. Research and Development and Training are integral parts of ERNET activities.