INTRODUCTION TO IT - ANSWER


5 marks.

1. Information technology

Information technology (IT) is the branch of engineering that deals with the use of computers to store, retrieve and transmit information.[1] Its main fields are the acquisition, processing, storage and dissemination of vocal, pictorial, textual and numerical information by a microelectronics-based combination of computing and telecommunications.[2] The term in its modern sense first appeared in a 1958 article published in the Harvard Business Review, in which authors Leavitt and Whisler commented that "the new technology does not yet have a single established name. We shall call it information technology (IT)."[3]

Some of the modern and emerging fields of information technology are next-generation web technologies, bioinformatics, cloud computing, global information systems, large-scale knowledge bases, etc. Advancements are mainly driven by the field of computer science.

Information

Main article: Information

The English word was apparently derived from the Latin stem (information-) of the nominative (informatio): this noun is in its turn derived from the verb "informare" (to inform) in the sense of "to give form to the mind", "to discipline", "to instruct", "to teach".

Technology

Main article: Technology

(Figure: Information and communication technology spending in 2005)

IT is the area of managing technology and spans a wide variety of areas including computer software, information systems, computer hardware and programming languages, but is not limited to these; it also covers things such as processes and data constructs. In short, anything that renders data, information or perceived knowledge in any visual format whatsoever, via any multimedia distribution mechanism, is considered part of the IT domain. IT provides businesses with four sets of core services to help execute the business strategy: business process automation, providing information, connecting with customers, and productivity tools.

IT professionals perform a variety of functions that range from installing applications to designing complex computer networks and information databases. A few of the duties that IT professionals perform may include data management, networking, engineering computer hardware, server management, database and software design, as well as management and administration of entire systems.

In the recent past, the Accreditation Board for Engineering and Technology and the Association for Computing Machinery have collaborated to form accreditation and curriculum standards[4] for degrees in Information Technology as a distinct field of study as compared[5] to Computer Science and Information Systems today. SIGITE (Special Interest Group for IT Education)[6] is the ACM working group for defining these standards. Worldwide IT services revenue totaled $763 billion in 2009.[7]


Technological capacity and growth

Hilbert and Lopez[8] identify the exponential pace of technological change (a kind of Moore's law): machines' application-specific capacity to compute information per capita roughly doubled every 14 months between 1986 and 2007; the per capita capacity of the world's general-purpose computers doubled every 18 months during the same two decades; the global telecommunication capacity per capita doubled every 34 months; the world's storage capacity per capita required roughly 40 months to double (every 3 years); and per capita broadcast information has doubled roughly every 12.3 years.
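As a rough illustration of that arithmetic (a minimal Python sketch, not from the source), steady doubling every d months multiplies capacity by a factor of 2^(months/d):

    # Total growth factor after `years` of doubling every `doubling_months`.
    def growth_factor(years, doubling_months):
        return 2 ** (years * 12 / doubling_months)

    for label, months in [
        ("application-specific computation", 14),
        ("general-purpose computation", 18),
        ("telecommunication", 34),
        ("storage", 40),
        ("broadcast information", 12.3 * 12),
    ]:
        print(f"{label}: x{growth_factor(2007 - 1986, months):,.0f} over 1986-2007")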
2. Information and communications technology

(Figure: Spending on information and communications technology in 2005)

Information and Communications Technology or Information and Communication Technology,[1] usually abbreviated as ICT, is often used as an extended synonym for information technology (IT), but is usually a more general term that stresses the role of unified communications[2] and the integration of telecommunications (telephone lines and wireless signals), computers, middleware as well as necessary software, storage, and audio-visual systems, which enable users to create, access, store, transmit, and manipulate information. In other words, ICT consists of IT as well as telecommunication, broadcast media, all types of audio and video processing and transmission, and network-based control and monitoring functions.[3]

The expression was first used in 1997[4] in a report by Dennis Stevenson to the UK government[5] and promoted by the new National Curriculum documents for the UK in 2000.

The term ICT is now also used to refer to the merging (convergence) of audio-visual and telephone networks with computer networks through a single cabling or link system. There are large economic incentives (huge cost savings due to elimination of the telephone network) to merge the audio-visual, building management and telephone network with the computer network system using a single unified system of cabling, signal distribution and management. This in turn has spurred the growth of organizations with the term ICT in their names to indicate their specialization in the process of merging the different network systems.

Some of the dangers of ICTs include cyber-bullying, phishing as well as masquerading.

Trends and new concepts in the ICT roadway

ICT is often used in the context of an "ICT roadmap" to indicate the path that an organization will take with their ICT.[6] It is also used as an overarching term in many schools, universities and colleges, stretching from Information Systems/Technology at the organisational end through to Software Engineering and Computer Systems Engineering at the other.[7]

Standards

Standards are very important for ICT, since they define the common language that enables technologies to understand each other. This is especially relevant because the key idea behind ICT is that information storage devices can communicate in a media-frictionless manner with communication networks and computing systems. Open standards play a special role, as do standards organizations such as the Telecommunications Industry Association in the United States and ETSI in Europe.

Campaigns and Projects

Recently, many companies have come up with campaigns to promote ICT and how it can be utilised to protect nature. Wipro was a leading company that organized Earthian, wherein students came up with ideas to use technology to benefit society.

UNESCO's Information for All Program (IFAP) provides a platform for discussion of and action on the ethical, legal and societal consequences of ICT developments.

3. Telecommunications network

A telecommunications network is a collection of terminals, links and nodes which connect to enable telecommunication between users of the terminals. Networks may use circuit switching or message switching. Each terminal in the network must have a unique address so messages or connections can be routed to the correct recipients. The collection of addresses in the network is called the address space.

The links connect the nodes together and are themselves built upon an underlying transmission network which physically pushes the message across the link.

Examples of telecommunications networks are:

- computer networks
- the Internet
- the telephone network
- the global Telex network
- the aeronautical ACARS network

Messages and protocols

(Figure: Example of how nodes may be interconnected with links to form a telecommunications network)

Messages are generated by a sending terminal, then pass through the network of links and nodes until they arrive at the destination terminal. It is the job of the intermediate nodes to handle the messages and route them down the correct link toward their final destination.

These messages consist of control (or signalling) and bearer parts, which can be sent together or separately. The bearer part is the actual content that the user wishes to transmit (e.g. some encoded speech, or an email) whereas the control part instructs the nodes where and possibly how the message should be routed through the network. A large number of protocols have been developed over the years to specify how each different type of telecommunication network should handle the control and bearer messages to achieve this efficiently.
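As a toy illustration of the node/link/address idea above (a hypothetical Python sketch, not a real routing protocol), the network can be modelled as a graph and a message's path chosen by fewest hops:

    from collections import deque

    # Hypothetical topology (not from the source): node addresses and links.
    links = {
        "A": ["B", "C"],
        "B": ["A", "D"],
        "C": ["A", "D"],
        "D": ["B", "C", "E"],
        "E": ["D"],
    }

    def route(source, destination):
        """Find a path of nodes from source to destination by hop count (BFS)."""
        frontier = deque([[source]])
        visited = {source}
        while frontier:
            path = frontier.popleft()
            if path[-1] == destination:
                return path
            for neighbour in links[path[-1]]:
                if neighbour not in visited:
                    visited.add(neighbour)
                    frontier.append(path + [neighbour])
        raise ValueError("no route to destination")

    print(route("A", "E"))  # ['A', 'B', 'D', 'E']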

Components

All telecommunication networks are made up of five basic components that are present in each network environment, regardless of type or use. These basic components include terminals, telecommunications processors, telecommunications channels, computers, and telecommunications control software.



- Terminals are the starting and stopping points in any telecommunication network environment. Any input or output device that is used to transmit or receive data can be classified as a terminal component.[1]

- Telecommunications processors support data transmission and reception between terminals and computers by providing a variety of control and support functions (i.e. they convert data from digital to analog and back).[1]

- Telecommunications channels are the way by which data is transmitted and received. Telecommunication channels are created through a variety of media, of which the most popular include copper wires and coaxial cables (structured cabling). Fiber-optic cables are increasingly used to bring faster and more robust connections to businesses and homes.[1]

- In a telecommunication environment, computers are connected through media to perform their communication assignments.[1]

- Telecommunications control software is present on all networked computers and is responsible for controlling network activities and functionality.[1]

Early networks were built without computers, but late in the 20th century their switching centers were computerized or the networks were replaced with computer networks.

Network structure

In general, every telecommunications network conceptually consists of three parts, or planes (so called because they can be thought of as being, and often are, separate overlay networks):



- The control plane carries control information (also known as signalling).

- The data plane or user plane or bearer plane carries the network's users' traffic.

- The management plane carries the operations and administration traffic required for network management.

20 marks.

1. Software development process


Overview of activities, methodologies, supporting disciplines and tools:

Activities and steps: Requirements, Specification, Architecture, Design, Implementation, Testing, Debugging, Deployment, Maintenance

Methodologies: Waterfall, Prototype model, Incremental, Iterative, V-Model, Spiral, Scrum, Cleanroom, RAD, DSDM, RUP, XP, Agile, Lean, Dual Vee Model, TDD

Supporting disciplines: Configuration management, Documentation, Quality assurance (SQA), Project management, User experience design

Tools: Compiler, Debugger, Profiler, GUI designer, IDE, Build automation

A software development process, also known as a software development life cycle (SDLC), is a structure imposed on the development of a software product. Similar terms include software life cycle and software process. It is often considered a subset of the systems development life cycle. There are several models for such processes, each describing approaches to a variety of tasks or activities that take place during the process. Some people consider a life-cycle model a more general term and a software development process a more specific term. For example, there are many specific software development processes that 'fit' the spiral life-cycle model. ISO/IEC 12207 is an international standard for software life-cycle processes. It aims to be the standard that defines all the tasks required for developing and maintaining software.


Overview

A large and growing number of software development organizations implement process methodologies. Many of them are in the defense industry, which in the U.S. requires a rating based on 'process models' to obtain contracts.

The international standard for describing the method of selecting, implementing and monitoring the life cycle for software is ISO/IEC 12207.

A decades-long goal has been to find repeatable, predictable processes that improve productivity and quality. Some try to systematize or formalize the seemingly unruly task of writing software. Others apply project management techniques to writing software. Without project management, software projects can easily be delivered late or over budget. With large numbers of software projects not meeting their expectations in terms of functionality, cost, or delivery schedule, effective project management appears to be lacking.

Organizations may create a Software Engineering Process Group (SEPG), which is the focal point for process improvement. Composed of line practitioners who have varied skills, the group is at the center of the collaborative effort of everyone in the organization who is involved with software engineering process improvement.

Software development activities

(Figure: The activities of the software development process represented in the waterfall model. There are several other models to represent this process.)

Planning

An important task in creating a software program is extracting the requirements, or requirements analysis.[1] Customers typically have an abstract idea of what they want as an end result, but not what software should do. Skilled and experienced software engineers recognize incomplete, ambiguous, or even contradictory requirements at this point. Frequently demonstrating live code may help reduce the risk that the requirements are incorrect.

Once the general requirements are gathered from the client, an analysis of the scope of the development should be determined and clearly stated. This is often called a scope document. Certain functionality may be out of scope of the project as a function of cost or as a result of unclear requirements at the start of development. If the development is done externally, this document can be considered a legal document so that if there are ever disputes, any ambiguity of what was promised to the client can be clarified.

Implementation, testing and documenting

Implementation is the part of the process where software engineers actually program the code for the project.

Software testing is an integral and important phase of the software development process. This part of the process ensures that defects are recognized as soon as possible.

Documenting the internal design of software for the purpose of future maintenance and enhancement is done throughout development. This may also include the writing of an API, be it external or internal. The software engineering process chosen by the developing team will determine how much internal documentation (if any) is necessary. Plan-driven models (e.g., Waterfall) generally produce more documentation than Agile models.

Deployment and maintenance

Deployment starts after the code is appropriately tested, approved for release, and sold or otherwise distributed into a production environment. This may involve installation, customization (such as by setting parameters to the customer's values), testing, and possibly an extended period of evaluation.

Software training and support is important, as software is only effective if it is used correctly.

Maintaining and enhancing software to cope with newly discovered faults or requirements can take substantial time and effort, as missed requirements may force redesign of the software.

Software development models

Several models exist to streamline the development process. Each one has its pros and cons, and it's up to the development team to adopt the most appropriate one for the project. Sometimes a combination of the models may be more suitable.

Waterfall model

Main article: Waterfall model

The waterfall model shows a process where developers are to follow these phases in order:

1. Requirements specification (Requirements analysis)
2. Software design
3. Implementation and Integration
4. Testing (or Validation)
5. Deployment (or Installation)
6. Maintenance

In a strict Waterfall model, after each phase is finished, the process proceeds to the next one. Reviews may occur before moving to the next phase, which allows for the possibility of changes (which may involve a formal change control process). Reviews may also be employed to ensure that the phase is indeed complete; the phase completion criteria are often referred to as a "gate" that the project must pass through to move to the next phase. Waterfall discourages revisiting and revising any prior phase once it's complete. This "inflexibility" in a pure Waterfall model has been a source of criticism by supporters of other more "flexible" models.
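As an illustrative sketch only (assumed names, not from the source), the strict phase ordering and gate reviews can be modelled like this in Python:

    # The six phases in order; gate_passed models the completion review.
    PHASES = [
        "Requirements specification",
        "Software design",
        "Implementation and Integration",
        "Testing",
        "Deployment",
        "Maintenance",
    ]

    def run_waterfall(gate_passed):
        for phase in PHASES:
            print("Entering phase:", phase)
            if not gate_passed(phase):
                # A strict Waterfall never revisits earlier phases;
                # a failed gate means reworking the current phase.
                raise RuntimeError("Gate review failed after: " + phase)

    run_waterfall(lambda phase: True)  # happy path: every gate review passes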

Spiral model

Main article: Spiral model

The key characteristic of a Spiral model is risk management at regular stages in the development cycle. In 1988, Barry Boehm published a formal software system development "spiral model," which combines some key aspects of the waterfall model and rapid prototyping methodologies, but provided emphasis in a key area many felt had been neglected by other methodologies: deliberate iterative risk analysis, particularly suited to large-scale complex systems.

The Spiral is visualized as a process passing through some number of iterations, with the four-quadrant diagram representative of the following activities:

1. Formulate plans to identify software targets, select options to implement the program, and clarify the project development restrictions;
2. Risk analysis: an analytical assessment of selected programs, to consider how to identify and eliminate risk;
3. Implementation of the project: the implementation of software development and verification.

The risk-driven spiral model, emphasizing the conditions of options and constraints in order to support software reuse, can help make software quality a special goal of integration into product development. However, the spiral model has some restrictive conditions, as follows:

1. The spiral model emphasizes risk analysis, and thus requires customers to accept this analysis and act on it. This requires both trust in the developer as well as the willingness to spend more to fix the issues, which is the reason why this model is often used for large-scale internal software development.
2. If the implementation of risk analysis will greatly affect the profits of the project, the spiral model should not be used.
3. Software developers have to actively look for possible risks, and analyze them accurately, for the spiral model to work.

The first stage is to formulate a plan to achieve the objectives with these constraints, and then strive to find and remove all potential risks through careful analysis and, if necessary, by constructing a prototype. If some risks cannot be ruled out, the customer has to decide whether to terminate the project or to ignore the risks and continue anyway. Finally, the results are evaluated and the design of the next phase begins.

Iterative and incremental development

Main article: Iterative and incremental development

Iterative development[2] prescribes the construction of initially small but ever-larger portions of a software project to help all those involved to uncover important issues early, before problems or faulty assumptions can lead to disaster.

Agile development

Main article: Agile software development

Agile software development uses iterative development as a basis but advocates a lighter and more people-centric viewpoint than traditional approaches. Agile processes use feedback, rather than planning, as their primary control mechanism. The feedback is driven by regular tests and releases of the evolving software.

There are many variations of agile processes:

- In Extreme Programming (XP), the phases are carried out in extremely small (or "continuous") steps compared to the older, "batch" processes. The (intentionally incomplete) first pass through the steps might take a day or a week, rather than the months or years of each complete step in the Waterfall model. First, one writes automated tests, to provide concrete goals for development. Next is coding (by a pair of programmers), which is complete when all the tests pass, and the programmers can't think of any more tests that are needed. Design and architecture emerge out of refactoring, and come after coding. The same people who do the coding do design. (Only the last feature, merging design and code, is common to all the other agile processes.) The incomplete but functional system is deployed or demonstrated for (some subset of) the users (at least one of which is on the development team). At this point, the practitioners start again on writing tests for the next most important part of the system.[3]

- Scrum

- Dynamic systems development method

Code and fix

"Code and fix" development is not so much a deliberate strategy as an artifact of naiveté and
schedule pressure on
software developers
.
[4]

Without much of a design in the way,
programmers

immediately begin producing
code
. At some point,
testing

begins (often late in the development
cycle), and the inevitable
bugs

must then be fixed before the product can be shipped. See also:
Continuous integration

and
Cowboy coding
.

Process improvement models

Capability Maturity Model Integration

The Capability Maturity Model Integration (CMMI) is one of the leading models and is based on best practice. Independent assessments grade organizations on how well they follow their defined processes, not on the quality of those processes or the software produced. CMMI has replaced CMM.

ISO 9000

ISO 9000 describes standards for a formally organized process to manufacture a product and the methods of managing and monitoring progress. Although the standard was originally created for the manufacturing sector, ISO 9000 standards have been applied to software development as well. Like CMMI, certification with ISO 9000 does not guarantee the quality of the end result, only that formalized business processes have been followed.

ISO/IEC 15504

ISO/IEC 15504 Information technology - Process assessment, also known as Software Process Improvement Capability Determination (SPICE), is a "framework for the assessment of software processes". This standard is aimed at setting out a clear model for process comparison. SPICE is used much like CMMI. It models processes to manage, control, guide and monitor software development. This model is then used to measure what a development organization or project team actually does during software development. This information is analyzed to identify weaknesses and drive improvement. It also identifies strengths that can be continued or integrated into common practice for that organization or team.

Formal methods

Formal methods are mathematical approaches to solving software (and hardware) problems at the requirements, specification, and design levels. Formal methods are most likely to be applied to safety-critical or security-critical software and systems, such as avionics software. Software safety assurance standards, such as DO-178B, DO-178C, and Common Criteria, demand formal methods at the highest levels of categorization.

For sequential software, examples of formal methods include the B-Method, the specification languages used in automated theorem proving, RAISE, VDM, and the Z notation.

Formalization of software development is creeping in, in other places, with the application of Object Constraint Language (and specializations such as Java Modeling Language) and especially with Model-driven architecture allowing execution of designs, if not specifications.

For concurrent software and systems, Petri nets, process algebra, and finite state machines (which are based on automata theory; see also virtual finite state machine or event driven finite state machine) allow executable software specification and can be used to build up and validate application behavior.

Another emerging trend in software development is to write a specification in some form of logic (usually a variation of first-order logic, FOL), and then to directly execute the logic as though it were a program. The OWL language, based on Description Logic, is an example. There is also work on mapping some version of English (or another natural language) automatically to and from logic, and executing the logic directly. Examples are Attempto Controlled English, and Internet Business Logic, which do not seek to control the vocabulary or syntax. A feature of systems that support bidirectional English-logic mapping and direct execution of the logic is that they can be made to explain their results, in English, at the business or scientific level.

2. Requirements analysis

Requirements analysis, in systems engineering and software engineering, encompasses those tasks that go into determining the needs or conditions to meet for a new or altered product, taking account of the possibly conflicting requirements of the various stakeholders, such as beneficiaries or users. It is an early stage in the more general activity of requirements engineering, which encompasses all activities concerned with eliciting, analyzing, documenting, validating and managing software or system requirements.[2]

Requirements analysis is critical to the success of a systems or software project.[3] The requirements should be documented, actionable, measurable, testable, traceable, related to identified business needs or opportunities, and defined to a level of detail sufficient for system design.

Requirements can be categorized in several ways. The following are common categorizations of requirements that relate to technical management:[1]

Customer Requirements

Statements of fact and assumptions that define the expectations of the system in terms of mission objectives, environment, constraints, and measures of effectiveness and suitability (MOE/MOS). The customers are those that perform the eight primary functions of systems engineering, with special emphasis on the operator as the key customer. Operational requirements will define the basic need and, at a minimum, answer the questions posed in the following listing:[1]



- Operational distribution or deployment: Where will the system be used?

- Mission profile or scenario: How will the system accomplish its mission objective?

- Performance and related parameters: What are the critical system parameters to accomplish the mission?

- Utilization environments: How are the various system components to be used?

- Effectiveness requirements: How effective or efficient must the system be in performing its mission?

- Operational life cycle: How long will the system be in use by the user?

- Environment: In what environments will the system be expected to operate in an effective manner?

Architectural Requirements

Architectural requirements explain what has to be done by identifying the necessary system architecture of a system.

Structural Requirements

Structural requirements explain what has to be done by identifying the necessary structure of a system.

Behavioral Requirements

Behavioral requirements explain what has to be done by identifying the necessary behavior of a system.

Functional Requirements

Functional requirements explain what has to be done by identifying the necessary task, action or activity that must be accomplished. Functional requirements analysis will be used as the top-level functions for functional analysis.[1]

Non-functional Requirements

Non-functional requirements are requirements that specify criteria that can be used to judge the operation of a system, rather than specific behaviors.

Performance Requirements

The extent to which a mission or function must be executed; generally measured in terms of quantity, quality, coverage, timeliness or readiness. During requirements analysis, performance (how well does it have to be done) requirements will be interactively developed across all identified functions based on system life cycle factors, and characterized in terms of the degree of certainty in their estimate, the degree of criticality to system success, and their relationship to other requirements.[1]

Design Requirements

The "build to," "code to," and "buy to" requirements for products and "how to execute" requirements for processes expressed in technical data packages and technical manuals.[1]

Derived Requirements

Requirements that are implied or transformed from higher-level requirements. For example, a requirement for long range or high speed may result in a design requirement for low weight.[1]

Allocated Requirements

A requirement that is established by dividing or otherwise allocating a high-level requirement into multiple lower-level requirements. Example: a 100-pound item that consists of two subsystems might result in weight requirements of 70 pounds and 30 pounds for the two lower-level items.[1]

Well-known requirements categorization models include FURPS and FURPS+, developed at Hewlett-Packard.

Requirements analysis issues

Stakeholder issues

Steve McConnell, in his book Rapid Development, details a number of ways users can inhibit requirements gathering:



- Users do not understand what they want or users don't have a clear idea of their requirements
- Users will not commit to a set of written requirements
- Users insist on new requirements after the cost and schedule have been fixed
- Communication with users is slow
- Users often do not participate in reviews or are incapable of doing so
- Users are technically unsophisticated
- Users do not understand the development process
- Users do not know about present technology

This may lead to the situation where user requirements keep changing even when system or product development has been started.

Engineer/developer issues

Possible problems caused by engineers and developers during requirements analysis are:

- Technical personnel and end-users may have different vocabularies. Consequently, they may wrongly believe they are in perfect agreement until the finished product is supplied.

- Engineers and developers may try to make the requirements fit an existing system or model, rather than develop a system specific to the needs of the client.

- Analysis may often be carried out by engineers or programmers, rather than personnel with the people skills and the domain knowledge to understand a client's needs properly.

Attempted solutions

One attempted solution to communications problems has been to employ specialists in business or system analysis.

Techniques introduced in the 1990s like prototyping, Unified Modeling Language (UML), use cases, and Agile software development are also intended as solutions to problems encountered with previous methods.

Also, a new class of application simulation or application definition tools has entered the market. These tools are designed to bridge the communication gap between business users and the IT organization, and also to allow applications to be 'test marketed' before any code is produced. The best of these tools offer:

- electronic whiteboards to sketch application flows and test alternatives
- ability to capture business logic and data needs
- ability to generate high-fidelity prototypes that closely imitate the final application
- interactivity
- capability to add contextual requirements and other comments
- ability for remote and distributed users to run and interact with the simulation

3. Analog signal processing

Analog signal processing is any signal processing conducted on analog signals by analog means. "Analog" indicates something that is mathematically represented as a set of continuous values. This differs from "digital", which uses a series of discrete quantities to represent a signal. Analog values are typically represented as a voltage, electric current, or electric charge around components in electronic devices. An error or noise affecting such physical quantities will result in a corresponding error in the signals represented by such physical quantities.

Examples of analog signal processing include crossover filters in loudspeakers, "bass", "treble" and "volume" controls on stereos, and "tint" controls on TVs. Common analog processing elements include capacitors, resistors, inductors and transistors.


Tools used in analog signal processing

A system's behavior can be mathematically modeled and is represented in the time domain as h(t) and in the frequency domain as H(s), where s is a complex number of the form s = a + ib, or s = a + jb in electrical engineering terms (electrical engineers use j because current is represented by the variable i). Input signals are usually called x(t) or X(s) and output signals are usually called y(t) or Y(s).

Convolution

Convolution is the basic concept in signal processing that states an input signal can be combined with the system's function to find the output signal. It is the integral of the product of two waveforms after one has been reversed and shifted; the symbol for convolution is *.
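$$ (f * g)(t) = \int_{a}^{b} f(\tau)\, g(t - \tau)\, d\tau $$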


That is the convolution integral, and it is used to find the convolution of a signal and a system; typically a = -∞ and b = +∞.

Consider two waveforms f and g. By calculating the convolution, we determine how much a reversed function g must be shifted along the x-axis to become identical to function f. The convolution function essentially reverses and slides function g along the axis, and calculates the integral of their (f and the reversed and shifted g) product for each possible amount of sliding. When the functions match, the value of (f*g) is maximized. This occurs because when positive areas (peaks) or negative areas (troughs) are multiplied, they contribute to the integral.

Fourier transform

The Fourier transform is a function that transforms a signal or system in the time domain into the frequency domain, but it only works for certain functions. The constraint on which systems or signals can be transformed by the Fourier transform is that:
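$$ \int_{-\infty}^{\infty} |f(t)|\, dt < \infty $$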


This is the Fourier transform integral:
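$$ F(j\omega) = \int_{-\infty}^{\infty} f(t)\, e^{-j\omega t}\, dt $$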


Usually the Fourier transform integral isn't used to determine the transform; instead, a table of transform pairs is used to find the Fourier transform of a signal or system. The inverse Fourier transform is used to go from the frequency domain to the time domain:
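$$ f(t) = \frac{1}{2\pi} \int_{-\infty}^{\infty} F(j\omega)\, e^{j\omega t}\, d\omega $$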


Each signal or system that can be transformed has a unique Fourier transform. There is only one time signal for any frequency signal, and vice versa.

Laplace transform

For more details on this topic, see Laplace transform.

The Laplace transform is a generalized Fourier transform. It allows a transform of any system or signal because it is a transform into the complex plane instead of just the jω line like the Fourier transform. The major difference is that the Laplace transform has a region of convergence for which the transform is valid. This implies that a signal in frequency may have more than one signal in time; the correct time signal for the transform is determined by the region of convergence. If the region of convergence includes the jω axis, jω can be substituted into the Laplace transform for s and it is the same as the Fourier transform. The Laplace transform is:
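$$ X(s) = \int_{-\infty}^{\infty} x(t)\, e^{-st}\, dt $$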


and the inverse Laplace transform, if all the singularities of X(s) are in the left half of the complex plane, is:
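$$ x(t) = \frac{1}{2\pi j} \int_{\sigma - j\infty}^{\sigma + j\infty} X(s)\, e^{st}\, ds $$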


Bode plots

Bode plots are plots of magnitude vs. frequency and phase vs. frequency for a system. The magnitude axis is in decibels (dB). The phase axis is in either degrees or radians. The frequency axes are on a logarithmic scale. These are useful because, for sinusoidal inputs, the output is the input multiplied by the value of the magnitude plot at that frequency and shifted by the value of the phase plot at that frequency.
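As a small illustration (a Python sketch using SciPy; the example filter is assumed, not from the source), the Bode data for a first-order low-pass filter H(s) = 1/(s + 1) can be computed and read off at a given frequency:

    import numpy as np
    from scipy import signal

    # First-order low-pass filter H(s) = 1 / (s + 1), corner at 1 rad/s.
    system = signal.TransferFunction([1], [1, 1])
    w, magnitude_db, phase_deg = signal.bode(system)  # w in rad/s

    # For a sinusoid at w0, the output is scaled by the magnitude value (dB)
    # and shifted by the phase value at w0, as described above.
    w0 = 1.0
    print(np.interp(w0, w, magnitude_db))  # about -3 dB at the corner
    print(np.interp(w0, w, phase_deg))     # about -45 degrees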

Domains

Time domain

This is the domain that most people are familiar with. A plot in the time domain shows the amplitude of the signal with respect to time.

Frequency domain

A plot in the frequency domain shows either the phase shift or the magnitude of a signal at each frequency at which it exists. These can be found by taking the Fourier transform of a time signal and are plotted similarly to a Bode plot.

Signals

While any signal can be used in analog signal processing, there are many types of signals that are used very frequently.

Sinusoids

Sinusoids are the building block of analog signal processing. All real-world signals can be represented as an infinite sum of sinusoidal functions via a Fourier series. A sinusoidal function can be represented in terms of an exponential by the application of Euler's formula.
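Euler's formula (a standard identity, stated here for reference) is:

$$ e^{j\omega t} = \cos(\omega t) + j\sin(\omega t) $$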

Impulse

An impulse (Dirac delta function) is defined as a signal that has an infinite magnitude and an infinitesimally narrow width with an area under it of one, centered at zero. An impulse can be represented as an infinite sum of sinusoids that includes all possible frequencies. It is not, in reality, possible to generate such a signal, but it can be sufficiently approximated with a large-amplitude, narrow pulse, to produce the theoretical impulse response in a network to a high degree of accuracy. The symbol for an impulse is δ(t). If an impulse is used as an input to a system, the output is known as the impulse response. The impulse response defines the system because all possible frequencies are represented in the input.

Step

A unit step function, also called the Heaviside step function, is a signal that has a magnitude of zero before zero and a magnitude of one after zero. The symbol for a unit step is u(t). If a step is used as the input to a system, the output is called the step response. The step response shows how a system responds to a sudden input, similar to turning on a switch. The period before the output stabilizes is called the transient part of a signal. The step response can be multiplied with other signals to show how the system responds when an input is suddenly turned on.

The unit step function is related to the Dirac delta function by:
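$$ u(t) = \int_{-\infty}^{t} \delta(\tau)\, d\tau $$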


Systems

Linear time-invariant (LTI)

Linearity means that if you have two inputs and two corresponding outputs, taking a linear combination of those two inputs will give you the same linear combination of the outputs. An example of a linear system is a first-order low-pass or high-pass filter. Linear systems are made out of analog devices that demonstrate linear properties. These devices don't have to be entirely linear, but must have a region of operation that is linear. An operational amplifier is a non-linear device, but has a region of operation that is linear, so it can be modeled as linear within that region of operation. Time-invariance means it doesn't matter when you start a system; the same output will result. For example, if you have a system and put an input into it today, you would get the same output if you started the system tomorrow instead.

There aren't any real systems that are LTI, but many systems can be modeled as LTI for simplicity in determining what their output will be. All systems have some dependence on things like temperature, signal level or other factors that cause them to be non-linear or non-time-invariant, but most are stable enough to model as LTI. Linearity and time-invariance are important because they are the only types of systems that can be easily solved using conventional analog signal processing methods. Once a system becomes non-linear or non-time-invariant, it becomes a non-linear differential equations problem, and there are very few of those that can actually be solved. (Haykin & Van Veen 2003)
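In symbols (a standard formulation, not from the source text): if inputs x1(t) and x2(t) produce outputs y1(t) and y2(t), then for an LTI system

$$ a\,x_1(t) + b\,x_2(t) \;\rightarrow\; a\,y_1(t) + b\,y_2(t) \qquad \text{(linearity)} $$

$$ x(t - t_0) \;\rightarrow\; y(t - t_0) \qquad \text{(time-invariance)} $$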

Common systems

Some common systems used in everyday life are filters, AM/FM radio, electric guitars and musical instrument amplifiers. Filters are used in almost everything that has electronic circuitry. Radio and television are good examples of everyday uses of filters. When a channel is changed on an analog television set or radio, an analog filter is used to pick out the carrier frequency on the input signal. Once it's isolated, the television or radio information being broadcast is used to form the picture and/or sound. Another common analog system is an electric guitar and its amplifier. The guitar uses a magnetic core with a coil wrapped around it (an inductor) to turn the vibration of the strings into a small electric current. The current is then filtered, amplified and sent to a speaker in the amplifier. Most amplifiers are analog because they are easier and cheaper to make than digital amplifiers. There are also many analog guitar effects pedals, although a large number of pedals are now digital (they turn the input current into a digitized value, perform an operation on it, then convert it back into an analog signal).

4. Digital signal processing

Digital signal processing (DSP) is the mathematical manipulation of an information signal to modify or improve it in some way. It is characterized by the representation of discrete-time, discrete-frequency, or other discrete-domain signals by a sequence of numbers or symbols and the processing of these signals. Digital signal processing and analog signal processing are subfields of signal processing. DSP includes subfields like: audio and speech signal processing, sonar and radar signal processing, sensor array processing, spectral estimation, statistical signal processing, digital image processing, signal processing for communications, control of systems, biomedical signal processing, seismic data processing, etc.

The goal of DSP is usually to measure, filter and/or compress continuous real-world analog signals. The first step is usually to convert the signal from an analog to a digital form, by sampling and then digitizing it using an analog-to-digital converter (ADC), which turns the analog signal into a stream of numbers. However, often the required output signal is another analog output signal, which requires a digital-to-analog converter (DAC). Even though this process is more complex than analog processing and has a discrete value range, the application of computational power to digital signal processing allows for many advantages over analog processing in many applications, such as error detection and correction in transmission as well as data compression.[1]

DSP algorithms have long been run on standard computers, on specialized processors called digital signal processors, and on purpose-built hardware such as application-specific integrated circuits (ASICs). Today there are additional technologies used for digital signal processing, including more powerful general-purpose microprocessors, field-programmable gate arrays (FPGAs), digital signal controllers (mostly for industrial applications such as motor control), and stream processors, among others.[2]

Signal sampling

Main article: Sampling (signal processing)

With the increasing use of computers, the usage of and need for digital signal processing has increased. To use an analog signal on a computer, it must be digitized with an analog-to-digital converter. Sampling is usually carried out in two stages, discretization and quantization. In the discretization stage, the space of signals is partitioned into equivalence classes and quantization is carried out by replacing the signal with a representative signal of the corresponding equivalence class. In the quantization stage the representative signal values are approximated by values from a finite set.
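A minimal Python sketch of these two stages (the sampling rate, test tone and bit depth are assumed for illustration, not from the source):

    import numpy as np

    fs = 8000      # sampling frequency in Hz (assumed for illustration)
    bits = 8       # quantizer resolution in bits (assumed)

    t = np.arange(0, 0.01, 1 / fs)       # discretization: sample instants
    x = np.sin(2 * np.pi * 440 * t)      # a 440 Hz test tone in [-1, 1]

    levels = 2 ** (bits - 1) - 1         # finite set of representative values
    x_quantized = np.round(x * levels) / levels   # quantization stage

    print(np.max(np.abs(x - x_quantized)))  # worst-case quantization error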

The Nyquist-Shannon sampling theorem states that a signal can be exactly reconstructed from its samples if the sampling frequency is greater than twice the highest frequency of the signal, but this requires an infinite number of samples. In practice, the sampling frequency is often significantly more than twice that required by the signal's limited bandwidth.
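Stated as a formula (standard, added here for reference), exact reconstruction requires a sampling rate

$$ f_s > 2 f_{\max}, $$

where f_max is the highest frequency present in the signal.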

DSP domains

In DSP, engineers usually study digital signals in one of the following domains: time domain (one-dimensional signals), spatial domain (multidimensional signals), frequency domain, and wavelet domains. They choose the domain in which to process a signal by making an informed guess (or by trying different possibilities) as to which domain best represents the essential characteristics of the signal. A sequence of samples from a measuring device produces a time or spatial domain representation, whereas a discrete Fourier transform produces the frequency domain information, that is, the frequency spectrum. Autocorrelation is defined as the cross-correlation of the signal with itself over varying intervals of time or space.

Time and space domains

Main article: Time domain

The most common processing approach in the time or space domain is enhancement of the input signal through a method called filtering. Digital filtering generally consists of some linear transformation of a number of surrounding samples around the current sample of the input or output signal. There are various ways to characterize filters; for example:



A "linear" filter is a
linear transformation

of input samples; other filters are "non
-
linear". Linear
filters satisfy the superposition condition, i.e. if an input is a weighted linear combination of
different signals, the output is an equally weighted linear combination of the corresponding
output s
ignals.



A "causal" filter uses only previous samples of the input or output signals; while a "non
-
causal"
filter uses future input samples. A non
-
causal filter can usually be changed into a causal filter by
adding a delay to it.



A "time
-
invariant" filter h
as constant properties over time; other filters such as
adaptive filters

change in time.



A "stable" filter produces an output that converges to a constant value with time, or

remains
bounded within a finite interval. An "unstable" filter can produce an output that grows without
bounds, with bounded or even zero input.



A "finite impulse response" (
FIR
) filter uses only the input signals, while an "infinite impulse
response" filter (
IIR
) uses both the input signal and previous sample
s of the output signal. FIR
filters are always stable, while IIR filters may be unstable.

Filters can be represented by block diagrams, which can then be used to derive a sample processing algorithm to implement the filter with hardware instructions. A filter may also be described as a difference equation, a collection of zeroes and poles or, if it is an FIR filter, an impulse response or step response.

The output of a linear digital filter to any given input may be calculated by convolving the input signal with the impulse response.
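For example (a minimal NumPy sketch with assumed values, not from the source), a 5-tap moving-average FIR filter applied by convolving the input with its impulse response:

    import numpy as np

    # 5-tap moving-average FIR filter: its impulse response is five equal taps.
    impulse_response = np.ones(5) / 5

    rng = np.random.default_rng(0)
    x = np.sin(2 * np.pi * 0.02 * np.arange(200))   # slow sine
    x_noisy = x + 0.3 * rng.standard_normal(200)    # sine plus noise

    # Output = input convolved with the impulse response, as stated above.
    y = np.convolve(x_noisy, impulse_response, mode="same")

    print(np.std(x_noisy - x), np.std(y - x))  # residual noise drops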

Frequency domain

Main article: Frequency domain

Signals are converted from the time or space domain to the frequency domain usually through the Fourier transform. The Fourier transform converts the signal information to a magnitude and phase component of each frequency. Often the Fourier transform is converted to the power spectrum, which is the magnitude of each frequency component squared.

The most common purpose for analysis of signals in the frequency domain is analysis of signal properties. The engineer can study the spectrum to determine which frequencies are present in the input signal and which are missing.

In addition to frequency information, phase information is often needed. This can be obtained from the Fourier transform. With some applications, how the phase varies with frequency can be a significant consideration.

Filtering, particularly in non-realtime work, can also be achieved by converting to the frequency domain, applying the filter and then converting back to the time domain. This is a fast, O(n log n) operation, and can give essentially any filter shape, including excellent approximations to brickwall filters.
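A minimal NumPy sketch of this transform-filter-invert approach (the signal and cutoff are assumed for illustration):

    import numpy as np

    fs = 1000
    t = np.arange(0, 1, 1 / fs)
    x = np.sin(2 * np.pi * 5 * t) + np.sin(2 * np.pi * 200 * t)  # 5 + 200 Hz

    spectrum = np.fft.rfft(x)                    # to the frequency domain
    freqs = np.fft.rfftfreq(len(x), d=1 / fs)
    spectrum[freqs > 50] = 0                     # brickwall low-pass at 50 Hz
    y = np.fft.irfft(spectrum, n=len(x))         # back to the time domain

    print(np.allclose(y, np.sin(2 * np.pi * 5 * t)))  # only the 5 Hz tone remains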

There are some commonly used frequency domain transformations. For example, the cepstrum converts a signal to the frequency domain through a Fourier transform, takes the logarithm, then applies another Fourier transform. This emphasizes the frequency components with smaller magnitude while retaining the order of magnitudes of frequency components.
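Written out (one common formulation, following the description above and not present in the source):

$$ C(q) = \mathcal{F}\left\{ \log\left| \mathcal{F}\{x(t)\} \right| \right\} $$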

Frequency domain analysis is also called spectrum or spectral analysis.

Z-plane analysis

Main article: Z-transform

Whereas analog filters are usually analysed in terms of transfer functions in the s plane using Laplace transforms, digital filters are analysed in the z plane in terms of Z-transforms. A digital filter may be described in the z plane by its characteristic collection of zeroes and poles. The z plane provides a means for mapping digital frequency (samples/second) to real and imaginary z components, where z = re^{jω}; the unit circle r = 1 corresponds to continuous periodic signals (ω is the digital frequency). This is useful for providing a visualization of the frequency response of a digital system or signal.
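For reference (standard definition, not in the source text), the Z-transform of a sequence x[n] is

$$ X(z) = \sum_{n=-\infty}^{\infty} x[n]\, z^{-n}, $$

and evaluating it on the unit circle z = e^{jω} gives the discrete-time Fourier transform.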

Wavelet

Main article: Discrete wavelet transform

(Figure: An example of the 2D discrete wavelet transform that is used in JPEG2000. The original image is high-pass filtered, yielding the three large images, each describing local changes in brightness (details) in the original image. It is then low-pass filtered and downscaled, yielding an approximation image; this image is high-pass filtered to produce the three smaller detail images, and low-pass filtered to produce the final approximation image in the upper-left.)

In numerical analysis and functional analysis, a discrete wavelet transform (DWT) is any wavelet transform for which the wavelets are discretely sampled. As with other wavelet transforms, a key advantage it has over Fourier transforms is temporal resolution: it captures both frequency and location information (location in time).

Applications

The main applications of DSP are audio signal processing, audio compression, digital image processing, video compression, speech processing, speech recognition, digital communications, RADAR, SONAR, seismology and biomedicine. Specific examples are speech compression and transmission in digital mobile phones, room correction of sound in hi-fi and sound reinforcement applications, weather forecasting, economic forecasting, seismic data processing, analysis and control of industrial processes, medical imaging such as CAT scans and MRI, MP3 compression, computer graphics, image manipulation, hi-fi loudspeaker crossovers and equalization, and audio effects for use with electric guitar amplifiers.

Implementation

Depending on the requirements of the application, digital signal processing tasks can be implemented on general-purpose computers (e.g. supercomputers, mainframe computers, or personal computers) or with embedded processors that may or may not include specialized microprocessors called digital signal processors.

Often when the processing requirement is not real-time, processing is economically done with an existing general-purpose computer, and the signal data (either input or output) exists in data files. This is essentially no different from any other data processing, except DSP mathematical techniques (such as the FFT) are used, and the sampled data is usually assumed to be uniformly sampled in time or space. An example is processing digital photographs with software such as Photoshop.

However, when the application requirement is real-time, DSP is often implemented using specialised microprocessors such as the DSP56000, the TMS320, or the SHARC. These often process data using fixed-point arithmetic, though some more powerful versions use floating-point arithmetic. For faster applications FPGAs[3] might be used. Beginning in 2007, multicore implementations of DSPs started to emerge from companies including Freescale and Stream Processors, Inc. For faster applications with vast usage, ASICs might be designed specifically. For slow applications, a traditional slower processor such as a microcontroller may be adequate. Also, a growing number of DSP applications are now being implemented on embedded systems using powerful PCs with a multi-core processor.