
LECTURE 1:

Unit-1: Basics of Multimedia Technology:

1. Computers, communication and entertainment


In the era of information technology, we are dealing with a free flow of information with no barriers of distance. Take the case of the internet as one of the simplest examples. You can view and download information from across the globe within a reasonable time, if you have a good connection speed.

Let us spend a bit of time thinking about the forms in which we access information. The simplest and most common of these is printed text. In every web page, some text material is always present, although the volume of textual content may vary from page to page. The text materials are supported with graphics, still pictures, animations, video clippings, audio commentaries and so on. All, or at least more than one, of these media, which we can collectively call “multimedia”, are inevitably present to convey the information which the web site developers want to convey, for the benefit of the world community at large. All these media are therefore utilized to present the information in a meaningful way and in an attractive style.


The internet is not the only kind of information dissemination involving multiple media. Let us have a look at some other examples as well. In television, we have the involvement of two media, audio and video, which should be presented together in a synchronized manner. If we present the audio ahead of the video, or the video ahead of the audio, the results are far from pleasant. Loss of lip synchronization is noticeable even if the audio and the video presentations differ by just 150 milliseconds or more. If the time lead or lag is of the order of seconds, one may totally lose the purpose of the presentation. Say, in some distance learning program, the teacher is explaining something written down on a blackboard. If the audio and the video differ in time significantly, a student will not be able to follow the lecture at all.


So, television is also multimedia, and now we understand one more requirement of multimedia signals: they must be synchronized, and if it is not possible to make them absolutely synchronized, they should at least follow a stringent specification within which the lack of synchronization can be tolerated.


Television is an example where there is only a unidirectional flow of multimedia information, from the transmitter to the receiver. In standard broadcast television, there is no flow of information in the reverse direction, unless you use a different device and channel, say, by talking to the television presenter over the telephone. On the internet, of course, you have interactivity in the sense that you can navigate around the information and make selections through hyperlinks, but the bulk of the flow of information happens from the web server to the users.

In some applications, we require a free flow of multimedia signals between two or more nodes, as is the case with video conferencing, or what we should more appropriately call multimedia teleconferencing. In multimedia conferencing, the nodes, physically located in different parts of the globe, will be equipped with microphones, cameras, a computer supporting text, graphics and animations, and maybe other supporting devices if required. For example, suppose five eminent doctors across the continents are having a live medical conference to discuss a patient's condition. The doctors should not only see and talk to each other; all of them should observe the patient at the same time, and have access to the patient's medical history, live readings and graphs from the monitoring instruments, visual renderings of the data, etc. In a multimedia teleconferencing application of this nature, one must ensure that the end-to-end delays and the turnaround time are minimal. Moreover, the end-to-end delays between different nodes should not differ very significantly with respect to each other; that is, the delay jitter must be small.
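Delay jitter can be quantified directly from measured per-packet delays. Below is a minimal sketch of that computation, assuming hypothetical delay measurements in milliseconds (jitter is taken here as the mean absolute deviation from the mean delay; other definitions exist):

    # Sketch: delay jitter as the spread of per-packet end-to-end delays.
    # The delay values below are hypothetical, in milliseconds.
    delays_ms = [102.0, 98.5, 101.2, 99.8, 150.3, 100.4]

    mean_delay = sum(delays_ms) / len(delays_ms)
    # Mean absolute deviation from the mean delay; a teleconferencing
    # system would want this to stay small (a few milliseconds).
    jitter = sum(abs(d - mean_delay) for d in delays_ms) / len(delays_ms)

    print(f"mean delay: {mean_delay:.1f} ms, jitter: {jitter:.1f} ms")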








Multimedia: an introduction

What is Multimedia?

A simple definition of multimedia is: “multimedia can be any combination of text, graphics, sound, animation and video, used to effectively communicate ideas to users”.

Multimedia = Multi + media
Multi = many
Media = medium or means by which information is stored, transmitted, presented or perceived.

Other definitions of multimedia are:

Definition 1
“Multimedia is any combination of text, graphic art, sound, animation and video delivered to you by computer or other electronic means.”

Definition 2
“Multimedia is the presentation of a (usually interactive) computer application, incorporating media elements such as text, graphics, video, animation and sound on a computer.”











Types of Multimedia Presentation

Multimedia presentations can be categorized into two types: linear multimedia and interactive multimedia.

a. Linear Multimedia
o In linear multimedia, the users have very little control over the presentation. The users simply sit back and watch the presentation, which normally plays from start to end, or even loops continually to present the information.
o A movie is a common type of linear multimedia.

b. Interactive Multimedia
o In interactive multimedia, users dictate the flow of delivery. The users control the delivery of elements, deciding both the what and the when.
o Users have the ability to move around or follow different paths through the information presentation.
o The advantage of interactive multimedia is that complex domains of information can easily be presented. The disadvantage is that users might get lost in the massive “information highway”.
o Interactive multimedia is useful for: information archives (encyclopedias), education, training and entertainment.



Multimedia System Characteristics

The 4 major characteristics of a multimedia system are:

a. Multimedia systems must be computer controlled.
b. All multimedia components are integrated.
c. The interface to the final user may permit interactivity.
d. The information must be represented digitally.


I) Computer controlled

The computer is used for:

o Producing the content of the information, e.g. by using authoring tools, image editors, and sound and video editors.
o Storing the information, providing large and shared capacity for multimedia information.
o Transmitting the information through the network.
o Presenting the information to the end user, making direct use of computer peripherals such as a display device (monitor) or sound generator (speaker).


II) Integrated

All multimedia components (audio, video, text, graphics) used in the system must be somehow integrated.

o Example:
  - Every device, such as a microphone or camera, is connected to and controlled by a single computer.
  - A single type of digital storage is used for all media types.
  - Video sequences are shown on the computer screen instead of a TV monitor.

III) Interactivity

Three levels of interactivity:

Level 1: Interactivity strictly in information delivery. Users select the time at which the presentation starts, the order, the speed and the form of the presentation itself.

Level 2: Users can modify or enrich the content of the information, and this modification is recorded.

Level 3: Actual processing of the user's input, where the computer generates genuine results based on that input.


IV) Digitally represented

Digitization: the process involved in transforming an analog signal into a digital signal.
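As a concrete illustration of digitization, the sketch below samples a continuous signal at discrete time instants and quantizes each sample to a fixed number of levels. The 1 kHz tone, 8 kHz sampling rate and 8-bit depth are arbitrary example choices, not values from the text:

    import math

    # Digitization = sampling in time + quantization in amplitude.
    SAMPLE_RATE = 8000          # samples per second (assumed)
    BITS = 8                    # bits per sample (assumed)
    LEVELS = 2 ** BITS          # 256 quantization levels

    def analog(t):
        # Stand-in for the analog source: a 1 kHz tone in [-1, 1].
        return math.sin(2 * math.pi * 1000 * t)

    samples = []
    for n in range(16):                  # first 16 samples (2 ms)
        t = n / SAMPLE_RATE              # sampling instant
        level = round((analog(t) + 1) / 2 * (LEVELS - 1))   # quantize
        samples.append(level)

    print(samples)    # integers in 0..255, ready for digital storage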














LECTURE 2:

Framework for multimedia systems

For multimedia communication, we have to make judicious use of all the media at our disposal. We have audio, video, graphics and text, all these media as the sources, but first and foremost, we need a system which can acquire the separate media streams and process them together into an integrated multimedia stream.

The elements involved in a multimedia transmitter are shown in Fig. 1.1.


Devices like cameras, microphones, keyboards, mice, touch-screens, storage media etc. are required to feed inputs from the different sources. All further processing, up to the transmission, is done by the computer. The data acquisition from multiple media is followed by data compression to eliminate the inherent redundancies present in the media streams. This is followed by inter-media synchronization through the insertion of time-stamps, integration of the individual media streams, and finally the transmission of the integrated multimedia stream through a communication channel, which can be a wired or a wireless medium.

The destination end should have a corresponding interface to receive the integrated multimedia stream through the communication channel. At the receiver, a reversal of the processes involved during transmission is required. The elements involved in a multimedia receiver are shown in Fig. 1.2.

The media extractor separates the integrated media stream into individual media streams, which undergo decompression and are then presented in a synchronized manner, according to their time-stamps, on different playback units such as monitors, loudspeakers, printers/plotters, recording devices etc.
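The time-stamp and integration steps can be pictured with a small sketch: each media unit is stamped with its presentation time, the per-medium streams are merged into one integrated stream, and the receiver presents units in stamp order. Stream contents here are hypothetical placeholders:

    import heapq

    # Transmitter side: time-stamped units from each medium.
    audio = [(0.00, "audio-frame-0"), (0.02, "audio-frame-1")]
    video = [(0.00, "video-frame-0"), (0.04, "video-frame-1")]

    # Integration: merge the streams by time-stamp into one stream.
    integrated = list(heapq.merge(audio, video))

    # Receiver side: extract and present units in time-stamp order,
    # which is what keeps the media mutually synchronized.
    for timestamp, unit in integrated:
        print(f"t={timestamp:.2f}s  present {unit}")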


The subject of multimedia is studied from different perspectives in different universities and institutes. In some multimedia courses, the emphasis is on how to create multimedia productions, the authoring tools, the software associated with these, and so on. In this course, we don't cover any of the multimedia production aspects at all. Rather, this course focuses on multimedia systems and the technology associated with multimedia signal processing and communication. We have already posed the technical challenges. In the coming lessons, we shall see in detail how these challenges, such as compression, synchronization etc., can be overcome, how the multimedia standards have been designed to ensure effective multimedia communication, how to integrate the associated media and how to index and retrieve multimedia sequences. Other than the introduction, this course has been divided into the following modules:

(i) Basics of Image Compression and Coding.
(ii) Orthogonal Transforms for Image Compression.
(iii) Temporal Redundancies in Video Sequences.
(iv) Real-time Video Coding.
(v) Multimedia Standards.
(vi) Continuity and Synchronization.
(vii) Audio Coding.
(viii) Indexing, Classification and Retrieval.
(ix) Multimedia Applications.



MULTIMEDIA DEVICES:

1. CONNECTIONS:

Among the many devices (computer, monitor, disk drive, video disc player etc.) there are so many wires and connections that the setup can resemble the intensive care ward of a hospital.

The equipment required for a multimedia project depends on the content of the project as well as its design. If you can find ready-made content such as sound effects, music, graphic art, QuickTime or AVI movies to use in your project, you may not need extra tools for making your own. Multimedia developers have separate equipment for digitizing sound from microphones or tape, and for scanning photos and other printed matter.

Connection               Transfer rate
Serial port              115 Kbit/s
Standard parallel port   115 Kbit/s
Original USB             12 Mbit/s
IDE                      3.3-16.7 Mbit/s
SCSI-1                   5 Mbit/s
SCSI-2                   10 Mbit/s
Ultra SCSI               20 Mbit/s
Ultra 2 SCSI             40 Mbit/s
Wide Ultra 2 SCSI        40 Mbit/s
Ultra 3 SCSI             80 Mbit/s
Wide Ultra 3 SCSI        160 Mbit/s
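These rates translate directly into transfer times. The sketch below estimates how long a 100 MB media file (a hypothetical size) would take over a few of the connections in the table:

    # Estimate transfer times for a 100 MB file using rates from the
    # table above (Mbit/s). Overhead is ignored, so these are best cases.
    rates_mbit = {
        "Serial port": 0.115,
        "Original USB": 12,
        "SCSI-2": 10,
        "Ultra 3 SCSI": 80,
        "Wide Ultra 3 SCSI": 160,
    }
    file_mbit = 100 * 8       # 100 megabytes expressed in megabits

    for name, rate in rates_mbit.items():
        print(f"{name:18s} {file_mbit / rate:8.1f} s")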





SCSI

Small Computer System Interface, or SCSI (pronounced ['skʌzi]), is a set of standards for physically connecting and transferring data between computers and peripheral devices. The SCSI standards define commands, protocols, and electrical and optical interfaces. SCSI is most commonly used for hard disks and tape drives, but it can connect a wide range of other devices, including scanners and CD drives. The SCSI standard defines command sets for specific peripheral device types; the presence of "unknown" as one of these types means that in theory it can be used as an interface to almost any device, but the standard is highly pragmatic and addressed toward commercial requirements.

SCSI is an intelligent interface: it hides the complexity of the physical format. Every device attaches to the SCSI bus in a similar manner.

SCSI is a peripheral interface: up to 8 or 16 devices can be attached to a single bus. There can be any number of hosts and peripheral devices, but there should be at least one host.

SCSI is a buffered interface: it uses handshake signals between devices. SCSI-1 and SCSI-2 have the option of parity error checking. Starting with SCSI-U160 (part of SCSI-3), all commands and data are error checked by a CRC32 checksum.

SCSI is a peer-to-peer interface: the SCSI protocol defines communication from host to host, host to peripheral device, and peripheral device to peripheral device. However, most peripheral devices are exclusively SCSI targets, incapable of acting as SCSI initiators, i.e. unable to initiate SCSI transactions themselves. Therefore peripheral-to-peripheral communications are uncommon, but possible in most SCSI applications. The Symbios Logic 53C810 chip is an example of a PCI host interface that can act as a SCSI target.


LECTURE 3:

IDE, ATA, EIDE, Ultra ATA, Ultra IDE

The ATA (Advanced Technology Attachment) standard is a standard interface that allows you to connect storage peripherals to PC computers. The ATA standard was developed on May 12, 1994 by ANSI (document X3.221-1994).

Despite the official name "ATA", this standard is better known by the commercial term IDE (Integrated Drive Electronics) or Enhanced IDE (EIDE or E-IDE).

The ATA standard was originally intended for connecting hard drives; however, an extension called ATAPI (ATA Packet Interface) was developed in order to be able to interface other storage peripherals (CD-ROM drives, DVD-ROM drives, etc.) on an ATA interface.

Since the Serial ATA standard (written S-ATA or SATA) has emerged, which allows data to be transferred over a serial link, the term "Parallel ATA" (written PATA or P-ATA) is sometimes used in place of "ATA" in order to differentiate between the two standards.

The Principle

The ATA standard allows you to connect storage peripherals directly to the motherboard by a ribbon cable, which is generally made up of 40 parallel wires and three connectors (usually a blue connector for the motherboard, and a black connector and a grey connector for the two storage peripherals).

On the cable, one of the peripherals must be declared the master and the other the slave. By convention, the far connector (black) is reserved for the master peripheral and the middle connector (grey) for the slave peripheral. A mode called cable select (abbreviated as CS or C/S) allows the master and slave peripherals to be defined automatically, as long as the computer's BIOS supports this functionality.


USB:

Universal Serial Bus (USB) connects more than computers and peripherals. It has the power to connect you with a whole new world of PC experiences. USB is your instant connection to the fun of digital photography or the limitless creative possibilities of digital imaging. You can use USB to connect with other people through the power of PC telephony and video conferencing. Once you've tried USB, we think you'll grow quite attached to it!

In information technology, Universal Serial Bus (USB) is a serial bus standard to interface devices to a host computer. USB was designed to allow many peripherals to be connected using a single standardized interface socket and to improve plug-and-play capabilities by allowing hot swapping, that is, by allowing devices to be connected and disconnected without rebooting the computer or turning off the device. Other convenient features include providing power to low-consumption devices without the need for an external power supply and allowing many devices to be used without requiring manufacturer-specific, individual device drivers to be installed.

USB is intended to replace many legacy varieties of serial and parallel ports. USB can connect computer peripherals such as mice, keyboards, PDAs, gamepads and joysticks, scanners, digital cameras, printers, personal media players, and flash drives. For many of those devices, USB has become the standard connection method. USB was originally designed for personal computers, but it has become commonplace on other devices such as PDAs and video game consoles, and as a bridging power cord between a device and an AC adapter plugged into a wall plug for charging purposes. As of 2008, there were about 2 billion USB devices in the world.

The design of USB is standardized by the USB Implementers Forum (USB-IF), an industry standards body incorporating leading companies from the computer and electronics industries. Notable members have included Agere (now merged with LSI Corporation), Apple Inc., Hewlett-Packard, Intel, NEC, and Microsoft.

Firewire

Presentation of the FireWire Bus (IEEE 1394)

The IEEE 1394 bus (named after the standard to which it refers) was developed at the end of 1995 in order to provide an interconnection system that allows data to circulate at high speed and in real time. The company Apple gave it the commercial name "FireWire", which is how it is most commonly known. Sony also gave it a commercial name, i.Link. Texas Instruments preferred to call it Lynx.

FireWire is a port found on some computers that allows you to connect peripherals (particularly digital cameras) at a very high bandwidth. There are expansion boards (generally in PCI or PC Card / PCMCIA format) that allow you to equip a computer with FireWire connectors. FireWire connectors and cables can be easily spotted thanks to their shape as well as their logo.

FireWire Connectors

There are different FireWire connectors for each of the IEEE 1394 standards.

The IEEE 1394a standard specifies two connectors:
o Connectors 1394a-1995.
o Connectors 1394a-2000, called mini-DV because they are used on Digital Video (DV) cameras.

The IEEE 1394b standard specifies two types of connectors that are designed so that 1394b Beta cables can be plugged into Beta and Bilingual connectors, but 1394b Bilingual cables can only be plugged into Bilingual connectors:
o 1394b Beta connectors.
o 1394b Bilingual connectors.

How the FireWire Bus Works

The IEEE 1394 bus has about the same structure as the USB bus, except that its cable is made up of six wires (2 pairs for the data and the clock, and 2 wires for the power supply) that allow it to reach a bandwidth of 800 Mb/s (soon it should be able to reach 1.6 Gb/s, or even 3.2 Gb/s down the road). The two wires for the clock are the major difference between the USB bus and the IEEE 1394 bus, i.e. the possibility of operating in two transfer modes:

Asynchronous transfer mode: this mode is based on the transmission of packets at variable time intervals. The host sends a data packet and waits to receive a receipt notification from the peripheral. If the host receives a receipt notification, it sends the next data packet. Otherwise, the first packet is resent after a certain period of time.

Synchronous mode: this mode allows data packets of specific sizes to be sent at regular intervals. A node called the Cycle Master is in charge of sending a synchronisation packet (called a Cycle Start packet) every 125 microseconds. This way, no receipt notification is necessary, which guarantees a set bandwidth. Moreover, given that no receipt notification is necessary, the method of addressing a peripheral is simplified and the saved bandwidth allows you to gain throughput.

Another innovation of the IEEE 1394 standard is that bridges (systems that allow you to link buses to other buses) can be used. Peripheral addresses are set with a node (i.e. peripheral) identifier encoded on 16 bits. This identifier is divided into two fields: a 10-bit field that identifies the bridge and a 6-bit field that specifies the node. It is therefore possible to connect 1,023 bridges (or 2^10 - 1), each carrying up to 63 nodes (or 2^6 - 1), which means it is possible to address 65,535 peripherals. The IEEE 1394 standard allows hot swapping. While the USB bus is intended for peripherals that do not require a lot of resources (e.g. a mouse or a keyboard), the IEEE 1394 bandwidth is larger and is intended for new multimedia uses (video acquisition, etc.).
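The 16-bit identifier layout can be shown with a few lines of bit manipulation; a minimal sketch (function names are illustrative, not from any FireWire API):

    # Pack/unpack the 16-bit IEEE 1394 node identifier described above:
    # a 10-bit bridge/bus field followed by a 6-bit node field.
    def pack_node_id(bridge: int, node: int) -> int:
        assert 0 <= bridge < 1024 and 0 <= node < 64
        return (bridge << 6) | node

    def unpack_node_id(ident: int):
        return ident >> 6, ident & 0x3F      # (bridge, node)

    ident = pack_node_id(bridge=5, node=17)
    print(f"{ident:016b}", unpack_node_id(ident))   # -> (5, 17)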


LECTURE 4:

MEMORY STORAGE DEVICES:

The memory requirement to estimate for a multimedia project is the space required on a floppy disk, hard disk or CD-ROM, not the random access memory used while your computer is running. You must have a sense of the project's content (color images, text, and the programming code that glues it all together) to arrive at the required memory. If you are making multimedia, you will also need memory for storing and archiving the working files used during production, including audio and video files.


RAM:

Random-access memory (usually known by its acronym, RAM) is a form of computer data storage. Today it takes the form of integrated circuits that allow the stored data to be accessed in any order (i.e., at random). The word random thus refers to the fact that any piece of data can be returned in a constant time, regardless of its physical location and whether or not it is related to the previous piece of data.

ROM:

Read-only memory (usually known by its acronym, ROM) is a class of storage media used in computers and other electronic devices. Because data stored in ROM cannot be modified (at least not very quickly or easily), it is mainly used to distribute firmware.

In its strictest sense, ROM refers only to mask ROM (the oldest type of solid state ROM), which is fabricated with the desired data permanently stored in it, and thus can never be modified. However, more modern types such as EPROM and flash EEPROM can be erased and re-programmed multiple times; they are still described as "read-only memory" (ROM) because the reprogramming process is generally infrequent, comparatively slow, and often does not permit random-access writes to individual memory locations. Despite the simplicity of mask ROM, economies of scale and field-programmability often make reprogrammable technologies more flexible and inexpensive, so that mask ROM is rarely used in new products as of 2007.

Zip, Jaz, SyQuest, and optical storage devices:

For years the SyQuest 44 MB removable cartridges were the most widely used portable medium among multimedia developers. Zip drives, with their likewise inexpensive 100 MB, 250 MB and 750 MB cartridges built on floppy disk technology, significantly eroded SyQuest's market share for removable media. Iomega's Jaz cartridges, built on hard drive technology, provide one or two gigabytes of removable storage media and have transfer rates fast enough for multimedia developers.

Other storage devices are:

Digital versatile disc (DVD)
Flash or thumb drive
CD-ROM players
CD recorders
CD-RW

INPUT DEVICES:

Keyboards
Mice
Trackballs
Touchscreens
Magnetic card encoders and readers
Graphics tablets
Scanners
Optical character recognition (OCR) devices
Infrared remotes
Voice recognition systems
Digital cameras

OUTPUT HARDWARE:

Presentation of the audio and visual components of your multimedia project requires hardware that may not come attached to the computer itself, such as speakers, amplifiers and monitors. There is no greater test of the benefit of good output hardware than to feed the audio output of your computer into an external amplifier system. Some output devices are:

Audio devices
Amplifiers and speakers
Portable media players
Monitors
Video devices
Projectors
Printers


Communication Devices:

Communication among workgroup members and with the client is essential for the efficient and assured completion of a project. When you need it immediately, an internet connection is required. If you and your client are both connected via the internet, a combination of communication by e-mail and FTP (File Transfer Protocol) may be the most cost-effective and efficient solution for creative developers and managers. Various communication devices are listed below:

Modems
ISDN and DSL
Cable modems





LECTURE 5:

CD-AUDIO, CD-ROM, CD-I:

CD Audio (or CD-DA, Digital Audio):

The first widely used CDs were music CDs, which appeared in the early 1980s. CD-DA is the format for storing recorded music in digital form, as on the CDs commonly found in music stores. The Red Book standards, created by Sony and Philips, give specifications such as the size of the pits and lands, how the audio is organized and where it is located on the CD, as well as how errors are corrected. CD Audio discs can hold up to 75 minutes of sound. To provide the highest quality, the music is sampled at 44.1 kHz, 16-bit stereo. Because of the high-quality sound of audio CDs, they quickly became very popular. Other CD formats evolved from the Red Book standards.
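The 44.1 kHz, 16-bit stereo figures fix the raw data rate, and from it the storage needed for 75 minutes of audio. A quick worked check:

    # Worked check of the CD-DA figures above.
    sample_rate = 44_100            # samples per second per channel
    bits_per_sample = 16
    channels = 2                    # stereo

    bit_rate = sample_rate * bits_per_sample * channels
    print(bit_rate)                 # 1,411,200 b/s, about 1.41 Mbit/s

    bytes_75_min = bit_rate / 8 * 75 * 60
    print(f"{bytes_75_min / 1e6:.0f} MB")   # ~794 MB of raw PCM audio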



CD-ROM:

Although the Red Book standards were excellent for audio, they were useless for data, text, graphics and video. The Yellow Book standards built upon the Red Book, adding specifications for a track to accommodate data, thus establishing a format for storing data, including video and audio, in digital form on a compact disc. CD-ROM also provided a better error-checking scheme, which is important for data. One drawback of the Yellow Book standards is that they allowed various manufacturers to determine their own methods of organizing and accessing data. This led to incompatibilities across computer platforms.

For the first few years of its existence, the Compact Disc was a medium used purely for audio. However, in 1985 the Yellow Book CD-ROM standard was established by Sony and Philips, which defined a non-volatile optical computer data storage medium using the same physical format as audio compact discs, readable by a computer with a CD-ROM drive.

CD-I (Interactive):

CD-i, or Compact Disc Interactive, is the name of an interactive multimedia CD player developed and marketed by Royal Philips Electronics N.V. CD-i also refers to the multimedia Compact Disc standard utilized by the CD-i console, also known as Green Book, which was co-developed by Philips and Sony in 1986 (not to be confused with MMCD, the pre-DVD format also co-developed by Philips and Sony). The first Philips CD-i player, released in 1991 and initially priced around USD $700, is capable of playing interactive CD-i discs, Audio CDs, CD+G (CD+Graphics), Karaoke CDs, and Video CDs (VCDs), though the latter requires an optional "Digital Video Card" to provide MPEG-1 decoding.

Developed by Philips in 1986, the specifications for CD-I were published in the Green Book. CD-I is a platform-specific format; it requires a CD-I player, with a proprietary operating system, attached to a television set. Because of the need for specific CD-I hardware, this format has had only marginal success in the consumer market. One of the benefits of CD-I is its ability to synchronize sound and pictures on a single track of the disc.

PRESENTATION DEVICES AND USER INTERFACES IN MULTIMEDIA


LECTURE 6:

2. LANs and multimedia; internet, World Wide Web & multimedia distribution networks: ATM & ADSL





Networks

Telephone networks dedicate a set of resources that forms a complete path from end to end for the duration of the telephone connection. The dedicated path guarantees that the voice data can be delivered from one end to the other end in a smooth and timely way, but the resources remain dedicated even when there is no talking. In contrast, digital packet networks, for communication between computers, use time-shared resources (links, switches, and routers) to send packets through the network. The use of shared resources allows computer networks to be used at high utilization, because even small periods of inactivity can be filled with data from a different user. The high utilization and shared resources create a problem with respect to the timely delivery of video and audio over data networks. Current research centers around reserving resources for time-sensitive data, which will make digital data networks more like telephone voice networks.

Internet

The Internet and intranets, which use the TCP protocol suite, are the most important delivery vehicles for multimedia objects. TCP provides communication sessions between applications on hosts, sending streams of bytes for which delivery is always guaranteed by means of acknowledgments and retransmission. User Datagram Protocol (UDP) is a ``best-effort'' delivery protocol (some messages may be lost) that sends individual messages between hosts. Internet technology is used on single LANs and on connected LANs within an organization, which are sometimes called intranets, and on ``backbones'' that link different organizations into one single global network. Internet technology allows LANs and backbones of totally different technologies to be joined together into a single, seamless network.

Part of this is achieved through communications processors called routers. Routers can be accessed from two or more networks, passing data back and forth as needed. The routers communicate information on the current network topology among themselves in order to build routing tables within each router. These tables are consulted each time a message arrives, in order to send it to the next appropriate router, eventually resulting in delivery.
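The TCP/UDP contrast above maps directly onto the standard sockets API. A minimal sketch (the localhost address and port are placeholders, and something must be listening for the TCP connect to succeed):

    import socket

    HOST, PORT = "127.0.0.1", 9000    # hypothetical endpoint

    # UDP: best-effort, individual messages, no delivery guarantee.
    udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    udp.sendto(b"one datagram, may be lost", (HOST, PORT))
    udp.close()

    # TCP: a session carrying an acknowledged, retransmitted byte
    # stream; delivery is guaranteed while the session lasts.
    tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    tcp.connect((HOST, PORT))
    tcp.sendall(b"a stream of bytes, delivery guaranteed")
    tcp.close()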

Token ring

Token ring [31] is a hardware architecture for passing packets between stations on a LAN. Since a single circular communication path is used for all messages, there must be a way to decide which station is allowed to send at any time. In token ring, a ``token,'' which gives a station the right to transmit data, is passed from station to station. The data rate of a token ring network is 16 Mb/s.

Ethernet

Ethernet [31] LANs use a common wire to transmit data from station to station. Mediation between transmitting stations is done by having stations listen before sending, so that they will not interfere with each other. However, two stations could begin to send at the same time and collide, or one station could start to send significantly later than another but not know it because of propagation delay. In order to detect these situations, stations continue to listen while they transmit and determine whether their message was possibly garbled by a collision. If there is a collision, a retransmission takes place (by both stations) a short but random time later. Ethernet LANs can transmit data at 10 Mb/s. However, when multiple stations are competing for the LAN, the throughput may be much lower because of collisions and retransmissions.

Switched Ethernet

Switches may be used at a hub to create many small LANs where one large one existed before. This reduces contention and permits higher throughput. In addition, Ethernet is being extended to 100 Mb/s throughput. The combination, switched Ethernet, is much more appropriate for multimedia than regular Ethernet, because existing Ethernet LANs can support only about six MPEG video streams, even when nothing else is being sent over the LAN.
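The stream counts quoted here and later in the text follow from dividing the LAN capacity by the 1.5 Mb/s rate of an MPEG stream; a quick check, ignoring protocol overhead and collisions:

    # How many 1.5 Mb/s MPEG streams fit on each LAN type (best case).
    MPEG_MBPS = 1.5
    for lan, capacity_mbps in [("Ethernet", 10), ("token ring", 16),
                               ("fast/switched Ethernet", 100)]:
        print(f"{lan}: {int(capacity_mbps // MPEG_MBPS)} streams")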

ATM

Asynchronous Transfer Mode (ATM) [29, 32] is a new packet-network protocol designed for mixing voice, video, and data within the same network. Voice is digitized in telephone networks at 64 Kb/s (kilobits per second) and must be delivered with minimal delay, so very small packet sizes are used. On the other hand, video data and other business data usually benefit from quite large block sizes. An ATM packet consists of 48 octets (the term used in communications for eight bits, called a byte in data processing) of data preceded by five octets of control information. An ATM network consists of a set of communication links interconnected by switches. Communication is preceded by a setup stage in which a path through the network is determined to establish a circuit. Once a circuit is established, 53-octet packets may be streamed from point to point.

ATM networks can be used to implement parts of the Internet by simulating links between routers in separate intranets. This means that the ``direct'' intranet connections are actually implemented by means of shared ATM links and switches.

ATM, both between LANs and between servers and workstations on a LAN, will support data rates that will allow many users to make use of motion video on a LAN.
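The 48 + 5 octet cell format implies a fixed header overhead, and the 64 Kb/s voice rate fixes how many cells a voice stream generates; a quick worked check:

    # ATM cell arithmetic from the figures above.
    payload, header = 48, 5
    cell = payload + header                       # 53 octets on the wire
    print(f"header overhead: {header / cell:.1%}")     # ~9.4%

    voice_bytes_per_s = 64_000 // 8               # 64 Kb/s voice stream
    print(voice_bytes_per_s / payload, "cells per second")   # ~166.7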





Data-transmission techniques

Modems

Modulator/demodulators, or modems, are used to send digital data over analog channels by means of a carrier signal (sine wave) modulated by changing the frequency, phase, amplitude, or some combination of them in order to represent digital data. (The result is still an analog signal.) Modulation is performed at the transmitting end and demodulation at the receiving end. The most common use for modems in a computer environment is to connect two computers over an analog telephone line. Because of the quality of telephone lines, the data rate is commonly limited to 28.8 Kb/s. For transmission of customer analog signals between telephone company central offices, the signals are sampled and converted to ``digital form'' (actually, still an analog signal) for transmission between offices. Since the customer voice signal is represented by a stream of digital samples at a fixed rate (64 Kb/s), the data rate that can be achieved over analog telephone lines is limited.

ISDN

Integrated Services Digital Network (ISDN) extends the telephone company digital network by sending the digital form of the signal all the way to the customer. ISDN is organized around 64 Kb/s transmission speeds, the speed used for digitized voice. An ISDN line was originally intended to simultaneously transmit a digitized voice signal and a 64 Kb/s data stream on a single wire. In practice, two channels are used to produce a 128 Kb/s line, which is faster than the 28.8 Kb/s speed of typical computer modems but not adequate to handle MPEG video.

ADSL

Asymmetric Digital Subscriber Lines (ADSL) [33-35] extend telephone company twisted-pair wiring to yet greater speeds. The lines are asymmetric, with an outbound data rate of 1.5 Mb/s and an inbound rate of 64 Kb/s. This is suitable for video on demand, home shopping, games, and interactive information systems (collectively known as interactive television), because 1.5 Mb/s is fast enough for compressed digital video, while a much slower ``back channel'' is needed for control. ADSL uses very high-speed modems at each end to achieve these speeds over twisted-pair wire.

ADSL is a critical technology for the Regional Bell Operating Companies (RBOCs), because it allows them to use the existing twisted-pair infrastructure to deliver high data rates to the home.





Cable systems

Cable television systems provide analog broadcast signals on a coaxial cable, instead of through the air, with the attendant freedom to use additional frequencies and thus provide a greater number of channels than over-the-air broadcast. The systems are arranged like a branching tree, with ``splitters'' at the branch points. They also require amplifiers for the outbound signals, to make up for signal loss in the cable. Most modern cable systems use fiber optic cables for the trunk and major branches and use coaxial cable for only the final loop, which services one or two thousand homes. The root of the tree, where the signals originate, is called the head end.

Cable modems

Cable modems are used to modulate digital data, at high data rates, into an analog 6-MHz-bandwidth TV-like signal. These modems can transfer 20 to 40 Mb/s in a frequency bandwidth that would have been occupied by a single analog TV signal, allowing multiple compressed digital TV channels to be multiplexed over a single analog channel. The high data rate may also be used to download programs or World Wide Web content or to play compressed video. Cable modems are critical to cable operators, because they enable them to compete with the RBOCs using ADSL.

Set-top box

The STB is an appliance that connects a TV set to a cable system, terrestrial broadcast antenna, or satellite broadcast antenna. The STB in most homes has two functions. First, in response to a viewer's request with the remote-control unit, it shifts the frequency of the selected channel to either channel 3 or 4, for input to the TV set. Second, it is used to restrict access and block channels that are not paid for. Addressable STBs respond to orders that come from the head end to block and unblock channels.





Admission control

Digital multimedia systems that are shared by multiple clients can deliver multimedia data to a limited number of clients. Admission control is the function which ensures that once delivery starts, it will be able to continue with the required quality of service (the ability to transfer isochronous data on time) until completion. The maximum number of clients depends upon the particular content being used and other characteristics of the system.
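The core of the idea fits in a few lines: admit a new stream only if the bandwidth already promised, plus the new stream's requirement, still fits the capacity. A minimal sketch with hypothetical numbers:

    # Admission control sketch: refuse new streams rather than let
    # existing ones miss their quality-of-service guarantees.
    CAPACITY_MBPS = 16.0            # e.g. a token-ring LAN (assumed)
    admitted_mbps = 0.0

    def admit(stream_mbps):
        global admitted_mbps
        if admitted_mbps + stream_mbps <= CAPACITY_MBPS:
            admitted_mbps += stream_mbps
            return True             # QoS can still be honored
        return False                # refuse; capacity would be exceeded

    for request in [1.5, 1.5, 6.0, 6.0, 6.0]:
        print(request, "->", "admitted" if admit(request) else "refused")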





Digital watermarks

Because it is so easy to transmit perfect copies of digital objects, many owners of digital content wish to control unauthorized copying. This is often to ensure that proper royalties have been paid. Digital watermarking [38, 39] consists of making small changes in the digital data that can later be used to determine the origin of an unauthorized copy. Such small changes in the digital data are intended to be invisible when the content is viewed. This is very similar to the ``errors'' that mapmakers introduce in order to prove that suspect maps are copies of their maps. In other circumstances, a visible watermark is applied in order to make commercial use of the image impractical.
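One classic way to make such invisibly small changes is to hide an identifier in the least significant bit of each pixel value. This is a generic illustration of the idea, not the specific schemes of the cited references; the pixel values are hypothetical:

    # Invisible watermark sketch: embed one identifier bit per pixel
    # in the least significant bit, a change far too small to see.
    def embed(pixels, bits):
        return [(p & ~1) | b for p, b in zip(pixels, bits)]

    def extract(pixels):
        return [p & 1 for p in pixels]

    pixels = [200, 13, 76, 149, 201, 88, 255, 34]   # 8-bit grey values
    mark   = [1, 0, 1, 1, 0, 0, 1, 0]               # origin identifier

    marked = embed(pixels, mark)
    print(extract(marked) == mark)    # True: the origin is recoverable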

LECTURE 7:

Multimedia architecture

In this section we show how the multimedia technologies are organized in order to create multimedia systems, which in general consist of suitable organizations of clients, application servers, and storage servers that communicate through a network. Some multimedia systems are confined to a stand-alone computer system with content stored on hard disks or CD-ROMs. Distributed multimedia systems communicate through a network and use many shared resources, making quality of service very difficult to achieve and resource management very complex.




Single-user stand-alone systems

Stand-alone multimedia systems use CD-ROM disks and/or hard disks to hold multimedia objects and the scripting metadata to orchestrate the playout. CD-ROM disks are inexpensive to produce and hold a large amount of digital data; however, the content is static: new content requires creation and physical distribution of new disks for all systems. Decompression is now done by either a special decompression card or a software application that runs on the processor. The technology trend is toward software decompression.





Multi-user systems

Video over LANs

Stand-alone multimedia systems can be converted to networked multimedia systems by using client-server remote-file-system technology to enable the multimedia application to access data stored on a server as if the data were on a local storage medium. This is very convenient, because the stand-alone multimedia application does not have to be changed. LAN throughput is the major challenge in these systems. Ethernet LANs can support less than 10 Mb/s, and token rings 16 Mb/s. This translates into six to ten 1.5 Mb/s MPEG video streams. Admission control is a critical problem. The OS/2 LAN server is one of the few products that support admission control [40]. It uses priorities with token-ring messaging to differentiate between multimedia traffic and lower-priority data traffic. It also limits the multimedia streams to be sure that they do not sum to more than the capacity of the LAN. Without some type of resource reservation and admission control, the only way to give some assurance of continuous video is to operate with small LANs and make sure that the server is on the same LAN as the client. In the future, ATM and fast Ethernet will provide capacity more appropriate to multimedia.

Direct Broadcast Satellite

Direct Broadcast Satellite (DBS), which broadcasts up to 80 channels from a satellite at high power, arrived in 1995 as a major force in the delivery of broadcast video. The high power allows small (18-inch) dishes with line-of-sight to the satellite to capture the signal. MPEG compression is used to get the maximum number of channels out of the bandwidth. The RCA/Hughes service employs two satellites and a backup to provide 160 channels. This large number of channels allows many premium and special-purpose channels as well as the usual free channels. Many more pay-per-view channels can be provided than in conventional cable systems. This allows enhanced pay-per-view, in which the same movie is shown with staggered starting times of half an hour or an hour.

DBS requires a set-top box with much more function than a normal cable STB. The STB contains a demodulator to reconstruct the digital data from the analog satellite broadcast. The MPEG compressed form is decompressed, and a standard TV signal is produced for input to the TV set. The STB uses a telephone modem to periodically verify that the premium channels are still authorized and to report on use of the pay-per-view channels so that billing can be done.

Interactive TV and video to the home

Interactive TV and video to the home [2-5] allow viewers to select, interact with, and control video play on a TV set in real time. The user might be viewing a conventional movie, doing home shopping, or engaging in a network game. The compressed video flowing to the home requires high bandwidth, from 1.5 to 6 Mb/s, while the return path, used for selection and control, requires far lower bandwidth.

The STB used for interactive TV is similar to that used for DBS. The demodulation function depends upon the network used to deliver the digital data. A microprocessor with memory for limited buffering, as well as an MPEG decompression chip, is needed. The video is converted to a standard TV signal for input to the TV set. The STB has a remote-control unit, which allows the viewer to make choices from a distance. Some means are needed to allow the STB to relay viewer commands back to the server, depending upon the network being used.

Cable systems appear to be broadcast systems, but they can actually be used to deliver different content to each home. Cable systems often use fiber optic cables to send the video to converters that place it on local loops of coaxial cable. If a fiber cable is dedicated to each final loop, which services 500 to 1500 homes, there will be enough bandwidth to deliver an individual signal to many of those houses. The cable can also provide the reverse path to the cable head end. Ethernet-like protocols can be used to share the same channel with the other STBs in the local loop. This topology is attractive to cable companies because it uses the existing cable plant. If the appropriate amplifiers are not present in the cable system for the back channel, a telephone modem can be used to provide the back channel.

As mentioned above, the asymmetric data rates of ADSL are tailored for interactive TV. The use of standard twisted-pair wire, which has been brought to virtually every house, is attractive to the telephone industry. However, the twisted pair is a noisier medium than coaxial cable, so more expensive modems are needed, and distances are limited. ADSL can be used at higher data rates if the distance is further reduced.

Interactive TV architectures are typically three-tier, in which the client and server tiers interact through an application server. (In three-tier systems, the tier-1 systems are clients, the tier-2 systems are used for application programs, and the tier-3 systems are data servers.) The application tier is used to separate the logic of looking up material in indexes, maintaining the shopping state of a viewer, interacting with credit card servers, and other similar functions from the simple function of delivering multimedia objects.

The key research questions about interactive TV and video-on-demand are not computer science questions at all. Rather, they are the human-factors issues concerning ease of the on-screen interface and, more significantly, the marketing questions regarding what home viewers will find valuable and compelling.

Internet over cable systems

World Wide Web browsing allows users to see a rich text, video, sound, and graphics interface and allows them to access other information by clicking on text or graphics. Web pages are written in HyperText Markup Language (HTML) and use an application communications protocol called HTTP. The user responses, which select the next page or provide a small amount of text information, are normally quite short. On the other hand, the graphics and pictures require many times the number of bytes to be transmitted to the client. This means that distribution systems that offer asymmetric data rates are appropriate.

Cable TV systems can be used to provide asymmetric Internet access for home computers in ways that are very similar to interactive TV over cable. The data being sent to the client is digitized and broadcast over a prearranged channel over all or part of the cable system. A cable modem at the client end tunes to the right channel and demodulates the information being broadcast. It must also filter the information destined for the particular station from the information being sent to other clients. The low-bandwidth reverse channel is the same low-frequency band that is used in interactive TV. As with interactive TV, a telephone modem might be used for the reverse channel. The cable head end is then attached to the Internet using a router. The head end is also likely to offer other services that Internet Service Providers sell, such as permanent mailboxes. This asymmetric connection would not be appropriate for a Web server or some other type of commerce server on the Internet, because servers transmit too much data for the low-speed return path. The cable modem provides the physical link for the TCP/IP stack in the client computer. The client software treats this environment just like a LAN connected to the Internet.

Video servers on a LAN

LAN-based multimedia systems [4, 6, 15] go beyond the simple, client-server, remote file system type of video server, to advanced systems that offer a three-tier architecture with clients, application servers, and multimedia servers. The application servers provide applications that interact with the client and select the video to be shown. On a company intranet, LAN-based multimedia could be used for just-in-time education, on-line documentation of procedures, or video messaging. On the Internet, it could be used for a video product manual, interactive video product support, or Internet commerce. The application server chooses the video to be shown and causes it to be sent to the client.

There are three different ways that the application server can cause playout of the video: by giving the address of the video server and the name of the content to the client, which would then fetch it from the video server; by communicating with the video server and having it send the data to the client; or by communicating with both to set up the relationship.

The transmission of data to the client may be in push mode or pull mode. In push mode, the server sends data to the client at the appropriate rate. The network must have quality-of-service guarantees to ensure that the data gets to the client on time. In pull mode, the client requests data from the server, and thus paces the transmission.
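Pull mode can be pictured as the client requesting each chunk only when it is ready to play it, so the client sets the pace. A minimal sketch (fetch_chunk is a hypothetical stand-in for a request to the video server):

    import time

    def fetch_chunk(n):
        # Placeholder for a client-initiated request to the server.
        return f"video-chunk-{n}"

    CHUNK_DURATION = 0.04        # seconds of video per chunk (assumed)

    for n in range(5):
        chunk = fetch_chunk(n)   # pull: the client asks when ready
        print("playing", chunk)
        time.sleep(CHUNK_DURATION)   # playback itself paces the requests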

The current protocols for Internet use are TCP and UDP. TCP sets up sessions, and the server can push the data to the client. However, the ``moving-window'' algorithm of TCP, which prevents client buffer overrun, creates acknowledgments that pace the sending of data, thus making it in effect a pull protocol. Another issue in Internet architecture is the role of firewalls, which are used at the gateway between an intranet and the Internet to keep potentially dangerous or malicious Internet traffic from getting onto the intranet. UDP packets are normally never allowed in. TCP sessions are allowed, if they are created from the inside to the outside. A disadvantage of TCP for isochronous data is that error detection and retransmission is automatic and required, whereas it is preferable to discard garbled video data and just continue.

Resource reservation is just beginning to be incorporated on the Internet and intranets. Video will be considered to have higher priority, and the network will have to ensure that there is a limit to the amount of high-priority traffic that can be admitted. All of the routers on the path from the server to the client will have to cooperate in the reservation and the use of priorities.

Video conferencing

Video conferencing, which will be used on both intranets and the Internet, uses multiple data types and serves multiple clients in the same conference. Video cameras can be mounted near a PC display to capture the user's picture. In addition to the live video, these systems include shared white boards and show previously prepared visuals. Some form of mediation is needed to determine which participant is in control. Since the type of multimedia data needed for conferencing requires much lower data rates than most other types of video, low-bit-rate video, using approximately eight frames per second and requiring tens of kilobits per second, will be used with small window sizes for the ``talking heads'' and most of the other visuals. Scalability of a video conferencing system is important, because if all participants send to all other participants, the traffic goes up as the square of the number of participants. This can be made linear by having all transmissions go through a common server. If the network has a multicast facility, the server can use that to distribute to the participants.
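The quadratic-versus-linear claim is easy to check numerically. In a full mesh each of n participants sends to the other n - 1; through a common server, each participant needs only one upstream and one downstream connection (the exact count depends on how the server forwards or mixes streams):

    # Stream counts: full mesh vs. a common server.
    for n in [2, 5, 10, 20]:
        mesh = n * (n - 1)       # every participant to every other
        via_server = 2 * n       # one up + one down per participant
        print(f"n={n:2d}  mesh={mesh:4d}  via server={via_server:3d}")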


LECTURE 8:

MULTIMEDIA AUTHORING TOOLS:

A multimedia authoring tool is a program that helps you write multimedia applications. A multimedia authoring tool enables you to create a final application merely by linking together objects, such as a paragraph of text, an illustration, or a song. They are used exclusively for applications that present a mixture of textual, graphical, and audio data.

With multimedia authoring software you can make video productions (including CDs and DVDs), design interactivity and user interfaces, and create animations, screen savers, games, presentations, interactive training and simulations.


Types of authoring tools:

There are basically three types of authoring tools. These are the following:

Card- or Page-based Tools
Icon-based Tools
Time-based Tools

Alongside the multimedia authoring tools proper, multimedia software also spans familiar desktop tools and the elemental tools covered in Lecture 9. Familiar tools include:

Word Processors: Microsoft Word, WordPerfect
Spreadsheets: Excel
Databases: Q+E Database/VB
Presentation Tools: PowerPoint


Card- or Page-based Tools

In these authoring systems, elements are organized as pages of a book or a stack of cards. The authoring system lets you link these pages or cards into an organized sequence, and also allows you to play sound elements and launch animations and digital videos. Page-based authoring systems are object-oriented: the objects are the buttons, graphics, etc. Each object may contain a programming script that is activated when an event related to that object occurs.

EX: Visual Basic.


Icon-based Tools

Icon-based, event-driven tools provide a visual programming approach to organizing and presenting multimedia. First you build a flowchart of events, tasks and decisions by using appropriate icons from a library. These icons can include menu choices, graphic images and sounds. When the flowchart is built, you can add your content: text, graphics, animations, sounds and video movies.

EX: Authorware Professional.


Time-based Tools:

Time-based authoring tools are the most common of the multimedia authoring tools. In these authoring systems, elements are organized along a time line. They are the best to use when you have a message with a beginning and an end. Sequentially organized graphic frames are played back at a speed that you can set. Other elements (such as audio events) are triggered at a given time or location in the sequence of events.

EX: Animation Works Interactive.


LECTURE 9:

Elemental Tools:

Elemental tools help us work with the important basic elements of a project: its graphics, images, sound, text and moving pictures.

Elemental tools include:

Painting and Drawing Tools
CAD and 3-D Drawing Tools
Image Editing Tools
OCR Software
Sound Editing Programs
Tools for Creating Animations and Digital Movies
Helpful Accessories



Painting and Drawing Tools:

Painting and drawing tools are the most important items in your toolkit, because the impact of the graphics in your project will likely have the greatest influence on the end user.

Painting software is dedicated to producing excellent bitmapped images. Drawing software is dedicated to producing line art that is easily printed to paper. Drawing packages include powerful and expensive computer-aided design (CAD) software.

Ex: DeskDraw, DeskPaint, Designer




CAD and 3-D Drawing Tools

CAD (computer-aided design) software is used by architects, engineers, drafters, artists and others to create precision drawings or technical illustrations. It can be used to create two-dimensional (2-D) drawings or three-dimensional models. CAD images can spin about in space, with lighting conditions exactly simulated and shadows properly drawn. With CAD software you can stand in front of your work and view it from any angle, making judgments about its design.

Ex: AutoCAD


Image Editing Tools

Image editing applications are specialized and powerful tools for enhancing and retouching existing bitmapped images. These programs are also indispensable for rendering images used in multimedia presentations. Modern versions of these programs also provide many of the features and tools of painting and drawing programs, and can be used to create images from scratch as well as to work on images digitized from scanners, digital cameras or artwork files created by painting or drawing packages.

Ex: Photoshop


OCR Software

Often you will have printed matter and other text to incorporate into your project, but no electronic text file. With Optical Character Recognition (OCR) software, a flat-bed scanner and your computer, you can save many hours of typing printed words and get the job done faster and more accurately.

Ex: Perceive


Sound Editing Programs

Sound editing tools for both digitized and MIDI sound let you see music as well as hear it. By drawing a representation of the sound as a waveform, you can cut, copy, paste and edit segments of the sound with great precision and make your own sound effects.

Using editing tools to make your own MIDI files requires knowing about keys, notations and instruments, and you will need a MIDI synthesizer or device connected to the computer.

Ex: SoundEdit Pro


Tools for Creating Animations and Digital Movies

Animations and digital movies are sequences of bitmapped graphic scenes (frames), rapidly played back. But animations can also be made within an authoring system by rapidly changing the location of objects to generate an appearance of motion. Movie-making tools let you edit and assemble video clips captured from cameras, animations, scanned images and other digitized movie segments. The completed clip, often with added transitions and visual effects, can then be played back.

Ex: Animator Pro and SuperVideo Windows


Helpful Accessories:

No multimedia toolkit is complete without a few indispensable utilities to perform some odd but repeated tasks; these are the accessories. For example, a screen-grabber is essential: because bitmap images are so common in multimedia, it is important to have a tool for grabbing all or part of the screen display so you can import it into your authoring system or copy it into an image editing application.

LECTURE 10:

Anti-aliasing

One of the most important techniques in making graphics and text easy to read and pleasing to the eye on-screen is anti-aliasing. Anti-aliasing is a cheaty way of getting round the low 72dpi resolution of the computer monitor and making objects appear as smooth as if they'd just stepped out of a 1200dpi printer (nearly).


Take a look at these images. The letter a on the left is un-anti-aliased and looks coarse compared with the letter on the right.


If we zoom in we can see better what's happening. Look at how the un-anti-aliased example below left breaks up curves into steps and jagged outcrops. This is what gives the letter its coarse appearance. The example on the right is the same letter, same point size and everything, but with anti-aliasing turned on in Photoshop's text tool. Notice how the program has substituted shades of grey around the lines which would otherwise be broken across a pixel.



But anti-aliasing is more than just making something slightly fuzzy so that you can't see the jagged edges: it's a way of fooling the eye into seeing straight lines and smooth curves where there are none.

To see how anti-aliasing works, let's take a diagonal line drawn across a set of pixels. In the example left the pixels are marked by the grid: real pixels don't look like that of course, but the principle is the same.


Pixels around an un-anti-aliased line can only be part of the line or not part of it: so the computer draws the line as a jagged set of pixels roughly approximating the course of our original nice smooth line. (Trivia fact: anti-aliasing was invented at MIT's Media Lab. So glad they do do something useful there....)


When the computer anti-aliases the line it works out how much of each in-between pixel would be covered by the diagonal line and draws that pixel as an intermediate shade between background and foreground. In our simple-minded example here this is shades of grey. This close up, the anti-aliasing is obvious and actually looks worse than the un-anti-aliased version, but try taking your glasses off, stepping a few yards back from the screen and screwing up your eyes a bit to emulate the effect of seeing the line on a VGA monitor covered in crud at its right size. Suddenly a nice, smooth line pops into view.
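The coverage idea can be sketched in a few lines of code. The following Python fragment is a simple-minded illustration only (the line y = x with a made-up thickness band, not any real renderer): it estimates how much of each pixel the line covers by supersampling, then prints the grid using characters as stand-ins for grey levels.

def coverage(px, py, half_width=0.7, samples=4):
    """Fraction of pixel (px, py) covered by the band around the line y = x."""
    inside = 0
    for i in range(samples):
        for j in range(samples):
            # Sample points spread across the pixel's area.
            x = px + (i + 0.5) / samples
            y = py + (j + 0.5) / samples
            if abs(y - x) <= half_width:   # inside the line's thickness band
                inside += 1
    return inside / (samples * samples)

shades = " .:-=+*#@"   # background -> foreground, our "shades of grey"
for py in range(8):
    print("".join(shades[int(coverage(px, py) * (len(shades) - 1))]
                  for px in range(8)))

Pixels fully inside the line print as solid, pixels the line merely clips print as intermediate shades: exactly the substituted greys described above.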

So how does one go about anti-aliasing an image? Just be grateful you don't have to do it by hand. Most screen design programs, including Photoshop and Paintshop Pro, include anti-alias options for things like text and line tools. The important thing is simply to remember to do it, and to do it at the appropriate time.

There are far too many graphics out on the Web that are perfectly well-designed, attractive and fitted to their purpose but end up looking amateurish because they haven't been anti-aliased. Equally, there are plenty of graphics that have turned to visual mush because they've been overworked with the anti-alias tool.

Generally, I guess, the rules are these:

Always anti-alias text except when the text is very small. This is to taste but I reckon on switching off anti-aliasing in Photoshop below about 12 points. If you're doing a lot with text this size, you really oughtn't be putting it in a graphic but formatting ASCII instead.

Always anti-alias rasterised EPSs (see the accompanying page for details). Except when you don't want to, of course.

If attempting to anti-alias something manually, or semi-manually, such as by putting a grey halo round a block black graphic, then only apply the effect at the last possible stage. And always, always, always bear in mind the target background colour. It's a fat lot of good anti-aliasing a piece of blue text on a white background if the target page is orange, because the anti-aliased halo is going to be shades of white-orange. I spent two hours re-colouring in a logo after doing exactly that. Doh!

Never confuse blur and anti-aliasing. The former is a great help in making things appear nice and smooth if applied to specific parts of an image, but it'll make your image just look runny if used all over.

That's about it. Anti-aliasing is of immense importance, especially in turning EPSs into something pleasant to look at onscreen, as I explain in the next couple of pages.


LECTURE 11:

ANIMATION

Animation is achieved by adding motion to still images or objects. It may also be defined as the creation of moving pictures one frame at a time. Animation grabs attention and makes a multimedia product more interesting and attractive.


There are a few types of animation:

• Layout transition: the simplest form of animation. Examples of transitions are spiral, stretch and zoom.

• Process/information transition: animation can be used to describe complex information or processes in an easier way, such as providing visual cues (e.g. how things work).

• Object movement: more complex animations, such as animated GIFs or animated scenes.




How does animation work?

Animation is possible because of:

o A biological phenomenon known as persistence of vision: an object seen by the human eye remains chemically mapped on the eye's retina for a brief time after viewing.

o A psychological phenomenon called phi: the human mind needs to conceptually complete a perceived action.

The combination of persistence of vision and phi makes it possible for a series of images that are changed very slightly and very rapidly, one after another, to seemingly blend together into a visual illusion of movement.

E.g. when a few cels (frames) of a rotating compass logo are changed continuously and rapidly, the arrow of the compass is perceived to be spinning.







• Still images are flashed in sequence to provide the illusion of animation.







• The speed of the image changes is called the frame rate.

• Film is typically delivered at 24 frames per second (fps).

• In reality the projector light flashes twice per frame, increasing the flicker rate to 48 times per second to remove any visible flicker.

• The more interruptions per second, the more continuous the beam of light appears, and the smoother the animation. (A small sketch of the frame-rate arithmetic follows this list.)
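A tiny Python sketch, illustrative only, of the frame-rate arithmetic just described:

FPS = 24  # film frame rate mentioned above

def frame_at(t_seconds, fps=FPS):
    # The frame on screen at time t is simply floor(t * fps).
    return int(t_seconds * fps)

for t in (0.0, 0.5, 1.0, 2.0):
    print(f"t={t}s -> frame {frame_at(t)}")

# A projector flashing each frame twice gives 2 * 24 = 48 flashes per second,
# raising the flicker rate without adding any new frames.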




Animation Techniques

• Cel animation
  o A series of progressively different graphics is used for each frame of film.
  o Made famous by Disney.

• Stop motion
  o Miniature three-dimensional sets (stage, objects) are used.
  o Objects are moved carefully between shots.

• Computer animation (digital cel & sprite animation)
  o Employs the same logic and procedural concepts as cel animation.
  o Objects are drawn using 3-D modelling software.
  o Objects and background are drawn on different layers, which can be put on top of one another.
  o Sprite animation: animation of a moving object (sprite).

• Computer animation (key frame animation)
  o Key frames are drawn to provide the pose and detailed characteristics of characters at important points in the animation. E.g. specify the start and end of a walk, or the top and bottom of a fall.
  o 3-D modelling and animation software will do the tweening process.
  o Tweening fills the gaps between the key frames to create a smooth animation (a minimal interpolation sketch follows this list).

• Hybrid technique
  o A technique that mixes cel and 3-D computer animation. It may also include live footage.

• Kinematics
  o The study of the motion of jointed structures (such as people).
  o Realistic animation of such movement can be complex; the latest technology uses motion capture for complex movement animation.

• Morphing
  o The process of transitioning from one image to another. When morphing, a few key elements (such as the nose in both images) are set to share the same location in the final image.
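As promised above, here is a minimal tweening sketch in Python (illustrative only; real 3-D packages interpolate many more attributes than position, and a cross-dissolve morph applies the same arithmetic to pixel values instead of coordinates):

def lerp(a, b, t):
    """Linearly interpolate between a and b for t in [0, 1]."""
    return a + (b - a) * t

key_start = (0.0, 0.0)     # pose position at the start of the walk
key_end = (100.0, 20.0)    # pose position at the end of the walk
in_betweens = 4            # frames to generate between the two key frames

for f in range(in_betweens + 2):
    t = f / (in_betweens + 1)          # 0.0 at the first key, 1.0 at the last
    x = lerp(key_start[0], key_end[0], t)
    y = lerp(key_start[1], key_end[1], t)
    print(f"frame {f}: ({x:5.1f}, {y:5.1f})")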



LECTURE 12:



VIDEO ON DEMAND




• Video can add great impact to your multimedia presentation due to its ability to draw people's attention.

• Video is also very hardware-intensive (it places the highest performance demand on your computer):
  o Storage issue: full-screen, uncompressed video uses over 20 megabytes per second (MBps) of bandwidth and storage space (a worked estimate follows this list).
  o Processor capability: handling very large amounts of data in real-time delivery.

• To get the highest video performance, we should:
  o Use video compression hardware to allow you to work with full-screen, full-motion video.
  o Use a sophisticated audio board to allow you to use CD-quality sounds.
  o Install a super-fast RAID (Redundant Array of Independent Disks) system that will support high-speed data transfer rates.
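To see where a figure of 20+ MBps comes from, here is a worked estimate in Python (assuming 640 x 480 pixels, 24-bit color and 30 frames per second; other resolutions and frame rates give different numbers):

width, height = 640, 480   # assumed full-screen resolution
bytes_per_pixel = 3        # 24-bit RGB color
fps = 30                   # assumed playback rate

bytes_per_frame = width * height * bytes_per_pixel   # 921,600 bytes per frame
bytes_per_second = bytes_per_frame * fps             # 27,648,000 bytes
print(f"{bytes_per_second / 1_000_000:.1f} MB per second")   # ~27.6 MB/s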




Analog vs Digital Video

• Digital video is beginning to replace analog in both the professional (production house and broadcast station) and consumer video markets.

• Digital video offers superior quality at a given cost. Why?
  o Digital video reduces the generational losses suffered by analog video.
  o Digital mastering means that quality will never be an issue.




Obtaining Video Clips

• If using analog video, we need to convert it to digital format first (in other words, we need to digitize the analog video).

• Sources for analog video include:
  o Existing video content: beware of licensing and copyright issues.
  o New footage (i.e. shoot your own video): ask permission from all the persons who appear or speak, as well as permission for the audio or music used.






How video works (Video Basics)

• Light passes through the camera lens and is converted to an electronic signal by a Charge-Coupled Device (CCD).

• Most consumer-grade cameras have a single CCD.

• Professional-grade cameras have three CCDs, one each for the red, green and blue color information.

• The output of the CCD is processed by the camera into a signal containing three channels of color information and a synchronization pulse (sync).

• If each channel of color information is transmitted as a separate signal on its own conductor, the signal output is called RGB, which is the preferred method for higher-quality and professional video work.



LECTURE 13:

IMAGES:


An image (from Latin imago) is an artifact, usually two-dimensional (a picture), that has a similar appearance to some subject, usually a physical object or a person.

The word image is also used in the broader sense of any two-dimensional figure such as a map, a graph, a pie chart, or an abstract painting. In this wider sense, images can also be rendered manually, such as by drawing, painting or carving; rendered automatically by printing or computer graphics technology; or developed by a combination of methods, especially in a pseudo-photograph.

A volatile image is one that exists only for a short period of time. This may be a reflection of an object by a mirror, a projection of a camera obscura, or a scene displayed on a cathode ray tube. A fixed image, also called a hard copy, is one that has been recorded on a material object, such as paper or textile, by photography or digital processes.

A mental image exists in an individual's mind: something one remembers or imagines. The subject of an image need not be real; it may be an abstract concept, such as a graph, function, or "imaginary" entity. For example, Sigmund Freud claimed to have dreamt purely in aural images of dialogues. The development of synthetic acoustic technologies and the creation of sound art have led to a consideration of the possibilities of a sound-image made up of irreducible phonic substance beyond linguistic or musicological analysis.


Still image

A still image is a single static image, as distinguished from a moving image (see below). This phrase is used in photography, visual media and the computer industry to emphasize that one is not talking about movies, or in very precise or pedantic technical writing such as a standard.

A film still is a photograph taken on the set of a movie or television program during production, used for promotional purposes.

CAPTURING AND EDITING IMAGES:

The image you see on your monitor is a bitmap stored in video memory, updated about every 1/60 of a second or faster, depending upon your monitor's scan rate. As you assemble images for your multimedia project, you may need to capture and store an image directly from your screen. The simplest way to capture what you see on your screen is to press the proper key on your keyboard.

• Both the Macintosh and Windows environments have a clipboard where text and images are temporarily stored when you cut or copy them within an application. In Windows, when you press Print Screen, a copy of your screen image goes to the clipboard. From the clipboard you can paste the captured bitmap image into an application.

• Screen capture utilities for Macintosh and Windows go a step further and are indispensable to the multimedia artist. With a keystroke they let you select an area of the screen and save the selection in various formats.
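As a concrete example, the Pillow imaging library for Python offers a programmatic equivalent of such screen-grab utilities through its ImageGrab module (available on Windows and macOS); a minimal sketch:

from PIL import ImageGrab   # pip install Pillow

# Grab the whole screen; pass bbox=(left, top, right, bottom) for a region.
image = ImageGrab.grab()
image.save("capture.png")   # a bitmap ready to import into an authoring tool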

Image editing

The usual way to manipulate a bitmap is to use an image-editing program. These are the "king of the mountain" programs that let you not only retouch an image but also do tricks like placing your face at the helm of a square-rigger.

In addition to letting you enhance and make composite images, image-editing programs also allow you to alter and distort images. A colored image of a red rose can be changed into a blue or purple rose. Morphing is another effect that can be used to manipulate still images or to create bizarre animated transformations. Morphing allows you to smoothly blend two images so that one image seems to melt into the other.


SCANNING IMAGES:

Document scanning or image scanning is the action or process of converting text and graphic paper documents, photographic film, photographic paper or other files to digital images. This analog-to-digital (A-to-D) conversion process is required for computer users to be able to view such material as electronic files.

LECTURE 14:

Color palette:

• In computer graphics, a palette is either a given, finite set of colors for the management of digital images (that is, a color palette), or a small on-screen graphical element for choosing from a limited set of choices, not necessarily colors (such as a tools palette).



• Depending on the context (an engineer's technical specification, an advertisement, a programmers' guide, an image file specification, a user's manual, etc.), the term palette and related terms such as Web palette and RGB palette can have somewhat different meanings.

The following are some of the widely used meanings for color palette in computing:



• The total number of colors that a given system is able to generate or manage; the term full palette is often encountered in this sense. For example, Highcolor displays are said to have a 16-bit RGB palette.

• The limited fixed color selection that a given display adapter can offer when its hardware registers are appropriately set (fixed palette selection). For example, the Color Graphics Adapter (CGA) can be set to show the so-called palette #1 or palette #2 in color graphics mode: two combinations of 4 fixed colors each.

• The limited selection of colors that a given system is able to display simultaneously, generally picked from a wider full palette; the terms selected colors or picked colors are also used. In this case, the color selection is always chosen by software, either by the user or by a program. For example, the standard VGA display adapter is said to provide a palette of 256 simultaneous colors from a total of 262,144 different colors.

• The hardware registers of the display subsystem into which the selected colors' values are loaded in order to show them, also referred to as the hardware palette or Color Look-Up Table (CLUT). For example, the hardware registers of the Commodore Amiga are known both as its color palette and as its CLUT, depending on the source. (A minimal look-up sketch follows this list.)

• A given color selection officially standardized by some body or corporation; default palette or system palette are also used for this meaning. For example, the well-known Web colors for use with Internet browsers, or the Microsoft Windows default palette.

• The limited color selection inside a given indexed-color image file such as a GIF, although the expressions color table or color map are also generally used.
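The look-up-table sense of palette is easy to sketch in code. In the illustrative Python fragment below (hypothetical values, not any real adapter's registers), pixels store small indices and the CLUT maps each index to a full RGB value:

palette = [             # a tiny 4-entry CLUT (hypothetical values)
    (0, 0, 0),          # index 0: black
    (255, 0, 0),        # index 1: red
    (0, 255, 0),        # index 2: green
    (255, 255, 255),    # index 3: white
]

indexed_pixels = [0, 1, 1, 2, 3, 0]    # what an indexed-color image stores
rgb_pixels = [palette[i] for i in indexed_pixels]   # what gets displayed
print(rgb_pixels)

Changing one palette entry recolors every pixel that references it, which is why palette tricks were a cheap way to animate color on indexed-color displays.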

Vector images

A vector image is made up of a series of mathematical instructions. These instructions define the lines and shapes that make up a vector image. As well as shape, size and orientation, the file stores information about the outline and fill colour of these shapes.

Features common to vector images:

• Scalability/resolution independence: images display at any size without loss of detail (see the toy illustration below).

• File sizes are usually smaller than those of raster images.
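A toy Python sketch (using a made-up shape record rather than any real vector format) of why vector images are resolution-independent: scaling just multiplies the stored coordinates, and the shape can be re-rendered crisply at any size.

shape = {
    "type": "line",
    "points": [(0, 0), (10, 5)],   # stored as coordinates, not pixels
    "stroke": "blue",
}

def scale(shape, factor):
    scaled = dict(shape)
    scaled["points"] = [(x * factor, y * factor) for x, y in shape["points"]]
    return scaled

print(scale(shape, 4))   # the same line, four times larger, still exact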