1: Basics of Multimedia Technology:
Computers, communication and entertainment
In the era of information technology, we are dealing with a free flow of information with no barriers of distance. Take the case of the internet as one of the examples. You can view and download information from across the globe within reasonable time, if you have a good connection speed.
Let us spend a bit of time to think in which form we access the information. The simplest and the most common of these is the printed text. In every web page, some text material is always present, although the volume of textual content may vary from page to page. The text materials are supported with graphics, still pictures, animations, video clippings, audio commentaries and so on. All, or at least more than one, of these media, which we can collectively call "multimedia", are inevitably present to convey the information which the web site developers want to present, for the benefit of the world community at large. All media are therefore utilized to present the information in a meaningful and integrated way.
The internet is not the only kind of information dissemination involving multiple media. Let us have a look at some other examples as well. In television, we have the involvement of two media, audio and video, which should be presented in a synchronized manner. If we present the audio ahead of the video, or the video ahead of the audio in time, the results are far from pleasant. Loss of lip synchronization is noticeable even if the audio and the video presentations differ by just 150 milliseconds or more. If the time lead or lag is in the order of seconds, one may totally lose the purpose of the presentation. Say, in some distance learning program, the teacher is explaining something which is written down on a blackboard. If the audio and the video differ in time significantly, a student will not be able to follow the lecture at all.
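The 150-millisecond figure above can be turned into a simple acceptance test. The following Python sketch is purely illustrative (the function name and interface are our own, not from any standard API): it flags when the audio/video skew exceeds the noticeable threshold.

```python
def lip_sync_ok(audio_ts_ms, video_ts_ms, threshold_ms=150):
    """Return True if the audio and video presentation times are within
    the lip-sync threshold (the text cites roughly 150 ms)."""
    return abs(audio_ts_ms - video_ts_ms) <= threshold_ms

# Audio leading video by 100 ms: still acceptable.
print(lip_sync_ok(1100, 1000))   # True
# Video lagging audio by 200 ms: loss of lip sync is noticeable.
print(lip_sync_ok(1000, 1200))   # False
```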
So, television is also a multimedia system, and now we understand one more requirement of multimedia signals: the multimedia signals must be synchronized, and if it is not possible to make them absolutely synchronized, they should at least follow a stringent specification by which the lack of synchronization can be kept within tolerable limits. Television is an example where there is only a unidirectional flow of multimedia from the transmitter to the receiver. In standard broadcast television, there is no flow of information in the reverse direction, unless you use a different device and a channel, say, by talking to the television presenter over the telephone. On the internet, of course, you have interactivity in the sense that you can navigate around the information and make selections through hyperlinks, but most of the flow of information is happening from the web server to the users.
In some applications, we require free flow of multimedia signals between two or more nodes, as is the case with video conferencing, or what we more appropriately call multimedia teleconferencing. In case of teleconferencing, the nodes, physically located in different parts of the globe, will be equipped with microphones, cameras, a computer supporting text, graphics and animations, and maybe other supporting devices if required. For example, suppose five doctors across the continents are having a live medical conference to discuss a patient's condition. The doctors should not only see and talk to each other; all of them should observe the patient at the same time and have access to the medical history, live readings and graphs of the monitoring instruments, visual renderings of the data etc. In a teleconferencing application of this nature, one must ensure that the end-to-end delays and the turnaround time are minimum. Moreover, the delays between different nodes should not differ very significantly with respect to each other, that is, the delay jitters must be small.
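Delay jitter, as described above, can be quantified as the variation of individual packet delays around their mean. A small illustrative Python sketch follows; this particular definition (maximum deviation from the mean) is our own choice among several used in practice.

```python
def delay_jitter(delays_ms):
    """Jitter as the maximum deviation of any per-packet delay
    from the mean delay, in milliseconds."""
    mean = sum(delays_ms) / len(delays_ms)
    return max(abs(d - mean) for d in delays_ms)

print(delay_jitter([40, 42, 41, 43]))  # 1.5 — small, acceptable jitter
print(delay_jitter([40, 42, 41, 90]))  # one late packet inflates the jitter
```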
Multimedia: an introduction
What is Multimedia?
A simple definition of multimedia is
‘multimedia can be any combination of text, graphics, sound, animation
and video, to effectively communicate ideas to users’
Multimedia = Multi + media
Multi = many
Media = medium or means by which information is stored,
transmitted, presented or perceived.
Other definitions of multimedia are:
“Multimedia is any combination of text, graphic art, sound, animation and video delivered to you by computer or
other electronic means.”
“Multimedia is the presentation of a (usually interactive) computer application, incorporating media elements such as text, graphics, video, animation and sound on a computer.”
Types of Multimedia Presentation
Multimedia presentations can be categorized into two types: linear multimedia and interactive multimedia.
In linear multimedia the users have very little control over the presentation. The users would only sit back and watch the presentation. The presentation normally plays from start to end, or even loops continually to present the information.
A movie is a common type of linear multimedia
In interactive multimedia, users dictate the flow of delivery. The users control the delivery of the elements and the order in which they are presented. Users have the ability to move around or follow different paths through the information presentation.
The advantage of interactive multimedia is that a complex domain of information can easily be presented. The disadvantage is that users might get lost in the massive “information jungle”.
Interactive multimedia is useful for: information archives (encyclopedias), education, training and entertainment.
Multimedia System Characteristics
The 4 major characteristics of a multimedia system are:
1. Multimedia systems must be computer controlled.
2. All multimedia components are integrated.
3. The interface to the final user may permit interactivity.
4. The information must be represented digitally.
A computer is used for producing the content of the information (e.g. by using authoring tools, image editors, sound and video editors), for storing the information (providing large and shared capacity for multimedia information), for transmitting the information through the network, and for presenting the information to the end user, making direct use of computer peripherals such as a display device (monitor) or a sound generator (speaker).
All multimedia components (audio, video, text, graphics) used in the system must be somehow integrated: every device, such as a microphone or camera, is connected to and controlled by a single computer; a single type of digital storage is used for all media types; video sequences are shown on the computer screen instead of a TV monitor.
Three levels of interactivity:
1. Interactivity strictly on information delivery: users select the time at which the presentation starts, the order, the speed and the form of the presentation itself.
2. Users can modify or enrich the content of the information, and this modification is recorded.
3. Actual processing of the user's input, where the computer generates genuine results based on the user's input.
Digitization: the process involved in transforming an analog signal into a digital signal.
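Digitization has two steps, sampling (taking the signal's value at regular instants) and quantization (rounding each sample to a finite set of levels). The toy Python sketch below illustrates both steps; the function and its parameters are our own, not a production codec.

```python
import math

def digitize(signal, sample_rate_hz, duration_s, levels=16):
    """Sample a continuous-time signal and quantize each sample to
    one of `levels` uniformly spaced values in [-1, 1]."""
    n = round(sample_rate_hz * duration_s)
    samples = [signal(i / sample_rate_hz) for i in range(n)]
    step = 2.0 / (levels - 1)               # quantization step size
    return [round(s / step) * step for s in samples]

# Digitize 10 ms of a 440 Hz tone at an 8 kHz sampling rate.
digital = digitize(lambda t: math.sin(2 * math.pi * 440 * t), 8000, 0.01)
print(len(digital))  # 80 samples
```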
Framework for multimedia systems
For multimedia communication, we have to make judicious use of all the media at our disposal. We have audio, video, graphics and text. To use all these media together, first and foremost we need a system which can acquire the separate media streams and process them together to make an integrated multimedia stream. Fig. 1.1 shows the elements involved in a multimedia transmitter.
Devices like cameras, microphones, keyboards, mouse, touch media etc. are required to feed inputs from different sources. All further processing till the transmission is done by the computer. The data acquisition from the different media is followed by data compression to eliminate the inherent redundancies present in the media streams. This is followed by inter-media synchronization by insertion of time stamps, integration of the individual media streams, and finally the transmission of the integrated multimedia stream through a communication channel, which can be wired or wireless.
The destination end should have a corresponding interface to receive the stream through the communication channel. At the receiver, a reversal of the processes involved during transmission takes place.
Fig. 1.2 shows the elements involved in a multimedia receiver.
The media extractor separates the integrated media stream into individual media streams, which undergo decompression and are then presented in a synchronized manner, according to their time stamps, in different playback units such as monitors, loudspeakers, printers/plotters, recording devices etc.
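The time-stamp-driven presentation just described amounts to merging the decompressed per-medium streams into a single playback schedule ordered by time stamp. A minimal Python sketch, with a data layout of our own choosing:

```python
import heapq

def interleave_by_timestamp(streams):
    """Merge decompressed media streams into presentation order using
    the time stamps inserted at the transmitter.

    `streams` maps a media name to a list of (timestamp_ms, unit) pairs."""
    heap = []
    for name, units in streams.items():
        for ts, unit in units:
            heapq.heappush(heap, (ts, name, unit))
    return [heapq.heappop(heap) for _ in range(len(heap))]

schedule = interleave_by_timestamp({
    "audio": [(0, "a0"), (20, "a1")],
    "video": [(0, "v0"), (40, "v1")],
})
print(schedule[0][0], schedule[-1][0])  # 0 40
```

Each playback unit would then consume its own entries from this schedule at the indicated instants.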
The subject of multimedia is studied from different perspectives in different universities and institutes. In some multimedia courses, the emphasis is on how to generate multimedia productions, the authoring tools, the software associated with these etc. In this course, we don't cover any of the multimedia production aspects at all. Rather, this course focuses on multimedia systems and the technology associated with multimedia signal processing and communication. We have already posed the technical challenges. In the coming sections, we shall see in detail how these challenges, such as compression, synchronization etc., can be overcome, how the multimedia standards have been designed to ensure effective multimedia communication, how to integrate the associated media, and how to store and retrieve multimedia sequences. Other than the introduction, this course has been divided into the following modules:
(i) Basics of Image Compression and Coding.
(ii) Orthogonal Transforms for Image Compression.
(iii) Temporal Redundancies in Video.
(iv) Real-time Video Coding.
(v) Multimedia Standards.
(vi) Continuity and Synchronization.
(vii) Audio Coding.
(viii) Indexing, Classification and Retrieval.
(ix) Multimedia Applications.
With a computer, monitor, disk drive, video disc player etc., there are so many wires and connections that the setup can resemble the intensive care ward of a hospital.
The equipment required for a multimedia project depends on the content of the project as well as its design. If you can find content such as sound effects, music, graphic arts, QuickTime or AVI movies to use in your project, you may not need extra tools for making your own.
Multimedia developers have separate equipment for digitizing sound from microphones or tape, and for scanning photos and other printed matter.
Standard parallel port
Ultra 2 SCSI
Wide Ultra 2 SCSI
Ultra 3 SCSI
Wide Ultra 3 SCSI
SCSI (Small Computer System Interface) is a set of standards for physically connecting and transferring data between computers and peripheral devices. The SCSI standards define commands, protocols, and electrical and optical interfaces. SCSI is most commonly used for hard disks and tape drives, but it can connect a wide range of other devices, including scanners and CD drives. The SCSI standard defines command sets for specific peripheral device types; the presence of "unknown" as one of these types means that in theory it can be used as an interface to almost any device, but the standard is highly pragmatic and addressed toward commercial requirements.
SCSI is an intelligent interface: it hides the complexity of the physical format. Every device attaches to the SCSI bus in a similar manner.
SCSI is a peripheral interface: up to 8 or 16 devices can be attached to a single bus. There can be any number of hosts and peripheral devices, but there should be at least one host.
SCSI is a buffered interface: it uses handshake signals between devices, and SCSI-1 and SCSI-2 have the option of parity error checking. Starting with SCSI-U160 (part of SCSI-3), all commands and data are error checked by a CRC checksum.
SCSI is a peer-to-peer interface: the SCSI protocol defines communication from host to host, host to a peripheral device, and peripheral device to peripheral device. However, most peripheral devices are exclusively targets, incapable of acting as initiators and unable to initiate SCSI transactions themselves. Target-to-initiator communications are uncommon, but possible in most SCSI applications. The 53C810 chip is an example of a PCI host interface that can act as a SCSI target.
IDE, ATA, EIDE, Ultra ATA, Ultra IDE
The ATA (Advanced Technology Attachment) standard is a standard interface that allows you to connect storage peripherals to PC computers. The ATA standard was developed on May 12, 1994 by ANSI (document X3.221-1994). Despite the official name "ATA", this standard is better known by the commercial term IDE (Integrated Drive Electronics).
The ATA standard was originally intended for connecting hard disks; however, an extension called ATAPI (ATA Packet Interface) was developed in order to be able to interface other storage peripherals (CD-ROM drives, tape drives, etc.) on an ATA interface. Since the Serial ATA standard (S-ATA) has emerged, which allows you to transfer data over a serial link, the term "Parallel ATA" (P-ATA) sometimes replaces the term "ATA" in order to differentiate between the two standards.
The ATA standard allows you to connect storage peripherals directly to the motherboard thanks to a ribbon cable, which is generally made up of 40 parallel wires and three connectors (usually a blue connector for the motherboard and a black connector and a grey connector for the two storage peripherals).
On the cable, one of the peripherals must be declared the master and the other the slave. By convention, the far connector (black) is reserved for the master peripheral and the middle connector (grey) for the slave peripheral. A mode called cable select (CS) allows you to automatically define the master and slave peripherals, as long as the computer's BIOS supports this functionality.
Universal Serial Bus (USB) connects more than computers and peripherals. It has the power to connect you with a
new world of PC experiences.
USB is your instant connection to the fun of digital photography or the limitless
creative possibilities of digital imaging. You can use USB to connect with other people through the power of PC
telephony and video conferencing
. Once you've tried USB, we think you'll grow quite attached to it!
USB (Universal Serial Bus) is a serial bus standard to connect devices to a host computer. USB was designed to allow many peripherals to be connected using a single standardized interface socket and to improve plug-and-play capabilities by allowing hot swapping, that is, by allowing devices to be connected and disconnected without rebooting the computer or turning off the device. Other convenient features include providing power to low-power-consumption devices without the need for an external power supply and allowing many devices to be used without requiring manufacturer-specific, individual device drivers to be installed.
USB is intended to replace many legacy varieties of serial and parallel ports. USB can connect peripherals such as mice, keyboards, scanners, digital cameras, printers, personal media players, and flash drives. For many of those devices USB has become the standard connection method. USB was originally designed for personal computers, but it has become commonplace on other devices such as PDAs and video game consoles, and as a bridging connection between a device and an AC adapter plugged into a wall socket for charging purposes. As of 2008, there are about 2 billion USB devices in the world.
The design of USB is standardized by the USB Implementers Forum (USB-IF), an industry standards body incorporating leading companies from the computer and electronics industries.
Presentation of the FireWire Bus (IEEE 1394)
The IEEE 1394 bus (the name of the standard to which it refers) was developed at the end of 1995 in order to provide an interconnection system that allows data to circulate at high speed and in real time. Apple gave it the commercial name FireWire, which is how it is most commonly known. Sony also gave it a commercial name, i.LINK, while Texas Instruments preferred to call it Lynx.
FireWire is a port that exists on some computers and allows you to connect peripherals (particularly digital cameras) at a very high bandwidth. There are expansion boards (generally in PCI or PC Card / PCMCIA format) that allow you to equip a computer with FireWire connectors. FireWire connectors and cables can be easily spotted thanks to their shape.
There are different FireWire connectors for each of the IEEE 1394 standards. The IEEE 1394a standard specifies two connectors: standard 1394a connectors and mini (4-pin) connectors, the latter well known because they are used on Digital Video (DV) cameras.
The IEEE 1394b standard specifies two types of connectors, designed so that 1394b Beta cables can be plugged into Beta and Bilingual connectors, but 1394b Bilingual cables can only be plugged into Bilingual connectors:
1394b Beta connectors
1394b Bilingual connectors
How the FireWire Bus Works
The IEEE 1394 bus has about the same structure as the USB bus, except that it uses a cable made up of six wires (2 pairs for the data and the clock, and 2 wires for the power supply) that allow it to reach a bandwidth of 800 Mb/s (soon it should be able to reach 1.6 Gb/s, or even 3.2 Gb/s down the road). The two wires for the clock are the major difference between the USB bus and the IEEE 1394 bus; they give it the possibility to operate in two transfer modes:
Asynchronous transfer mode: this mode is based on a transmission of packets at variable time intervals. The host sends a data packet and waits to receive a receipt notification from the peripheral. If the host receives a receipt notification, it sends the next data packet. Otherwise, the first packet is resent after a certain period of time.
Isochronous transfer mode: this mode allows data packets of specific sizes to be sent at regular intervals. A node called the cycle master is in charge of sending a synchronisation packet (called a Cycle Start packet) every 125 microseconds. This way, no receipt notification is necessary, which guarantees a set bandwidth. Moreover, given that no receipt notification is necessary, the method of addressing a peripheral is simplified and the saved bandwidth allows you to gain throughput.
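The 125-microsecond cycle quoted above fixes the isochronous timing arithmetic. The figures below are for illustration only; the per-cycle allocation of 1,000 bytes is an assumed example, not a value from the standard.

```python
cycle_us = 125                                # Cycle Start packet interval
cycles_per_second = 1_000_000 // cycle_us
print(cycles_per_second)                      # 8000 isochronous cycles per second

# A stream granted, say, 1,000 bytes per cycle gets a guaranteed bandwidth of:
print(1_000 * cycles_per_second * 8 / 1e6, "Mb/s")  # 64.0 Mb/s
```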
Another innovation of the IEEE 1394 standard: bridges (systems that allow you to link buses to other buses) can be used. Peripheral addresses are set with a node (i.e. peripheral) identifier encoded on 16 bits. This identifier is divided into two fields: a 10-bit field that identifies the bridge and a 6-bit field that specifies the node. Therefore, it is possible to connect 1,023 bridges (i.e. 2^10 - 1), on each of which there can be 63 nodes (i.e. 2^6 - 1), which means it is possible to address 65,535 peripherals! The IEEE 1394 standard also allows hot swapping. While the USB bus is intended for peripherals that do not require a lot of resources (e.g. a mouse or a keyboard), the IEEE 1394 bandwidth is larger and is intended to be used for new multimedia applications (video acquisition, etc.).
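The 16-bit identifier arithmetic above can be checked directly. The function name and packing below are our own illustration of the 10-bit/6-bit split described in the text:

```python
BRIDGE_BITS, NODE_BITS = 10, 6

def node_address(bridge_id, node_id):
    """Pack the 10-bit bridge (bus) field and the 6-bit node field
    into the 16-bit identifier described above."""
    assert 0 <= bridge_id < 2 ** BRIDGE_BITS
    assert 0 <= node_id < 2 ** NODE_BITS
    return (bridge_id << NODE_BITS) | node_id

print(2 ** BRIDGE_BITS - 1)     # 1023 addressable bridges
print(2 ** NODE_BITS - 1)       # 63 nodes per bridge
print(node_address(1023, 63))   # 65535, the largest identifier
```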
MEMORY AND STORAGE DEVICES:
The memory requirement to estimate for a multimedia project is the space required on a floppy disk, hard disk or CD-ROM, not the random access memory. While your computer is running, you must have a sense of the memory required by the project content: colour images, text, and the programming code that glues it all together.
If you are making multimedia, you will also need memory for storing and archiving the working files used during production, such as audio and video clips.
Random access memory (usually known by its acronym, RAM) is a form of computer data storage. Today it takes the form of integrated circuits that allow the stored data to be accessed in any order (i.e., at random). The word random thus refers to the fact that any piece of data can be returned in a constant time, regardless of its physical location and whether or not it is related to the previous piece of data.
Read-only memory (usually known by its acronym, ROM) is a class of storage media used in computers and other electronic devices. Because data stored in ROM cannot be modified (at least not very quickly or easily), it is mainly used to distribute firmware. In its strictest sense, ROM refers only to mask ROM (the oldest type of solid-state ROM), which is fabricated with the desired data permanently stored in it, and thus can never be modified. However, more modern types such as EPROM and flash EEPROM can be erased and re-programmed multiple times; they are still described as "read-only memory" (ROM) because the reprogramming process is generally infrequent, comparatively slow, and often does not permit random-access writes to individual memory locations. Despite the simplicity of mask ROM, economies of scale often make reprogrammable technologies more flexible and inexpensive, so that mask ROM is rarely used in new products as of 2007.
Zip, Jaz, SyQuest and optical storage devices:
For years the SyQuest 44 MB removable cartridges were the most widely used portable medium among multimedia developers. Zip drives, with their likewise inexpensive 100 MB, 250 MB and 750 MB cartridges built on floppy-disk technology, significantly eroded SyQuest's market share for removable media. Iomega's Jaz cartridges, built on hard-drive technology, provide one or two gigabytes of removable storage media and have transfer rates fast enough for multimedia development.
Other storage devices are:
Digital versatile disc (DVD)
Flash or thumb drive
Magnetic card encoders and readers
Presentation of the audio and visual components of your multimedia project requires hardware that may not be attached to the computer itself, such as speakers, amplifiers and monitors. There is no greater test of the benefit of good output hardware than to feed the audio output of your computer into an external amplifier system. Some of the output devices are:
Amplifier and speaker
Portable media player
Communication among workgroup members and with the client is essential for the efficient and assured completion of a project. When you need it immediately, an internet connection is required. If you and your client are both connected via the internet, a combination of communication by e-mail and FTP (File Transfer Protocol) may be the most cost-effective and efficient solution for creative developers and managers. Various communication devices are:
ISDN and DSL
CD AUDIO, CD-ROM, CD-I
CD Audio (or CD-DA)
The first widely used CDs were music CDs that appeared in the early 1980s. CD Audio is the format for storing recorded music in digital form, as on the CDs that are commonly found in music stores. The Red Book standards, created by Sony and Philips, indicate specifications such as the size of the pits and lands, how the audio is organized and where it is located on the CD, as well as how errors are corrected. CD Audio discs can hold up to about 75 minutes of sound. To provide high quality, the music is sampled at 44.1 kHz, 16-bit stereo. Because of the high-quality sound of audio CDs, they quickly became very popular. Other CD formats evolved from the Red Book standards.
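The Red Book figures quoted above (44.1 kHz sampling, 16-bit stereo, roughly 75 minutes) fix the raw storage requirement of an audio disc. The arithmetic, for illustration:

```python
# Red Book parameters quoted in the text: 44.1 kHz sampling, 16-bit stereo.
sample_rate = 44_100          # samples per second, per channel
bytes_per_sample = 2          # 16 bits
channels = 2                  # stereo
seconds = 75 * 60             # ~75 minutes of audio

total_bytes = sample_rate * bytes_per_sample * channels * seconds
print(total_bytes / 1_000_000)  # 793.8, i.e. roughly 794 MB of raw audio
```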
Although the Red Book standards were excellent for audio, they were useless for data, text and graphics. The Yellow Book standards built upon the Red Book, adding specifications for a track to accommodate data, thus establishing a format for storing data, including video and audio, in digital form on a compact disc. CD-ROM also provided a better error-checking scheme, which is important for data. One drawback of the Yellow Book standards is that they allowed various manufacturers to determine their own methods of organizing and accessing data. This led to incompatibilities across computer platforms.
For the first few years of its existence, the Compact Disc was a medium used purely for audio. However, in 1985 the CD-ROM standard was established by Sony and Philips, which defined a non-volatile optical computer data storage medium using the same physical format as audio compact discs, readable by a computer with a CD-ROM drive.
Compact Disc Interactive (CD-i) is the name of an interactive multimedia CD player developed and marketed by Royal Philips Electronics N.V. CD-i also refers to the multimedia standard utilized by the CD-i console, also known as the Green Book, co-developed by Philips and Sony in 1986 (not to be confused with MMCD, the pre-DVD format also co-developed by Philips and Sony). The first Philips CD-i player, released in 1991 and initially priced around US$700, is capable of playing interactive CD-i discs, Audio CDs, CD+G (CD+Graphics), Karaoke CDs, and Video CDs (VCDs), though the latter requires an optional "Digital Video Card" to play.
Developed by Philips in 1986, the specifications for CD-i were published in the Green Book. CD-i is a platform-specific format; it requires a CD-i player, with a proprietary operating system, attached to a television set. Because of the need for specific CD-i hardware, this format has had only marginal success in the market. One of the benefits of CD-i is its ability to synchronize sound and pictures on a single track of the disc.
PRESENTATION DEVICE AND USER INTERFACE IN MULTIMEDIA
LANs and multimedia, internet, World Wide Web & multimedia distribution
ATM & ADSL
Telephone networks dedicate a set of resources that forms a complete path from end to end for the duration of the telephone connection. The dedicated path guarantees that the voice data can be delivered from one end to the other end in a smooth and timely way, but the resources remain dedicated even when there is no talking. In contrast, digital packet networks, for communication between computers, use time-shared resources (links, switches, and routers) to send packets through the network. The use of shared resources allows computer networks to be used at high utilization, because even small periods of inactivity can be filled with data from a different user. The high utilization and shared resources create a problem with respect to the timely delivery of video and audio over data networks. Current research centers around reserving resources for time-sensitive data, which will make digital data networks more like telephone voice networks.
The Internet and intranets, which use the TCP protocol suite, are the most important delivery vehicles for multimedia objects. TCP provides communication sessions between applications on hosts, sending streams of bytes for which delivery is always guaranteed by means of acknowledgments and retransmission. User Datagram Protocol (UDP) is a ``best-effort'' delivery protocol (some messages may be lost) that sends individual messages between hosts. Internet technology is used on single LANs and on connected LANs within an organization, which are sometimes called intranets, and on ``backbones'' that link different organizations into one single global network. Internet technology allows LANs and backbones of totally different technologies to be joined together into a single, seamless network.
Part of this is achieved through communications processors called routers. Routers can be accessed from two or more networks, passing data back and forth as needed. The routers communicate information on the current network topology among themselves in order to build routing tables within each router. These tables are consulted each time a message arrives, in order to send it to the next appropriate router, eventually resulting in delivery.
Token ring is a hardware architecture for passing packets between stations on a LAN. Since a single circular communication path is used for all messages, there must be a way to decide which station is allowed to send at any time. In token ring, a ``token,'' which gives a station the right to transmit data, is passed from station to station. The data rate of a token ring network is 16 Mb/s.
Ethernet LANs use a common wire to transmit data from station to station. Mediation between transmitting stations is done by having stations listen before sending, so that they will not interfere with each other. However, two stations could begin to send at the same time and collide, or one station could start to send significantly later than another but not know it because of propagation delay. In order to detect these situations, stations continue to listen while they transmit and determine whether their message was possibly garbled by a collision. If there is a collision, a retransmission takes place (by both stations) a short but random time later. Ethernet LANs can transmit data at 10 Mb/s. However, when multiple stations are competing for the LAN, the throughput may be much lower because of collisions and retransmissions.
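The ``short but random time'' used after a collision is, in classic Ethernet, chosen by truncated binary exponential backoff. A minimal sketch of that scheme follows; the function name and the slot-time abstraction are our own.

```python
import random

def backoff_slots(attempt, max_exponent=10):
    """Pick a random retransmission delay, in slot times, after the
    given collision attempt (truncated binary exponential backoff:
    the waiting window doubles with every collision, up to a cap)."""
    k = min(attempt, max_exponent)
    return random.randrange(2 ** k)

random.seed(1)
# After the 3rd collision in a row, a station waits 0..7 slot times.
print(backoff_slots(3))
```

Because both colliding stations draw independent random delays, the chance that they collide again shrinks with each attempt.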
Switches may be used at a hub to create many small LANs where one large one existed before. This reduces contention and permits higher throughput. In addition, Ethernet is being extended to 100 Mb/s throughput. The 100 Mb/s version is much more appropriate for multimedia than regular Ethernet, because existing Ethernet LANs can support only about six MPEG video streams, even when nothing else is being sent over the LAN.
Asynchronous Transfer Mode (ATM) is a new packet-switching network protocol designed for mixing voice, video, and data within the same network. Voice is digitized in telephone networks at 64 Kb/s (kilobits per second), which must be delivered with minimal delay, so very small packet sizes are used. On the other hand, video data and other business data usually benefit from quite large block sizes. An ATM packet consists of 48 octets (the term used in communications for eight bits, called a byte in data processing) of data preceded by five octets of control information.
An ATM network consists of a set of communication links interconnected by switches. Communication is preceded by a setup stage in which a path through the network is determined to establish a circuit. Once a circuit is established, 53-octet packets may be streamed from point to point.
ATM networks can be used to implement parts of the Internet by simulating links between routers in separate intranets. This means that the ``direct'' intranet connections are actually implemented by means of shared ATM links. ATM, both between LANs and between servers and workstations on a LAN, will support data rates that will allow many users to make use of motion video on a LAN.
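The 5-octet-header, 48-octet-payload layout described above can be sketched as a toy cell framer. This is illustrative only: a real ATM header carries routing and control fields, which are simply zeroed here.

```python
HEADER_OCTETS, PAYLOAD_OCTETS = 5, 48   # figures from the text

def make_cells(data: bytes):
    """Split a byte stream into 53-octet ATM-style cells: a 5-octet
    header (zeroed for illustration) plus 48 octets of payload,
    padding the final cell with zero bytes."""
    cells = []
    for i in range(0, len(data), PAYLOAD_OCTETS):
        payload = data[i:i + PAYLOAD_OCTETS].ljust(PAYLOAD_OCTETS, b"\x00")
        cells.append(bytes(HEADER_OCTETS) + payload)
    return cells

cells = make_cells(b"x" * 100)
print(len(cells), len(cells[0]))  # 3 53  (three cells of 53 octets each)
```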
Modulators/demodulators, or modems, are used to send digital data over analog channels by means of a carrier signal (sine wave) modulated by changing the frequency, phase, amplitude, or some combination of them in order to encode the digital data. (The result is still an analog signal.) Modulation is performed at the transmitting end and demodulation at the receiving end. The most common use for modems in a computer environment is to connect two computers over an analog telephone line. Because of the quality of telephone lines, the data rate is commonly limited to 28.8 Kb/s. For transmission of customer analog signals between telephone company central offices, the signals are sampled and converted to ``digital form'' (actually, still an analog signal) for transmission between offices. Since the customer voice signal is represented by a stream of digital samples at a fixed rate (64 Kb/s), the data rate that can be achieved over analog telephone lines is limited.
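The fixed 64 Kb/s rate quoted above follows from the telephone network's PCM parameters: voice is sampled 8,000 times per second at 8 bits per sample. The arithmetic, for illustration:

```python
# Telephone-network PCM: 8,000 voice samples per second, 8 bits each,
# giving the fixed 64 Kb/s rate cited in the text.
samples_per_second = 8_000
bits_per_sample = 8
print(samples_per_second * bits_per_sample)  # 64000 bits per second
```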
Integrated Services Digital Network (ISDN) extends the telephone company digital network by sending the digital form of the signal all the way to the customer. ISDN is organized around 64 Kb/s transmission speeds, the speed used for digitized voice. An ISDN line was originally intended to simultaneously transmit a digitized voice signal and a 64 Kb/s data stream on a single wire. In practice, two channels are used to produce a 128 Kb/s line, which is faster than the 28.8 Kb/s speed of typical computer modems but not adequate to handle video.
Asymmetric Digital Subscriber Lines (ADSL) extend telephone company twisted-pair wiring to yet greater speeds. The lines are asymmetric, with an outbound data rate of 1.5 Mb/s and an inbound rate of 64 Kb/s. This is suitable for video on demand, home shopping, games, and interactive information systems (collectively known as interactive television), because 1.5 Mb/s is fast enough for compressed digital video, while a much slower ``back channel'' is needed for control. ADSL uses very high-speed modems at each end to achieve these speeds over twisted pairs.
ADSL is a critical technology for the Regional Bell Operating Companies (RBOCs), because it allows them to use the existing twisted-pair infrastructure to deliver high data rates to the home.
Cable television systems provide analog broadcast signals on a coaxial cable, instead of through the air, with the ability to use additional frequencies and thus provide a greater number of channels than over-the-air broadcast. The systems are arranged like a branching tree, with ``splitters'' at the branch points. They also require amplifiers for the outbound signals, to make up for signal loss in the cable. Most modern cable systems use fiber-optic cables for the trunk and major branches and use coaxial cable for only the final loop, which services one or two thousand homes. The root of the tree, where the signals originate, is called the head end.
Cable modems are used to modulate digital data, at high data rates, into an analog 6-MHz channel. These modems can transfer 20 to 40 Mb/s in a frequency bandwidth that would have been occupied by a single analog TV signal, allowing multiple compressed digital TV channels to be multiplexed over a single analog channel. The high data rate may also be used to download programs or World Wide Web content or to play compressed video. Cable modems are critical to cable operators, because they enable them to compete with the RBOCs using ADSL.
The STB (set-top box) is an appliance that connects a TV set to a cable system, terrestrial broadcast antenna, or satellite broadcast antenna. The STB in most homes has two functions. First, in response to a viewer's request with the remote control unit, it shifts the frequency of the selected channel to either channel 3 or 4, for input to the TV set. Second, it is used to restrict access and block channels that are not paid for. Addressable STBs respond to orders that come from the head end to block and unblock channels.
Digital multimedia systems that are shared by multiple clients can deliver multimedia data to only a limited number of clients. Admission control is the function which ensures that once delivery starts, it will be able to continue with the required quality of service (the ability to transfer isochronous data on time) until completion. The maximum number of clients depends upon the particular content being used and other characteristics of the system.
Because it is so easy to transmit perfect copies of digital objects, many owners of digital content wish to control unauthorized copying. This is often to ensure that proper royalties have been paid. Digital watermarking consists of making small changes in the digital data that can later be used to determine the origin of an unauthorized copy. These small changes are intended to be invisible when the content is viewed. This is very similar to the ``errors'' that mapmakers introduce in order to prove that suspect maps are copies of their maps. In other circumstances, a visible watermark is applied in order to make commercial use of the image impractical.
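The idea of making small, invisible changes that can later be detected can be sketched with a toy least-significant-bit (LSB) scheme. This is only an illustrative sketch, not a real or robust watermarking algorithm; the function names and pixel values are invented for the example:

```python
# Toy sketch of digital watermarking: flip the least-significant bit (LSB)
# of selected pixel values. The change is visually imperceptible, but the
# owner can later test those positions to detect the mark.

def embed(pixels, positions):
    """Set the LSB to 1 at the chosen pixel positions."""
    marked = list(pixels)
    for p in positions:
        marked[p] |= 1
    return marked

def carries_mark(pixels, positions):
    """True if every chosen position has its LSB set."""
    return all(pixels[p] & 1 for p in positions)

image = [200, 57, 124, 88, 240, 16]   # illustrative pixel values
mark_at = [0, 2, 4]                   # secret positions known to the owner
marked = embed(image, mark_at)
print(marked)                         # [201, 57, 125, 88, 241, 16]
print(carries_mark(marked, mark_at))  # True
print(carries_mark(image, mark_at))   # False: unmarked original
```

A pixel value changing from 200 to 201 is invisible to the eye, which is the property the passage above relies on; real schemes spread the mark statistically so it survives recompression and cropping.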
In this section we show how the multimedia technologies are organized in order to create multimedia systems, which in general consist of suitable organizations of clients, application servers, and storage servers that communicate through a network. Some multimedia systems are confined to a stand-alone computer system with content stored on hard disks or CD-ROMs. Distributed multimedia systems communicate through a network and use many shared resources, making quality of service very difficult to achieve and resource management very complex.
Stand-alone multimedia systems use CD-ROM disks and/or hard disks to hold multimedia objects and the scripting metadata to orchestrate the playout. CD-ROM disks are inexpensive to produce and hold a large amount of digital data; however, the content is static: new content requires creation and physical distribution of new disks for all systems. Decompression is now done by either a special decompression card or a software application that runs on the processor. The technology trend is toward software decompression.
Video over LANs
Stand-alone multimedia systems can be converted to networked multimedia systems by using client/server remote file system technology to enable the multimedia application to access data stored on a server as if the data were on a local storage medium. This is very convenient, because the stand-alone multimedia application does not have to be changed. LAN throughput is the major challenge in these systems. Ethernet LANs can support less than 10 Mb/s, and token rings 16 Mb/s. This translates into six to ten 1.5-Mb/s MPEG video streams. Admission control is a critical problem. The OS/2 LAN server is one of the few products that support admission control. It uses priorities with token-ring messaging to differentiate between multimedia traffic and lower-priority data traffic. It also limits the multimedia streams to be sure that they do not sum to more than the capacity of the LAN. Without some type of resource reservation and admission control, the only way to give some assurance of continuous video is to operate with small LANs and make sure that the server is on the same LAN as the client. In the future, ATM and fast Ethernet will provide capacity more appropriate to multimedia.
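The capacity limit described above, where admitted streams must not sum to more than the LAN's capacity, reduces to a simple check. The function below is an illustrative sketch using the figures from the text (1.5-Mb/s MPEG streams on a 10-Mb/s Ethernet), not the actual OS/2 LAN Server logic:

```python
# Sketch of admission control for a shared link: a new stream is admitted
# only if the total committed bandwidth, including the new request, stays
# within the link capacity. Names and figures are illustrative.

def admit(active_streams_mbps, new_stream_mbps, capacity_mbps):
    """Return True if the new stream can be served without oversubscribing."""
    committed = sum(active_streams_mbps) + new_stream_mbps
    return committed <= capacity_mbps

# A 10 Mb/s Ethernet already carrying five 1.5 Mb/s MPEG streams:
active = [1.5] * 5
print(admit(active, 1.5, 10.0))           # True: sixth stream fits (9.0 <= 10.0)
print(admit(active + [1.5], 1.5, 10.0))   # False: seventh would exceed capacity
```

This is why the text arrives at ``six to ten'' streams: six fit comfortably on a 10-Mb/s Ethernet, and ten on a 16-Mb/s token ring.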
Direct Broadcast Satellite
Direct Broadcast Satellite (DBS), which broadcasts up to 80 channels from a satellite at high power, has arrived as a major force in the delivery of broadcast video. The high power allows small (18-inch) dishes with a line of sight to the satellite to capture the signal. MPEG compression is used to get the maximum number of channels out of the available bandwidth. The RCA/Hughes service employs two satellites and a backup to provide 160 channels. This large number of channels allows many premium and special-purpose channels as well as the usual free channels. Many more pay-per-view channels can be provided than in conventional cable systems. This allows enhanced pay-per-view, in which the same movie is shown with staggered starting times of half an hour or an hour.
DBS requires a set-top box with much more function than a normal cable STB. The STB contains a demodulator to reconstruct the digital data from the analog satellite broadcast. The MPEG compressed form is decompressed, and a standard TV signal is produced for input to the TV set. The STB uses a telephone modem to periodically verify that the premium channels are still authorized and to report on use of the pay-per-view channels so that billing can be done.
Interactive TV and video to the home
Interactive TV and video to the home allow viewers to select, interact with, and control video play on a TV set in real time. The user might be viewing a conventional movie, doing home shopping, or engaging in a network game. The compressed video flowing to the home requires high bandwidth, from 1.5 to 6 Mb/s, while the return path, used for selection and control, requires far lower bandwidth.
The STB used for interactive TV is similar to that used for DBS. The demodulation function depends upon the network used to deliver the digital data. A microprocessor with memory for limited buffering as well as an MPEG decompression chip is needed. The video is converted to a standard TV signal for input to the TV set. The STB has a remote control unit, which allows the viewer to make choices from a distance. Some means are needed to allow the STB to relay viewer commands back to the server, depending upon the network being used.
Cable systems appear to be broadcast systems, but they can actually be used to deliver different content to each home. Cable systems often use fiber-optic cables to send the video to converters that place it on local loops of coaxial cable. If a fiber cable is dedicated to each final loop, which services 500 to 1500 homes, there will be enough bandwidth to deliver an individual signal to many of those houses. The cable can also provide the reverse path to the cable head end; Ethernet-like protocols can be used to share the same channel with the other STBs in the local loop. This topology is attractive to cable companies because it uses the existing cable plant. If the appropriate amplifiers are not present in the cable system for the back channel, a telephone modem can be used to provide the back channel.
As mentioned above, the asymmetric data rates of ADSL are tailored for interactive TV. The use of standard twisted-pair wire, which has been brought to virtually every house, is attractive to the telephone industry. However, twisted pair is a noisier medium than coaxial cable, so more expensive modems are needed, and distances are limited. ADSL can be used at higher data rates if the distance is further reduced.
Interactive TV architectures are typically three-tier, in which the client and server tiers interact through an application server. (In three-tier systems, the tier-1 systems are clients, the tier-2 systems are used for application programs, and the tier-3 systems are data servers.) The application tier is used to separate the logic of looking up material in indexes, maintaining the shopping state of a viewer, interacting with credit card servers, and other similar functions from the simple function of delivering multimedia objects.
The key research questions about interactive TV and video on demand are not computer science questions at all. Rather, they are the human-factors issues concerning ease of use of the on-screen interface and, more significantly, the marketing questions regarding what home viewers will find valuable and compelling.
Internet over cable systems
World Wide Web browsing allows users to see a rich text, video, sound, and graphics interface and allows them to access other information by clicking on text or graphics. Web pages are written in HyperText Markup Language (HTML) and use an application communications protocol called HTTP. The user responses, which select the next page or provide a small amount of text information, are normally quite short. On the other hand, the graphics and pictures require many times that number of bytes to be transmitted to the client. This means that distribution systems that offer asymmetric data rates are appropriate.
Cable TV systems can be used to provide asymmetric Internet access for home computers in ways that are very similar to interactive TV over cable. The data being sent to the client is digitized and broadcast over a prearranged channel over all or part of the cable system. A cable modem at the client end tunes to the right channel and demodulates the information being broadcast. It must also filter the information destined for the particular station from the information being sent to other clients. The low-bandwidth reverse channel is the same low-bandwidth channel that is used in interactive TV. As with interactive TV, a telephone modem might be used for the reverse channel. The cable head end is then attached to the Internet using a router. The head end is also likely to offer other services that Internet Service Providers sell, such as permanent mailboxes. This asymmetric connection would not be appropriate for a Web server or some other type of commerce server on the Internet, because servers transmit too much data for the low-speed return path. The cable modem provides the physical link for the TCP/IP stack in the client computer. The client software treats this environment just like a LAN connected to the Internet.
Video servers on a LAN
LAN-based multimedia systems go beyond the simple, client/server, remote-file-system type of video server, to advanced systems that offer a three-tier architecture with clients, application servers, and multimedia servers. The application servers provide applications that interact with the client and select the video to be shown. On a company intranet, LAN-based multimedia could be used for just-in-time education, on-line documentation of procedures, or video messaging. On the Internet, it could be used for a video product manual, interactive video product support, or Internet commerce. The application server chooses the video to be shown and causes it to be sent to the client.
There are three different ways that the application server can cause playout of the video: by giving the address of the video server and the name of the content to the client, which would then fetch it from the video server; by communicating with the video server and having it send the data to the client; or by communicating with both to set up the relationship.
The transmission of data to the client may be in push or pull mode. In push mode, the server sends data to the client at the appropriate rate. The network must have quality-of-service guarantees to ensure that the data gets to the client on time. In pull mode, the client requests data from the server, and thus paces the transmission.
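Pull-mode pacing can be sketched as a request schedule: the client asks for one chunk per frame interval, so it, not the server, sets the rate. The function name, chunk size, and frame rate below are illustrative assumptions, not part of any real protocol:

```python
# Sketch of pull-mode delivery: the client issues one request per frame
# interval, pacing the transfer itself. Chunk size and frame rate are
# illustrative assumptions.

def pull_schedule(duration_s, fps, chunk_bytes):
    """Return (request_time_s, bytes_requested) pairs for a whole clip."""
    interval = 1.0 / fps
    return [(round(i * interval, 3), chunk_bytes)
            for i in range(int(duration_s * fps))]

# One second of 25 fps video, fetched one frame-sized chunk at a time:
schedule = pull_schedule(1, 25, 6000)
print(len(schedule))   # 25 requests
print(schedule[0])     # (0.0, 6000)
print(schedule[1])     # (0.04, 6000)
```

In push mode the server would emit the same chunks on its own timer instead, which is why the network, rather than the client, must then guarantee timely arrival.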
The current protocols for Internet use are TCP and UDP. TCP sets up sessions, and the server can push the data to the client. However, the ``moving window'' algorithm of TCP, which prevents client buffer overrun, creates acknowledgments that pace the sending of data, thus making it in effect a pull protocol. Another issue in Internet architecture is the role of firewalls, which are used at the gateway between an intranet and the Internet to keep potentially dangerous or malicious Internet traffic from getting onto the intranet. UDP packets are normally never allowed in. TCP sessions are allowed if they are created from the inside to the outside. A disadvantage of TCP for isochronous data is that error detection and retransmission are automatic and required, whereas it is preferable to discard garbled video data and just continue.
Resource reservation is just beginning to be incorporated on the Internet and intranets. Video will be considered to have higher priority, and the network will have to ensure that there is a limit to the amount of high-priority traffic that can be admitted. All of the routers on the path from the server to the client will have to cooperate in the reservation and the use of priorities.
Video conferencing, which will be used on both intranets and the Internet, uses multiple data types and serves multiple clients in the same conference. Video cameras can be mounted near a PC display to capture the user's picture. In addition to the live video, these systems include shared white boards and show previously prepared visuals. Some form of mediation is needed to determine which participant is in control. Since the type of multimedia data needed for conferencing requires much lower data rates than most other types of video, low-rate video, using approximately eight frames per second and requiring tens of kilobits per second, will be used with small window sizes for the ``talking heads'' and most of the other visuals. Scalability of a video conferencing system is important, because if all participants send to all other participants, the traffic goes up as the square of the number of participants. The traffic can be made linear by having all transmissions go through a common server. If the network has a multicast facility, the server can use that to distribute to the participants.
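The quadratic-versus-linear growth described above is easy to verify with a stream count. In a full mesh every participant sends to every other participant; with a common server each participant sends one stream up and receives one stream down:

```python
# Stream counts for an n-way conference: full mesh vs. common server.

def mesh_streams(n):
    """Every participant sends to every other one: grows as n squared."""
    return n * (n - 1)

def server_streams(n):
    """One stream up and one down per participant: grows linearly."""
    return 2 * n

for n in (4, 8, 16):
    print(n, mesh_streams(n), server_streams(n))
# 4 participants: 12 vs 8 streams
# 8 participants: 56 vs 16 streams
# 16 participants: 240 vs 32 streams
```

Doubling the conference size quadruples mesh traffic but only doubles server traffic, which is why large conferences route through a server or use multicast.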
MULTIMEDIA AUTHORING TOOL:
A multimedia authoring tool is a program that helps you write multimedia applications. A multimedia authoring tool enables you to create a final application merely by linking together objects, such as a paragraph of text, an illustration, or a song. Authoring tools are used exclusively for applications that present a mixture of textual, graphical, and audio-visual material. With multimedia authoring software you can make video productions, including CDs and DVDs, and design interactivity and user interfaces, animations, screen savers, games, presentations, interactive training, and simulations.
Types of authoring tools:
There are basically three types of authoring tools. These are the following:
_ Card- or page-based authoring tools
_ Icon-based authoring tools
_ Time-based authoring tools
Card- Or Page-Based Authoring Tools
In these authoring systems, elements are organized as pages of a book or a stack of cards. The authoring system lets you link these pages or cards into an organized sequence, and they also allow you to play sound elements and launch animations and digital videos. Card- and page-based authoring systems are object-oriented: the objects are the buttons, graphics, and so on. Each object may contain a programming script that is activated when an event related to that object occurs.
EX: Visual Basic.
Icon-Based Authoring Tools
These tools provide a visual programming approach to organizing and presenting multimedia. First you build the flowchart of events, tasks, and decisions by using appropriate icons from a library. These icons can include menu choices, graphic images, and sounds. When the flowchart is built, you can add your content: text, graphics, animations, sounds, and video movies.
EX: Authorware Professional
Time-Based Authoring Tools
Time-based authoring tools are the most common of multimedia authoring tools. In these authoring systems, elements are organized along a timeline. They are best to use when you have a message with a beginning and an end. Sequentially organized graphic frames are played back at a speed that you can set. Other elements (such as audio events) are triggered at a given time or location in the sequence of events.
EX: Animation Works Interactive
Elemental tools help us work with the important basic elements of your project: its graphics, images, sound, text, and moving pictures.
Elemental tools include:
Painting And Drawing Tools
CAD And 3-D Drawing Tools
Image Editing Tools
Sound Editing Programs
Tools For Creating Animations And Digital Movies
Painting And Drawing Tools:
Painting and drawing tools are the most important items in your toolkit, because the impact of the graphics in your project will likely have the greatest influence on the end user. Painting software is dedicated to producing excellent bitmapped images, while drawing software is dedicated to producing line art that is easily printed to paper. Drawing packages include powerful and expensive computer-aided design (CAD) features.
Ex: DeskDraw, DeskPaint, Designer
CAD And 3-D Drawing Tools
Computer-aided design (CAD) software is used by architects, engineers, drafters, and others to create precision drawings or technical illustrations. It can be used to create two-dimensional (2-D) drawings or three-dimensional (3-D) models. The CAD images can spin about in space, with lighting conditions exactly simulated and shadows properly drawn. With CAD software you can stand in front of your work and view it from any angle, making judgments about its design.
Image Editing Tools
Image editing applications are specialized and powerful tools for enhancing and retouching existing bitmapped images. These programs are also indispensable for rendering images used in multimedia presentations. Modern versions of these programs also provide many of the features and tools of painting and drawing programs, and can be used to create images from scratch as well as to edit images digitized from scanners, digital cameras, or artwork files created by painting or drawing packages.
Often you will have printed matter and other text to incorporate into your project, but no electronic text file. With Optical Character Recognition (OCR) software, a flat-bed scanner, and your computer, you can save many hours of typing printed words and get the job done faster and more accurately.
Sound Editing Programs
Sound editing tools for both digitized and MIDI sound let you see music as well as hear it. By drawing a representation of the sound in a waveform, you can cut, copy, paste, and edit segments of the sound with great precision, and make your own sound effects. Using editing tools to make your own MIDI files requires knowing about keys, notations, and instruments, and you will need a MIDI synthesizer or device connected to the computer.
Ex: SoundEdit Pro
Tools For Creating Animations And Digital Movies
Animations and digital movies are sequences of bitmapped graphic scenes (frames), rapidly played back. But animations can also be made within an authoring system by rapidly changing the location of objects to generate an appearance of motion. Movie-making tools let you edit and assemble video clips captured from cameras, animations, scanned images, and other digitized movie segments. The completed clip, often with added transitions and visual effects, can be played back.
Ex: Animator Pro and SuperVideo Windows
No multimedia toolkit is complete without a few indispensable utilities to perform some odd, but repeated, tasks. For example, a screen-capture utility is essential: because bitmap images are so common in multimedia, it is important to have a tool for grabbing all or part of the screen display so you can import it into your authoring system or copy it into an image-editing application.
One of the most important techniques in making graphics and text easy to read and pleasing to the eye on screen is anti-aliasing. Anti-aliasing is a cheaty way of getting round the low 72-dpi resolution of the computer monitor and making objects appear as smooth as if they'd just stepped out of a 1200-dpi printer (nearly).
Take a look at these images. The letter a on the left is un-anti-aliased and looks coarse compared with the letter on the right.
If we zoom in we can see better what's happening. Look at how the un-anti-aliased example below left breaks up curves into steps and jagged outcrops. This is what gives the letter its coarse appearance. The example on the right is the same letter, same point size and everything, but with anti-aliasing turned on in Photoshop's text tool. Notice how the program has substituted shades of grey around the lines which would otherwise be broken across a pixel.
Anti-aliasing is more than just making something slightly fuzzy so that you can't see the jagged edges: it's a way of fooling the eye into seeing straight lines and smooth curves where there are none.
To see how anti-aliasing works, let's take a diagonal line drawn across a set of pixels. In the example left the pixels are marked by the grid: real pixels don't look like that of course, but the principle is the same. Pixels around an un-anti-aliased line can only be part of the line or not part of it, so the computer draws the line as a jagged set of pixels roughly approximating the course of our original nice smooth line. (Trivia fact: anti-aliasing was invented at MIT's Media Lab. So glad they do do something useful there....)
When the computer anti-aliases the line it works out how much of each in-between pixel would be covered by the diagonal line and draws that pixel as an intermediate shade between background and foreground. In our simple-minded example here this is shades of grey. This close up, the anti-aliasing is obvious and actually looks worse than the un-anti-aliased version, but try taking your glasses off, stepping a few yards back from the screen and screwing up your eyes a bit to emulate the effect of seeing the line on a VGA monitor covered in crud at its right size. Suddenly a nice, smooth line pops into view.
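The coverage computation described above, working out how much of each pixel the ideal line covers and shading it accordingly, can be sketched by brute force. This is a toy illustration, not how Photoshop actually does it; the band width and sample count are arbitrary assumptions:

```python
# Sketch of coverage-based anti-aliasing: shade each pixel by the fraction
# of it covered by a band around the ideal diagonal line y = x. We crudely
# approximate coverage by sampling a grid of points inside each pixel.

def pixel_shade(px, py, samples=8):
    """Fraction of pixel (px, py) covered by a band of half-width 0.5
    around the line y = x; 0.0 is background, 1.0 is full foreground."""
    hits = 0
    for i in range(samples):
        for j in range(samples):
            x = px + (i + 0.5) / samples   # sample point inside the pixel
            y = py + (j + 0.5) / samples
            if abs(y - x) <= 0.5:          # inside the line's band
                hits += 1
    return hits / (samples * samples)

print(pixel_shade(0, 0))   # pixel on the line: mostly covered (dark grey)
print(pixel_shade(0, 1))   # neighbouring pixel: partial coverage (light grey)
print(pixel_shade(0, 3))   # far from the line: 0.0 (pure background)
```

The intermediate values are exactly the ``shades of grey around the lines'' in the zoomed-in example: each is the line's coverage of that pixel, mapped to a blend of foreground and background colour.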
So how does one go about anti-aliasing an image? Just be grateful you don't have to do it by hand. Most screen-design programs, including Photoshop and Paintshop Pro, include anti-alias options for things like text and line tools. The important thing is simply to remember to do it, and to do it at the appropriate time. There are far too many graphics out on the Web that are perfectly well designed, attractive and fitted to their purpose, but end up looking amateurish because they haven't been anti-aliased. Equally, there are plenty of graphics that have turned to visual mush because they've been overworked with the anti-aliasing tools.
Generally, I guess, the rules are these:
Anti-alias text except when the text is very small. This is to taste, but I reckon on switching off anti-aliasing in Photoshop below about 12 points. If you're doing a lot with text this size, you really oughtn't be putting it in a graphic but formatting ASCII instead.
Anti-alias rasterised EPSs (see the accompanying page for details). Except when you don't want to, of course.
If attempting to anti-alias something manually, or semi-manually, such as by putting a grey halo round a block black graphic, then only apply the effect at the last possible stage. And always, always, always bear in mind the target background colour. It's a fat lot of good anti-aliasing a piece of blue text on a white background, if the target page is orange, because the anti-aliased halo is going to be shades of white, not orange. I spent two hours re-colouring a graphic after doing exactly that.
Never confuse blur and anti-aliasing. The former is a great help in making things appear nice and smooth if applied to specific parts of an image, but it'll make your image just look runny if used all over.
That's about it. Anti-aliasing is of immense importance, especially in turning EPSs into something pleasant to look at onscreen, as I explain in the next couple of pages.
Animation is achieved by adding motion to still images or objects. It may also be defined as the creation of moving pictures one frame at a time. Animation grabs attention and makes a multimedia product more attractive.
There are a few types of animation:
Transition animation: the simplest form of animation. Examples of transitions are spiral, stretch, and zoom.
Process / information transition: animation can be used to describe complex information or processes in an easier way, such as providing visual cues (e.g. how things work).
Object movement: more complex animations, such as animated GIFs or animated scenes.
How does animation work?
Animation is possible because of:
A biological phenomenon known as persistence of vision: an object seen by the human eye remains chemically mapped on the eye's retina for a brief time after viewing.
A psychological phenomenon called phi: the human mind needs to conceptually complete a perceived action.
The combination of persistence of vision and phi makes it possible for a series of images that are changed very slightly and very rapidly, one after another, to seemingly blend together into a visual illusion of movement. E.g., when a few cels or frames of a rotating logo are continuously and rapidly changed, the arrow of the compass is perceived to be spinning.
Still images are flashed in sequence to provide the illusion of animation. The speed of the image changes is called the frame rate. Film is typically delivered at 24 frames per second (fps). In reality the projector light flashes twice per frame, increasing the flicker rate to 48 times per second to remove any flickering of the image. The more interruptions per second, the more continuous the beam of light appears, and the smoother the image looks.
Cel animation: a series of progressively different graphics is used for each frame of film. Made famous by Disney.
Stop-motion animation: three-dimensional sets are used (stage, props). Objects are moved carefully between shots.
Computer Animation (Digital cel & sprite animation)
Employs the same logic and procedural concepts as cel animation. Objects are drawn using 3-D modelling software. Objects and background are drawn on different layers, which can be put on top of one another; the animation is applied to the moving object (sprite).
Computer Animation (Key Frame Animation)
Key frames are drawn to provide the pose and detailed characteristics of characters at important points in the animation. E.g., specify the start and end of a walk, or the top and bottom of a fall. 3-D modelling and animation software will do the tweening process. Tweening fills the gaps between the key frames and creates a smooth animation.
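The tweening step can be sketched as simple linear interpolation of an object's position between two key frames; real animation software also interpolates rotation, scale, and other attributes, often along curves, but the principle is the same:

```python
# Sketch of tweening: linearly interpolate an object's position between
# two key-frame values, producing the in-between frames automatically.

def tween(start, end, frames):
    """Return the positions from start to end inclusive, one per frame."""
    step = (end - start) / (frames - 1)
    return [round(start + i * step, 3) for i in range(frames)]

# Key frames: x = 0 at the first frame and x = 100 at the fifth.
print(tween(0, 100, 5))   # [0.0, 25.0, 50.0, 75.0, 100.0]
```

The animator only draws the first and last poses; the three middle positions are the ``tweens'' the software fills in.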
Hybrid techniques mix cel and 3-D computer animation. They may also include live footage.
Kinematics: the study of the motion of jointed structures (such as people). Realistic animation of such movement can be complex; the latest technology uses motion capture for complex movements.
Morphing: the process of transitioning from one image to another. When morphing, a few key elements (such as the nose in both images) are set to share the same location (in the final image).
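Once the key elements are aligned, a morph cross-dissolves the two images: each output pixel is a weighted blend of the corresponding source and target pixels. The sketch below shows only this blending step (real morphing also warps the geometry); the pixel values are illustrative:

```python
# Simplified element of morphing: cross-dissolve two aligned images by
# blending corresponding pixel values. t runs from 0 (all source) to
# 1 (all target); intermediate t values give the in-between frames.

def blend(a, b, t):
    """Blend pixel lists a and b with weight t toward b."""
    return [round((1 - t) * x + t * y) for x, y in zip(a, b)]

source = [10, 200, 40]     # illustrative source pixels
target = [90, 100, 240]    # illustrative target pixels
print(blend(source, target, 0.0))   # [10, 200, 40]  (pure source)
print(blend(source, target, 0.5))   # [50, 150, 140] (halfway frame)
print(blend(source, target, 1.0))   # [90, 100, 240] (pure target)
```

Playing frames for increasing t makes one image appear to melt smoothly into the other.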
VIDEO ON DEMAND
Video can add great impact to your multimedia presentation due to its ability to draw people in.
Video is also very hardware-intensive (it places the highest performance demand on your computer):
Storage issue: full-screen, uncompressed video uses over 20 megabytes per second (MBps) of bandwidth and storage space.
Processor capability is needed to handle very large amounts of data in real time.
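The ``over 20 megabytes per second'' figure can be checked with simple arithmetic. The parameters below are illustrative assumptions for full-screen video (640 x 480 pixels, 24-bit colour, 30 frames per second):

```python
# Raw data rate of full-screen, uncompressed video.
# Assumed illustrative parameters: 640 x 480 resolution,
# 3 bytes per pixel (24-bit colour), 30 frames per second.

width, height = 640, 480
bytes_per_pixel = 3
fps = 30

bytes_per_frame = width * height * bytes_per_pixel   # 921,600 bytes
bytes_per_second = bytes_per_frame * fps
print(bytes_per_second)               # 27648000
print(bytes_per_second / 1_000_000)   # about 27.6 MB per second
```

At roughly 27.6 MB/s, one minute of such video would need over 1.6 GB of storage, which is why compression is unavoidable.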
To get the highest video performance, we should:
Use video compression hardware to allow you to work with full-screen, full-motion video.
Use a sophisticated audio board to allow you to use CD-quality sound.
Install a superfast RAID (Redundant Array of Independent Disks) system that will speed data transfer rates.
Analog vs Digital Video
Digital video is beginning to replace analog in both professional (production house and broadcast station) and consumer video markets.
Digital video offers superior quality at a given cost.
Digital video reduces the generational losses suffered by analog video: digital mastering means that quality will never be an issue.
Obtaining Video Clips
If using analog video, we need to convert it to digital format first (in other words, we need to digitize the analog video).
Sources for analog video include:
Existing footage: beware of licensing and copyright issues.
New footage (i.e. shoot your own video): ask permission from all the persons who appear or speak, as well as permission for the audio or music used.
How video works (Video Basics)
Light passes through the camera lens and is converted to an electronic signal by a Charge-Coupled Device (CCD). Consumer-grade cameras have a single CCD. Professional-grade cameras have three CCDs, one for each of the Red, Green, and Blue color channels. The output of the CCD is processed by the camera into a signal containing three channels of color information and a synchronization pulse (sync). If each channel of color information is transmitted as a separate signal on its own conductor, the signal is called RGB, which is the preferred method for higher-quality and professional video work.
An image is an artifact, usually two-dimensional, that has a similar appearance to some subject, usually a physical object or a person. The word is also used in the broader sense of any two-dimensional figure, such as a map, a graph, or a painting. In this wider sense, images can be rendered manually, such as by drawing or painting, rendered automatically by printing or computer graphics technology, or developed by a combination of methods, especially in a pseudo-photograph.
A volatile image is one that exists only for a short period of time. This may be a reflection of an object by a mirror, a projection of a camera obscura, or a scene displayed on a cathode ray tube. A fixed image, also called a hard copy, is one that has been recorded on a material object, such as paper, by photographic or digital processes.
A mental image exists in an individual's mind: something one remembers or imagines. The subject of an image need not be real; it may be an abstract concept, such as a graph, function, or "imaginary" entity. For example, Sigmund Freud claimed to have dreamt purely in aural images of dialogues. The development of synthetic acoustic technologies and the creation of sound art have led to a consideration of the possibilities of a sound-image made up of irreducible phonic substance beyond linguistic or musicological analysis.
A still image is a single static image, as distinguished from a moving image (see below). This phrase is used in photography, visual media, and the computer industry to emphasize that one is not talking about movies, or in very precise or pedantic technical writing such as a standard. A film still is a photograph taken on the set of a movie or television program during production, used for promotional purposes.
CAPTURING AND EDITING IMAGES:
The image you see on your monitor is a bitmap stored in video memory, updated about every 1/60 of a second or faster, depending upon your monitor's scan rate. As you assemble images for your multimedia project, you may need to capture and store an image directly from your screen. The simplest way to capture what you see on your screen is to press the proper keys on your keyboard. Both the Macintosh and Windows environments have a clipboard, where text and images are temporarily stored when you cut or copy them within an application. In Windows, when you press Print Screen, a copy of your screen image goes to the clipboard. From the clipboard you can paste the captured bitmap image into an application. Screen-capture utilities for the Macintosh and Windows go a step further and are indispensable to the multimedia developer: with a keystroke, they let you select an area of the screen and save the selection in various formats.
The best tool for manipulating bitmaps is an image-editing program. There are king-of-the-mountain programs that let you not only retouch images but also do tricks like placing your face at the helm of a square-rigger. In addition to letting you enhance and make composite images, image editing will also allow you to alter and distort images. E.g., a colored image of a red rose can be changed into a blue rose or a purple rose. Morphing is another effect that can be used to manipulate still images or to create bizarre animated transformations. Morphing allows you to smoothly blend two images so that one image seems to melt into the other.
Scanning is the action or process of converting text, images, or other files to digital form. This "analog"-to-"digital" conversion process (A-to-D) is required for computer users to be able to view such files electronically.
A palette is either a given, finite set of colors for the management of digital images (that is, a color lookup table), or a small on-screen graphical element for choosing from a limited set of choices, not necessarily colors (such as a tool palette).
Depending on the context (an engineer's technical specification, an advertisement, a programmers' guide, an image file specification, a user's manual, etc.) the term palette and related terms, such as Web palette and RGB palette, can have somewhat different meanings.
The following are some of the widely used meanings for palette in computer graphics:
- The total number of colors that a given system is able to generate or manage; the term full palette is often encountered in this sense. A truecolor display, for example, is said to have a 24-bit RGB palette.
- The limited fixed color selection that a given display adapter can offer when its hardware is appropriately set. For example, the Color Graphics Adapter (CGA) can be set to show the so-called palette #1 or palette #2 in color graphics mode: two combinations of 4 fixed colors.
- The limited selection of colors that a given system is able to display simultaneously, generally picked from a wider range; the expressions color palette and logical palette are also used. In this case, the color selection is always chosen, either by the user or by a program. For example, the standard VGA display adapter is said to provide a palette of 256 simultaneous colors from a total of 262,144 different colors.
- The hardware registers of the display subsystem in which the selected colors' values are to be loaded in order to show them, also referred to as the color lookup table (CLUT). For example, the hardware color registers of some display adapters are known both as their color palette and their CLUT, depending on the source.
- A given color selection officially standardized by some body or corporation; the terms master palette and system palette are also used for this meaning. For example, the well-known web-safe colors palette for use with Internet browsers, or a platform's default system palette.
- The limited color selection inside a given indexed-color image file, such as a GIF, although the terms color map and color table are also generally used.
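The CLUT meaning above is easy to illustrate: an indexed-color image stores small palette indices per pixel, and the lookup table maps each index to an actual RGB value at display time. The four-entry table below is an illustrative stand-in for a CGA-style fixed palette:

```python
# Sketch of indexed colour: pixels store small palette indices, and the
# CLUT (colour lookup table) maps each index to an actual RGB triple.
# This illustrative 4-entry palette echoes a CGA-style fixed palette.

clut = [
    (0, 0, 0),        # index 0: black
    (0, 255, 255),    # index 1: cyan
    (255, 0, 255),    # index 2: magenta
    (255, 255, 255),  # index 3: white
]

indexed_row = [0, 1, 1, 2, 3]             # what the image file stores
rgb_row = [clut[i] for i in indexed_row]  # what the display shows
print(rgb_row[0])   # (0, 0, 0)
print(rgb_row[3])   # (255, 0, 255)
```

Storing 2-bit indices instead of 24-bit triples is why indexed-color files such as GIFs are compact: the full color values live once in the table, not in every pixel.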
A vector image is made up of a series of mathematical instructions. These instructions define the lines and shapes that make up the image. As well as shape, size, and orientation, the file stores information about the outline and fill colour of these shapes.
Features common to vector images:
Vector images display at any size without loss of detail.
File sizes are usually smaller than raster images.
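The size-independence of vector images follows directly from storing instructions rather than pixels: scaling just multiplies the stored coordinates before rendering, so no detail is lost. A minimal sketch, with an invented shape representation:

```python
# Why vector images scale without loss of detail: the file stores drawing
# instructions (here, just a list of points), so scaling multiplies the
# coordinates before rendering instead of stretching a fixed pixel grid.

def scale(shape, factor):
    """Scale a list of (x, y) points describing a vector shape."""
    return [(x * factor, y * factor) for x, y in shape]

triangle = [(0, 0), (4, 0), (2, 3)]
print(scale(triangle, 10))   # [(0, 0), (40, 0), (20, 30)]
```

The scaled shape is rasterized fresh at the new size, so edges stay sharp; enlarging a raster image by the same factor would instead interpolate existing pixels and blur.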