
UNIT I: Introduction and Web Development Strategies

History of Web

Some thirty years ago, the RAND Corporation, America's foremost Cold War think tank, faced a strange strategic problem. How could the US authorities successfully communicate after a nuclear war? Postnuclear America would need a command-and-control network, linked from city to city, state to state, base to base. But no matter how thoroughly that network was armored or protected, its switches and wiring would always be vulnerable to the impact of atomic bombs. A nuclear attack would reduce any conceivable network to tatters.

And how would the network itself be commanded and controlled? Any central authority, any network central citadel, would be an obvious and immediate target for an enemy missile. The center of the network would be the very first place to go. RAND mulled over this grim puzzle in deep military secrecy, and arrived at a daring solution. The RAND proposal (the brainchild of RAND staffer Paul Baran) was made public in 1964.

In the first place, the network would *have no central authority.* Furthermore, it would be *designed from the beginning to operate while in tatters.* The principles were simple. The network itself would be assumed to be unreliable at all times. All the nodes in the network would be equal in status to all other nodes, each node with its own authority to originate, pass, and receive messages. The messages themselves would be divided into packets, each packet separately addressed. Each packet would begin at some specified source node, and end at some other specified destination node. Each packet would wind its way through the network on an individual basis. The particular route that the packet took would be unimportant. Only final results would count. Basically, the packet would be tossed like a hot potato from node to node to node, more or less in the direction of its destination, until it ended up in the proper place. If big pieces of the network had been blown away, that simply wouldn't matter; the packets would stay airborne, lateralled wildly across the field by whatever nodes happened to survive. This rather haphazard delivery system might be "inefficient" in the usual sense (especially compared to, say, the telephone system), but it would be extremely rugged.

During the 60s, this intriguing concept of a decentralized, blastproof, packet-switching network was kicked around by RAND, MIT and UCLA. The National Physical Laboratory in Great Britain set up the first test network on these principles in 1968. Shortly afterward, the Pentagon's Advanced Research Projects Agency decided to fund a larger, more ambitious project in the USA.

The nodes of the network were to be high-speed supercomputers. These were rare and valuable machines which were in real need of good solid networking, for the sake of national research projects. In autumn 1969, the first such node was installed in UCLA. By December 1969, there were four nodes on the infant network, which was named ARPANET, after its Pentagon sponsor.

[In 1957, while responding to the threat of the Soviets in general and the success of Sputnik (the first man-made satellite, launched by the USSR and terrifying to the Americans as a symbol of technological power) in particular, President Dwight Eisenhower created both the Interstate Highway System and the Advanced Research Projects Agency, or ARPA.]

The four computers could transfer data on dedicated high-speed transmission lines. They could even be programmed remotely from the other nodes. Thanks to ARPANET, scientists and researchers could share one another's computer facilities by long distance. This was a very handy service, for computer time was precious in the early '70s.

In 1971 there were fifteen nodes in ARPANET; by 1972, thirty-seven nodes.

In 1971 the Internet (ARPANET) looked like this [map not reproduced]. And it was good. By the second year of operation, however, an odd fact became clear. ARPANET's users had warped the computer-sharing network into a dedicated, high-speed, federally subsidized electronic post office. The main traffic on ARPANET was not long-distance computing. Instead, it was news and personal messages. Researchers were using ARPANET to collaborate on projects, to trade notes on work, and eventually, to downright gossip and schmooze. People had their own personal user accounts on the ARPANET computers, and their own personal addresses for electronic mail. Not only were they using ARPANET for person-to-person communication, but they were very enthusiastic about this particular service, far more enthusiastic than they were about long-distance computation. It wasn't long before the invention of the mailing list, an ARPANET broadcasting technique in which an identical message could be sent automatically to large numbers of network subscribers.


E-mail was invented by Ray Tomlinson of BBN in 1972. He picked the @ symbol from the available symbols on his teletype to link the username and address.

The telnet protocol, enabling logging on to a remote computer, was published as a Request for Comments (RFC) in 1972. RFCs are a means of sharing developmental work throughout the community.

The FTP protocol, enabling file transfers between Internet nodes, was published as an RFC in 1973.

The Unix to Unix Copy Protocol (UUCP) was invented in 1978 at Bell Labs. Usenet was started in 1979 based on UUCP. Newsgroups, which are discussion groups focusing on a topic, followed, providing a means of exchanging information throughout the world. While Usenet is not considered as part of the Internet, since it does not share the use of TCP/IP, it linked Unix systems around the world, and many Internet sites took advantage of the availability of newsgroups. It was a significant part of the community building that took place on the networks.

Similarly, BITNET (Because It's Time Network) connected IBM mainframes around the educational community and the world to provide mail services beginning in 1981. Listserv software was developed for this network and later others. Gateways were developed to connect BITNET with the Internet and allowed exchange of e-mail, particularly for e-mail discussion lists. These listservs and other forms of e-mail discussion lists formed another major element in the community building that was taking place.


While the number of sites on the Internet was small, it was fairly easy to keep track of the resources of interest that were available. But as more and more universities and organizations connected, the Internet became harder and harder to track. There was more and more need for tools to index the resources that were available. The first effort, other than library catalogs, to index the Internet was created in 1989, as Peter Deutsch and his crew at McGill University in Montreal created an archiver for FTP sites, which they named Archie. This software would periodically reach out to all known openly available FTP sites, list their files, and build a searchable index of the software.

Throughout the '70s, ARPA's network grew. Its decentralized structure made expansion easy. Unlike standard corporate computer networks, the ARPA network could accommodate many different kinds of machine. As long as individual machines could speak the packet-switching lingua franca [common language] of the new, anarchic network, their brand and their content, and even their ownership, were irrelevant.

ARPA's original standard for communication was known as NCP, "Network Control Protocol," but as time passed and the technique advanced, NCP was superseded by a higher-level, more sophisticated standard known as TCP/IP (first proposed by Bob Kahn at BBN). TCP, or "Transmission Control Protocol," converts messages into streams of packets at the source, then reassembles them back into messages at the destination. IP, or "Internet Protocol," handles the addressing, seeing to it that packets are routed across multiple nodes and even across multiple networks with multiple standards, not only ARPA's pioneering NCP standard, but others like Ethernet.

As early as 1977, TCP/IP was being used by other networks to link to ARPANET. ARPANET itself remained fairly tightly controlled, at least until 1983, when its military segment broke off and became MILNET. But TCP/IP linked them all. And ARPANET itself, though it was growing, became a smaller and smaller neighborhood amid the vastly growing galaxy of other linked machines.

As the '70s and '80s advanced, many very different social groups found themselves in possession of powerful computers. It was fairly easy to link these computers to the growing network of networks. As the use of TCP/IP became more common, entire other networks fell into the digital embrace of the Internet, and messily adhered.

Since the software called TCP/IP was public-domain, and the basic technology was decentralized and rather anarchic by its very nature, it was difficult to stop people from barging in and linking up somewhere or other. In point of fact, nobody *wanted* to stop them from joining this branching complex of networks, which came to be known as the "Internet." Connecting to the Internet cost the taxpayer little or nothing, since each node was independent, and had to handle its own financing and its own technical requirements. The more, the merrier. Like the phone network, the computer network became steadily more valuable as it embraced larger and larger territories of people and resources.

A fax machine is only valuable if *everybody else* has a fax machine. Until they do, a fax machine is just a curiosity. ARPANET, too, was a curiosity for a while. Then computer networking became an utter necessity. In 1984 the National Science Foundation got into the act, through its Office of Advanced Scientific Computing. The new NSFNET set a blistering pace for technical advancement, linking newer, faster, shinier supercomputers, through thicker, faster links, upgraded and expanded, again and again, in 1986, 1988, 1990. And other government agencies leapt in: NASA, the National Institutes of Health, the Department of Energy, each of them maintaining a digital satrapy [the territory under the control of a subordinate ruler] in the Internet confederation.

The nodes in this growing network of networks were divvied up into basic varieties. Foreign computers, and a few American ones, chose to be denoted by their geographical locations. The others were grouped by the six basic Internet "domains": gov, mil, edu, com, org and net. Gov, Mil, and Edu denoted governmental, military and educational institutions, which were, of course, the pioneers, since ARPANET had begun as a high-tech research exercise in national security. Com, however, stood for "commercial" institutions, which were soon bursting into the network like rodeo bulls, surrounded by a dust cloud of eager nonprofit "orgs." (The "net" computers served as gateways between networks.)

ARPANET itself formally expired in 1989, a happy victim of its own overwhelming success. Its users scarcely noticed, for ARPANET's functions not only continued but steadily improved. The use of TCP/IP standards for computer networking is now global. In 1971, there were only four nodes in the ARPANET network. Today there are tens of thousands of nodes in the Internet, scattered over forty-two countries, with more coming online every day. Millions of people use this gigantic network of networks. The Internet is especially popular among scientists, and is probably the most important scientific instrument of the late twentieth century. The powerful, sophisticated access that it provides to specialized data and personal communication has sped up the pace of scientific research enormously.

Since the Internet was initially funded by the government, it was originally limited to research, education, and government uses. Commercial uses were prohibited unless they directly served the goals of research and education. This policy continued until the early 90's, when independent commercial networks began to grow. It then became possible to route traffic across the country from one commercial site to another without passing through the government-funded NSFNet Internet backbone.

Delphi was the first national commercial online service to offer Internet access to its subscribers. It opened up an email connection in July 1992 and full Internet service in November 1992. All pretenses of limitations on commercial use disappeared in May 1995 when the National Science Foundation ended its sponsorship of the Internet backbone, and all traffic relied on commercial networks. AOL, Prodigy, and CompuServe came online. Since commercial usage was so widespread by this time and educational institutions had been paying their own way for some time, the loss of NSF funding had no appreciable effect on costs.

The Internet's pace of growth in the early 1990s is spectacular, almost ferocious. It is spreading faster than cellular phones, faster than fax machines. Last year the Internet was growing at a rate of twenty percent a *month.* The number of "host" machines with direct connection to TCP/IP has been doubling every year since 1988. The Internet is moving out of its original base in military and research institutions, into elementary and high schools, as well as into public libraries and the commercial sector.

Gopher: In 1991, the first really friendly interface to the Internet was developed at the University of Minnesota. The University wanted to develop a simple menu system to access files and information on campus through their local network. The demonstration system was called a gopher after the U of Minnesota mascot, the golden gopher. The gopher proved to be very prolific, and within a few years there were over 10,000 gophers around the world. It takes no knowledge of Unix or computer architecture to use. In a gopher system, you type or click on a number to select the menu selection you want.

Why do people want to be "on the Internet?" One of the main reasons is simple freedom. The Internet is a rare example of a true, modern, functional anarchy. There is no "Internet Inc." There are no official censors, no bosses, no board of directors, no stockholders. In principle, any node can speak as a peer to any other node, as long as it obeys the rules of the TCP/IP protocols, which are strictly technical, not social or political.

The headless, anarchic, million-limbed Internet is spreading like bread mold. Any computer of sufficient power is a potential spore for the Internet, and today such computers sell for less than $2,000 and are in the hands of people all over the world. ARPA's network, designed to assure control of a ravaged society after a nuclear holocaust, has been superseded by its mutant child the Internet, which is thoroughly out of control, and spreading exponentially through the post-Cold War electronic global village. The spread of the Internet in the 90s resembles the spread of personal computing in the 1970s, though it is even faster and perhaps more important. More important, perhaps, because it may give those personal computers a means of cheap, easy storage and access that is truly planetary in scale.

And this was written before the WWW was properly developed! WWW: In 1989 another significant event took place in making the nets easier to use. Tim Berners-Lee and others at the European Laboratory for Particle Physics (CERN) proposed a new protocol for information distribution. This protocol, which became the World Wide Web in 1991, was based on hypertext, a system of embedding links in text to link to other text, which you have been using every time you selected a text link while reading these pages. Although started before gopher, it was slower to develop.

The development in 1993 of the graphical browser Mosaic by Marc Andreessen and his team at the National Center for Supercomputing Applications (NCSA) gave the protocol its big boost. Later, Andreessen moved to become the brains behind Netscape Corp., which produced the most successful graphical browser and server until Microsoft declared war and developed its Microsoft Internet Explorer.

Michael Dertouzos of MIT's Laboratory for Computer Science persuaded Tim Berners-Lee and others to form the World Wide Web Consortium (W3C) in 1994 to promote and develop standards for the Web. Proprietary plug-ins still abound for the web, but the Consortium has ensured that there are common standards present in every browser.

The World Wide Web was named by Tim Berners-Lee, who created the first web browser and invented HTTP. As he was thinking of a name, he says: "Alternatives I considered were 'Mine of information' ('Moi', c'est un peu egoiste [that's a bit egotistical]) and 'The Information Mine' ('Tim', even more egocentric!), and 'Information Mesh'."

For many people today, the World Wide Web is the Internet.

Microsoft's full-scale entry into the browser, server, and Internet Service Provider market completed the major shift over to a commercially based Internet. The release of Windows 98 in June 1998 with the Microsoft browser well integrated into the desktop shows Bill Gates' determination to capitalize on the enormous growth of the Internet. Microsoft's success over the past few years has brought court challenges to their dominance. We'll leave it up to you whether you think these battles should be played out in the courts or the marketplace.

A current trend with major implications for the future is the growth of high-speed connections. 56K modems and the providers who support them are spreading widely, but this is just a small step compared to what will follow. 56K is not fast enough to carry multimedia, such as sound and video, except in low quality. But new technologies many times faster, such as cable modems, digital subscriber lines (DSL), and satellite broadcast are available in limited locations now, and will become widely available in the next few years. These technologies present problems, not just in the user's connection, but in maintaining high-speed data flow reliably from source to the user. Those problems are being worked on, too.

During this period of enormous growth, businesses entering the Internet arena scrambled to find economic models that work. Free services supported by advertising shifted some of the direct costs away from the consumer, at least temporarily. Services such as Delphi offered free web pages, chat rooms, and message boards for community building. Online sales have grown rapidly for such products as books and music CDs and computers, but the profit margins are slim when price comparisons are so easy, and public trust in online security is still shaky. Business models that have worked well are portal sites, that try to provide everything for everybody, and live auctions. AOL's acquisition of Time Warner was the largest merger in history when it took place and shows the enormous growth of Internet business! The stock market has had a rocky ride, swooping up and down as the new technology companies, the dot-coms, encountered good news and bad. The decline in advertising income spelled doom for many dot-coms, and a major shakeout and search for better business models is underway by the survivors.

It is becoming more and more clear that many free services will not survive. While many users still expect a free ride, there are fewer and fewer providers who can find a way to provide it. The value of the Internet and the Web is undeniable, but there is a lot of shaking out to do and management of costs and expectations before it can regain its rapid growth. May you live in interesting times! (ostensibly an ancient Chinese curse)

The Domain Name System (DNS) makes the Web easy to navigate by translating long Internet protocol (IP) numbers into memorable Web and e-mail addresses. It relies on a hierarchy of physical root servers to inform computers connected to the Internet where they need to look to find specific locations online.
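The name-to-address translation that DNS performs is visible from any programming language's resolver interface. A minimal sketch in Python using the standard socket module (the hostname in the demo is just an example; "localhost" is chosen because it resolves without touching the network):

```python
import socket

def resolve(hostname):
    """Ask the system's DNS resolver to translate a human-readable
    name into the IP addresses behind it."""
    infos = socket.getaddrinfo(hostname, None)
    # getaddrinfo yields (family, type, proto, canonname, sockaddr);
    # the IP address is the first element of sockaddr.
    return sorted({info[4][0] for info in infos})

if __name__ == "__main__":
    # "localhost" is answered locally; a public name such as
    # "example.com" would be looked up through the DNS hierarchy.
    print(resolve("localhost"))
```

The same call is what a browser makes, indirectly, every time you type a URL.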

History of World Wide Web

In March 1989, Berners-Lee wrote a proposal that referenced ENQUIRE, a database and software project he had built in 1980, and described a more elaborate information management system. With help from Robert Cailliau, he published a more formal proposal (on November 12, 1990) to build a "hypertext project" called "WorldWideWeb" (one word, also "W3") as a "web of nodes" with "hypertext documents" to store data. That data would be viewed in "hypertext pages" (webpages) by various "browsers" (line-mode or full-screen) on the computer network, using an "access protocol" connecting the "Internet and DECnet protocol worlds".

The proposal had been modeled after EBT's (Electronic Book Technology, a spin-off from the Institute for Research in Information and Scholarship at Brown University) Dynatext SGML reader that CERN (the European Organization for Nuclear Research) had licensed. The Dynatext system, although technically advanced (a key player in the extension of SGML ISO 8879:1986 to Hypermedia within HyTime), was considered too expensive and with an inappropriate licensing policy for general HEP (High Energy Physics) community use: a fee for each document and each time a document was changed.

A NeXT Computer was used by Berners-Lee as the world's first Web server and also to write the first Web browser, WorldWideWeb, in 1990. By Christmas 1990, Berners-Lee had built all the tools necessary for a working Web: the first Web browser (which was a Web editor as well), the first Web server, and the first Web pages, which described the project itself.

On August 6, 1991, he posted a short summary of the World Wide Web project on the alt.hypertext newsgroup. This date also marked the debut of the Web as a publicly available service on the Internet.

The first server outside Europe was set up at SLAC (the Stanford Linear Accelerator Center) in December 1991.

The crucial underlying concept of hypertext originated with older projects from the 1960s, such as the Hypertext Editing System (HES) at Brown University (built by, among others, Ted Nelson and Andries van Dam), Ted Nelson's Project Xanadu, and Douglas Engelbart's oN-Line System (NLS). Both Nelson and Engelbart were in turn inspired by Vannevar Bush's microfilm-based "memex," which was described in the 1945 essay "As We May Think."

Berners-Lee's breakthrough was to marry hypertext to the Internet. In his book Weaving The Web, he explains that he had repeatedly suggested that a marriage between the two technologies was possible to members of both technical communities, but when no one took up his invitation, he finally tackled the project himself. In the process, he developed a system of globally unique identifiers for resources on the Web and elsewhere: the Uniform Resource Identifier (URI).

The World Wide Web had a number of differences from other hypertext systems that were then available. The Web required only unidirectional links rather than bidirectional ones. This made it possible for someone to link to another resource without action by the owner of that resource. It also reduced the difficulty of implementing Web servers and browsers (in comparison to earlier systems), but in turn presented the chronic problem of link rot. Unlike predecessors such as HyperCard, the World Wide Web was non-proprietary, making it possible to develop servers and clients independently and to add extensions without licensing restrictions.

On April 30, 1993, CERN announced that the World Wide Web would be free to anyone, with no fees due. Coming two months after the announcement that the Gopher protocol was no longer free to use, this produced a rapid shift away from Gopher and towards the Web. An early popular Web browser was ViolaWWW, which was based upon HyperCard.

Scholars generally agree that a turning point for the World Wide Web began with the introduction of the Mosaic Web browser in 1993, a graphical browser developed by a team at the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign (NCSA-UIUC), led by Marc Andreessen. Funding for Mosaic came from the U.S. High-Performance Computing and Communications Initiative, a funding program initiated by the High Performance Computing and Communication Act of 1991, one of several computing developments initiated by U.S. Senator Al Gore.

Prior to the release of Mosaic, graphics were not commonly mixed with text in Web pages, and the Web's popularity was less than that of older protocols in use over the Internet, such as Gopher and Wide Area Information Servers (WAIS). Mosaic's graphical user interface allowed the Web to become, by far, the most popular Internet protocol.

The World Wide Web Consortium (W3C) was founded by Tim Berners-Lee after he left the European Organization for Nuclear Research (CERN) in October 1994. It was founded at the Massachusetts Institute of Technology Laboratory for Computer Science (MIT/LCS) with support from the Defense Advanced Research Projects Agency (DARPA), which had pioneered the Internet, and the European Commission.

By the end of 1994, while the total number of websites was still minute compared to present standards, quite a number of notable websites were already active, many of whom are the precursors or inspiration for today's most popular services.

Protocols governing Web

The Dynamic Host Configuration Protocol (DHCP) provides Internet hosts with configuration parameters. DHCP is an extension of BOOTP. DHCP consists of two components: a protocol for delivering host-specific configuration parameters from a DHCP server to a host, and a mechanism for allocation of network addresses to hosts.

The Internet Control Message Protocol (ICMP) was revised during the definition of IPv6. In addition, the multicast control functions of the IPv4 Internet Group Management Protocol (IGMP) are now incorporated with ICMPv6.

The structure of the ICMPv6 header is shown in the following illustration. [Illustration not reproduced here.]

The Internet Protocol (IP), defined by IETF RFC 791, is the routing-layer datagram service of the TCP/IP suite. All other protocols within the TCP/IP suite, except ARP and RARP, use IP to route frames from host to host. The IP frame header contains routing information and control information associated with datagram delivery.

Transport Layer

Transmission Control Protocol

The Defense Advanced Research Projects Agency (DARPA) originally developed Transmission Control Protocol/Internet Protocol (TCP/IP) to interconnect various defense department computer networks. The Internet, an international Wide Area Network, uses TCP/IP to connect government and educational institutions across the world. TCP/IP is also in widespread use on commercial and private networks. The TCP/IP suite includes the following:

Data Link Layer

Address Resolution Protocol/Reverse Address Resolution Protocol: TCP/IP uses the Address Resolution Protocol (ARP) and the Reverse Address Resolution Protocol (RARP) to initialize the use of Internet addressing on an Ethernet or other network that uses its own media access control (MAC). ARP allows a host to communicate with other hosts when only the Internet address of its neighbors is known. Before using IP, the host sends a broadcast ARP request containing the Internet address of the desired destination.

Network Layer


Dynamic Host Configuration Protocol

Internet Control Message Protocol


Internet Group Management Protocol

The Internet Group Management Protocol (IGMP) is used by IP hosts to report their host group memberships to any immediately neighboring multicast routers. IGMP is an integral part of IP. It must be implemented by all hosts conforming to level 2 of the IP multicasting specification. IGMP messages are encapsulated in IP datagrams, with an IP protocol number of 2. Version 3 of IGMP adds support for source filtering. This indicates the ability for a system to report interest in receiving packets only from specific source addresses, or from all but specific source addresses, sent to a particular multicast address.
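A host does not normally build IGMP packets itself; joining a multicast group through the sockets API makes the kernel emit the membership report. A sketch in Python (the group address 224.0.0.251 is just an example):

```python
import socket

def membership_request(group, ifaddr="0.0.0.0"):
    """Pack an ip_mreq structure: the multicast group to join plus the
    local interface address (4 bytes each, network byte order)."""
    return socket.inet_aton(group) + socket.inet_aton(ifaddr)

if __name__ == "__main__":
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # This setsockopt call is what causes the kernel to send an IGMP
    # membership report to neighboring multicast routers.
    s.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP,
                 membership_request("224.0.0.251"))
    s.close()
```

Leaving the group (or closing the socket) similarly triggers the leave side of the protocol.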

Internet Protocol version 4



Internet Protocol version 6

IETF RFC 793 defines the Transmission Control Protocol (TCP). TCP provides a reliable stream delivery and virtual connection service to applications through the use of sequenced acknowledgment, with retransmission of packets when needed.
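The "virtual connection" TCP offers is exactly what the sockets API exposes: connect, send a byte stream, receive it reliably and in order. A self-contained loopback sketch (server and client in one process, purely for illustration):

```python
import socket
import threading

def tcp_echo_once(host="127.0.0.1"):
    """Run a one-shot TCP echo server on loopback and exercise the
    reliable, connection-oriented byte stream that TCP provides."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, 0))        # port 0: let the OS pick a free port
    srv.listen(1)
    port = srv.getsockname()[1]

    def serve():
        conn, _ = srv.accept()
        with conn:
            conn.sendall(conn.recv(1024))   # echo the bytes back

    t = threading.Thread(target=serve)
    t.start()

    # The client side: TCP's handshake, sequencing, and
    # acknowledgments all happen transparently underneath.
    with socket.create_connection((host, port)) as cli:
        cli.sendall(b"hello, arpanet")
        data = cli.recv(1024)
    t.join()
    srv.close()
    return data
```

The application never sees packets, sequence numbers, or retransmissions; it sees only the stream.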

The User Datagram Protocol (UDP), defined by IETF RFC 768, provides a simple but unreliable message service for transaction-oriented services. Each UDP header carries both a source port identifier and destination port identifier, allowing higher-level protocols to target specific applications and services among hosts.
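The contrast with TCP shows in code: a UDP exchange is a single sendto/recvfrom pair with no connection setup and no delivery guarantee (loopback, used here for illustration, happens to be reliable in practice):

```python
import socket

def udp_round_trip(payload=b"ping"):
    """Send one datagram over loopback. UDP adds only source and
    destination ports plus a checksum: no handshake, no
    acknowledgment, no retransmission."""
    recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    recv.bind(("127.0.0.1", 0))            # OS assigns a free port
    port = recv.getsockname()[1]

    send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    send.sendto(payload, ("127.0.0.1", port))
    data, _addr = recv.recvfrom(1024)
    send.close()
    recv.close()
    return data
```

Applications that need reliability over UDP (DNS, for example) implement their own timeouts and retries.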

The Domain Name Service (DNS) protocol searches for resources using a database distributed among different name servers.

The Lightweight Directory Access Protocol (LDAP) provides access to directory services. Its main features are:

∙Protocol elements are carried directly over TCP or any other transport layer protocol.

∙Protocol data elements are encoded in ordinary strings.

∙Lightweight BER encoding is used to encode all protocol elements.

LDAP works by a client transmitting a request to a server. In the request the client specifies the operation to be performed. The server must then perform the required operation on the directory. After this, the server returns a response containing the results, or any errors.

Application Layer

The File Transfer Protocol (FTP) provides the basic elements of file sharing between hosts. FTP uses TCP to create a virtual connection for control information and then creates a separate TCP connection for data transfers. The control connection uses an image of the TELNET protocol to exchange commands and messages between hosts.


User Datagram Protocol

Session Layer


Border Gateway Multicast Protocol

The Border Gateway Multicast Protocol (BGMP) maintains a group-prefix state in response to messages from BGMP peers and notifications from M-IGP components. Group-shared trees are rooted at the domain advertising the group prefixes covering those groups. When a receiver joins a specific group address, the border router towards the root domain generates a group-specific Join message, which is then forwarded border router by border router towards the root domain. BGMP Join and Prune messages are sent over TCP connections between BGMP peers, and the BGMP protocol state is refreshed by KEEPALIVE messages periodically sent over TCP.

BGMP routers build group-specific bidirectional forwarding state as they process the BGMP Join messages. Bidirectional forwarding state means that packets received from any target are forwarded to all other targets in the target list without any RPF checks. No group-specific state or traffic exists in parts of the network where there are no members of that group.


Domain Name Service


Lightweight Directory Access Protocol

Main features:


File Transfer Protocol


FTP control frames are TELNET exchanges and can contain TELNET commands and option negotiation. However, most FTP control frames are simple ASCII text and can be classified as FTP commands or FTP messages. The standard FTP commands are as follows:

The Internet Message Access Protocol, Version 4rev1 (IMAP4) allows a client to access and manipulate electronic mail messages on a server. IMAP4 permits manipulation of remote message folders, called mailboxes, in a way that is functionally equivalent to local mailboxes. IMAP4 also provides the capability for an offline client to resynchronize with the server.

IMAP4 includes operations for creating, deleting, and renaming mailboxes; checking for new messages; permanently removing messages; setting and clearing flags; parsing; searching; and selective fetching of message attributes, texts, and portions thereof. Messages in IMAP4 are accessed by the use of numbers. These numbers are either message sequence numbers or unique identifiers.
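The sequence-number versus unique-identifier distinction appears directly in the wire protocol: the same FETCH operation is addressed either way. A sketch that builds the command line a client would send (the tag and message sets are illustrative):

```python
def imap_fetch(tag, ids, items="(FLAGS BODY[HEADER])", uid=False):
    """Build one IMAP4 FETCH command line. `ids` is a message set
    such as "1:3"; with uid=True the same set is interpreted as
    unique identifiers rather than sequence numbers."""
    verb = "UID FETCH" if uid else "FETCH"
    return f"{tag} {verb} {ids} {items}\r\n"
```

Sequence numbers shift as messages are expunged; unique identifiers are stable, which is what lets an offline client resynchronize.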




ABOR              Abort data connection process.

ACCT <account>    Account for system privileges.

ALLO <bytes>      Allocate bytes for file storage on server.

APPE <filename>   Append file to file of same name on server.

CDUP <dir path>   Change to parent directory on server.

CWD <dir path>    Change working directory on server.
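Because FTP control frames are plain ASCII, a command from the table above can be built with a one-line formatter; this sketch only formats the line, it does not talk to any server:

```python
def ftp_command(verb, *args):
    """Format one FTP control-connection command: an uppercase ASCII
    verb, space-separated arguments, and a CRLF terminator."""
    return (" ".join([verb.upper(), *args]) + "\r\n").encode("ascii")
```

Sent over the control connection, each such line draws back a numeric reply message (e.g. 250 for success), while file contents travel on the separate data connection.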


Hypertext Transfer Protocol

The Hypertext Transfer Protocol (HTTP) is an application-level protocol with the lightness and speed
necessary for distributed, collaborative, hypermedia information systems. Messages are passed in a
format similar to that used by Internet Mail and the Multipurpose Internet Mail Extensions (MIME).
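The mail-like message format is easy to see in a raw request. This sketch assembles an HTTP/1.1 GET by hand (the host name is a placeholder); a real client would send these bytes over a TCP socket:

```python
def http_request(method, path, host, headers=None):
    """Build a raw HTTP/1.1 request: a request line, MIME-style
    "Name: value" header lines, then a blank line."""
    lines = [f"{method} {path} HTTP/1.1", f"Host: {host}"]
    for name, value in (headers or {}).items():
        lines.append(f"{name}: {value}")
    return ("\r\n".join(lines) + "\r\n\r\n").encode("ascii")

print(http_request("GET", "/", "www.example.com", {"Accept": "text/html"}).decode())
```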


Post Office Protocol version 3

The Post Office Protocol version 3 (POP3) is intended to permit a workstation to dynamically access a
maildrop on a server host. It is usually used to allow a workstation to retrieve mail that the server is
holding for it. POP3 transmissions appear as data messages between stations. The messages are either
command or reply messages.
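POP3 replies are single status lines beginning with +OK or -ERR. A small, hypothetical parser makes the command/reply structure explicit:

```python
def parse_pop3_reply(line: str):
    """Split a POP3 status line into (success, text).
    Every POP3 reply starts with +OK or -ERR."""
    if line.startswith("+OK"):
        return True, line[3:].strip()
    if line.startswith("-ERR"):
        return False, line[4:].strip()
    raise ValueError("not a POP3 status line: " + line)

print(parse_pop3_reply("+OK 2 messages (320 octets)"))
print(parse_pop3_reply("-ERR no such message"))
```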

IETF RFC821 defines the Simple Mail Transfer Protocol (SMTP), which is a mail service modeled on the
FTP file transfer service. SMTP transfers mail messages between systems and provides notification
regarding incoming mail.
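The FTP-like command flavour of SMTP shows up in its envelope exchange. This sketch (a hypothetical helper, with illustrative addresses) lists the commands a client issues to hand one message to a server:

```python
def smtp_envelope(sender, recipients):
    """Return the SMTP envelope commands for one message:
    MAIL FROM, one RCPT TO per recipient, then DATA."""
    commands = [f"MAIL FROM:<{sender}>"]
    commands += [f"RCPT TO:<{rcpt}>" for rcpt in recipients]
    commands.append("DATA")
    return commands

for cmd in smtp_envelope("alice@example.org", ["bob@example.net"]):
    print(cmd)
```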


IETF RFCs 1155, 1156, and 1157 define the Simple Network Management Protocol (SNMP). The
Internet community developed SNMP to allow diverse network objects to participate in a global network
management architecture. Network managing systems can poll network entities implementing SNMP for
information relevant to a particular network management implementation. Network management systems
learn of problems by receiving traps or change notices from network devices implementing SNMP.

TELNET is the terminal emulation protocol of TCP/IP. Modern TELNET is a versatile terminal
emulation due to the many options that have evolved over the past twenty years. Options give TELNET
the ability to transfer binary data, support byte macros, emulate graphics terminals, and convey
information to support centralized terminal management.


Internet Relay Chat Protocol

The IRC (Internet Relay Chat) protocol supports a worldwide network of servers and clients, and is
striving to cope with growth. It is a text-based protocol, with the simplest client being any socket
program capable of connecting to the server.

The IRC protocol was developed on systems using the TCP/IP network protocol, although there is no
requirement that this remain the only sphere in which it operates. It is a teleconferencing system, which
(through the use of the client-server model) is well suited to running on many machines in a distributed
fashion. A typical setup involves a single process (the server) forming a central point for clients (or other
servers) to connect to, performing the required message delivery/multiplexing and other functions.

Servers and clients send each other messages which may or may not generate a reply. If the message
contains a valid command, the client should expect a reply as specified, but it is not advised to wait
forever for the reply; client-to-server and server-to-server communication is essentially asynchronous in
nature.
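Because IRC is text based, a complete message parser fits in a few lines. The sketch below splits a raw line into its optional prefix, command, and parameters (the trailing parameter follows a " :" separator):

```python
def parse_irc(line):
    """Parse one raw IRC line into (prefix, command, params)."""
    prefix = None
    if line.startswith(":"):            # optional server/user prefix
        prefix, line = line[1:].split(" ", 1)
    if " :" in line:                    # trailing parameter may contain spaces
        line, trailing = line.split(" :", 1)
        params = line.split() + [trailing]
    else:
        params = line.split()
    return prefix, params[0], params[1:]

print(parse_irc(":nick!user@host PRIVMSG #channel :hello there"))
```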




Simple Mail Transfer Protocol


Simple Network Management Protocol


TCP/IP Terminal Emulation Protocol

TELNET uses the TCP transport protocol to achieve a virtual connection between server and client. After
connecting, TELNET server and client enter a phase of option negotiation that determines the options that
each side can support for the connection. Each connected system can negotiate new options or renegotiate
old options at any time. In general, each end of the TELNET connection attempts to implement all
options that maximize performance for the systems involved.

In a typical implementation, the TELNET client sends single keystrokes, while the TELNET server can
send one or more lines of characters in response. Where the Echo option is in use, the TELNET server
echoes all keystrokes back to the TELNET client.
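Option negotiation is carried in three-byte IAC sequences. The constant values below come from the TELNET specification; the `negotiate` helper itself is a hypothetical convenience:

```python
# TELNET negotiation verbs and one well-known option (RFC 854/857 values)
IAC, WILL, WONT, DO, DONT = 255, 251, 252, 253, 254
ECHO = 1  # the Echo option discussed above

def negotiate(verb, option):
    """Build a three-byte TELNET negotiation sequence: IAC <verb> <option>."""
    return bytes([IAC, verb, option])

# Client asks the server to echo keystrokes back:
print(negotiate(DO, ECHO))  # b'\xff\xfd\x01'
```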

Trivial File Transfer Protocol

The Trivial File Transfer Protocol (TFTP) uses UDP. TFTP supports file writing and reading; it does not
support directory service or user authorization.

The following are TFTP commands:



Read Request

Request to read a file.

Write Request

Request to write to a file.

File Data

Transfer of file data.

Data Acknowledge

Acknowledgement of file data.


Error

Error indication.
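TFTP packets are small UDP payloads with a 2-byte opcode. This sketch builds a Read Request (opcode 1) as two NUL-terminated strings after the opcode; the filename is a placeholder:

```python
import struct

RRQ_OPCODE = 1  # Read Request, per the command list above

def tftp_rrq(filename, mode="octet"):
    """Build a TFTP Read Request packet:
    2-byte opcode, filename, NUL, transfer mode, NUL."""
    return (struct.pack("!H", RRQ_OPCODE)
            + filename.encode("ascii") + b"\x00"
            + mode.encode("ascii") + b"\x00")

print(tftp_rrq("notes.txt"))
```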

The TCP/IP suite is often illustrated in relation to the OSI model.

Creating Websites for Individual and Corporate World

A blog (a contraction of the term "weblog")[1] is a type of website, usually maintained by an individual
with regular entries of commentary, descriptions of events, or other material such as graphics or video.
Entries are commonly displayed in reverse-chronological order. "Blog" can also be used as a verb,
meaning to maintain or add content to a blog.

Many blogs provide commentary or news on a particular subject; others function as more personal online
diaries. A typical blog combines text, images, and links to other blogs, Web pages, and other media
related to its topic. The ability for readers to leave comments in an interactive format is an important part
of many blogs. Most blogs are primarily textual, although some focus on art (artlog), photographs
(photoblog), sketches (sketchblog), videos (vlog), music (MP3 blog), and audio (podcasting).
Microblogging is another type of blogging, featuring very short posts.

Types

There are many different types of blogs, differing not only in the type of content, but also in the
way that content is delivered or written.

Personal blogs

The personal blog, an ongoing diary or commentary by an individual, is the traditional, most common
blog. Personal bloggers usually take pride in their blog posts, even if their blog is never read by anyone
but them. Blogs often become more than a way to just communicate; they become a way to reflect on life
or works of art. Blogging can have a sentimental quality. Few personal blogs rise to fame and the
mainstream, but some personal blogs quickly garner an extensive following. A type of personal blog is
referred to as "microblogging," which is extremely detailed blogging as it seeks to capture a moment in
time. Sites such as Twitter allow bloggers to share thoughts and feelings instantaneously with friends
and family and are much faster than e-mailing or writing. This form of social media lends itself to an online
generation already too busy to keep in touch.

Corporate blogs

A blog can be private, as in most cases, or it can be for business purposes. Blogs, either used internally to
enhance the communication and culture in a corporation or externally for marketing, branding or public
relations purposes, are called corporate blogs.

By genre

Some blogs focus on a particular subject, such as political blogs, travel blogs, house blogs,[4][5] fashion
blogs, project blogs, education blogs, niche blogs, classical music blogs, quizzing blogs and legal blogs
(often referred to as blawgs) or dreamlogs. Two common types of genre blogs are art blogs and music
blogs. A blog featuring discussions especially about home and family is not uncommonly called a mom
blog.[6][7][8][9][10] While not a legitimate type of blog, one used for the sole purpose of spamming is
known as a splog.

By media type

A blog comprising videos is called a vlog, one comprising links is called a linklog, a site containing a
portfolio of sketches is called a sketchblog, and one comprising photos is called a photoblog.[11] Blogs
with shorter posts and mixed media types are called tumblelogs. Blogs that are written on typewriters and
then scanned are called typecast or typecast blogs; see typecasting (blogging).

A rare type of blog hosted on the Gopher Protocol is known as a phlog.

By device

Blogs can also be defined by which type of device is used to compose them. A blog written by a mobile
device like a mobile phone or PDA could be called a moblog.[12] One early blog was Wearable Wireless
Webcam, an online shared diary of a person's personal life combining text, video, and pictures transmitted
live from a wearable computer and EyeTap device to a web site. This practice of semi-automated
blogging with live video together with text was referred to as sousveillance. Such journals have been used
as evidence in legal matters.



Electronic Commerce
Taxation
Domain Names Disputes
Internet Policies
Cyber Torts
Encryption / Cryptography
Online Banking
Cyberlaw Precautions
Privacy
Privilege & Confidentiality
Community Standards
Certification
E-mail Policies
Cyberliberties
Online Education
Cyber Fraud
Cyberstalking
Cyberterrorism
Cyberwar
Cyberhate & Cyberthreat
Consumer Fraud & Consumer Protection Legislation
Legal Audit of Websites
Electronic Contracts
Freedom of Expression
Cybercrimes
Economic Espionage
Searches & Seizures
Authentication
Digital Signatures
Web Linking & Framing
Meta Tags
Web Site Development
Informational Privacy
Database Protection
Notaries
ISP Liability
Internet Regulation
Online Media Regulation
Net Censorship
Parental Empowerment
Advertisement / Publicity on Internet
Software Licensing
Computer Network Service
Information Network Policy
Internet Gambling
Cyberlaundering
Cyberpublishing & Cybercasting
Cyber Rights
Information Technology Law
Cash & Electronic Payment Systems
Regulating Online Advertising, Sales, Licenses & Auction
Securities Trading & Insurance Brokers
Hacking
Telemedicine
Intellectual Property Rights
Copyright, Trademarks, Patents, Cybermarks, Trade Secrets

Web Application

A web application is an application that is accessed over a network such as the Internet or an intranet. The term
may also mean a computer software application that is hosted in a browser-controlled environment (e.g. a Java
applet) or coded in a browser-supported language (such as JavaScript, combined with a browser-rendered
markup language like HTML) and reliant on a common web browser to render the application.

Web applications are popular due to the ubiquity of web browsers, and the convenience of using a web browser
as a client, sometimes called a thin client. The ability to update and maintain web applications without distributing and
installing software on potentially thousands of client computers is a key reason for their popularity, as is the inherent
support for cross-platform compatibility. Common web applications include webmail, online retail sales, online
auctions and many other functions.


In earlier types of client-server computing, each application had its own client program which served as its
user interface and had to be separately installed on each user's personal computer. An upgrade to the server part of the
application would typically require an upgrade to the clients installed on each user workstation, adding to the
support cost and decreasing productivity.

In contrast, web applications use web documents written in a standard format such as HTML (and more recently
XHTML), which are supported by a variety of web browsers.

Generally, each individual web page is delivered to the client as a static document, but the sequence of pages can
provide an interactive experience, as user input is returned through web form elements embedded in the page
markup. During the session, the web browser interprets and displays the pages, and acts as the universal client for
any web application.

In 1995, Netscape introduced a client-side scripting language called JavaScript, which allowed programmers to add
some dynamic elements to the user interface that ran on the client side. Until then, all the data had to be sent to the
server for processing, and the results were delivered through static HTML pages sent back to the client.

In 1996, Macromedia introduced Flash, a vector animation player that could be added to browsers as a plug-in to
embed animations on the web pages. It allowed the use of a scripting language to program interactions on the client
side with no need to communicate with the server.

In 1999, the "web application" concept was introduced in the Java language in the Servlet Specification version 2.2.
At that time both JavaScript and XML had already been developed, but the term "Ajax" had still not yet been coined,
and the XMLHttpRequest object had only been recently introduced on Internet Explorer 5 as an ActiveX object.

In 2005, the term Ajax was coined, and applications like Gmail started to make their client sides more and more
interactive.



An operating system provides an interface for web applications.

The web interface places very few limits on client functionality. Through Java, JavaScript, DHTML, Flash
and other technologies, application-specific methods such as drawing on the screen, playing audio, and
access to the keyboard and mouse are all possible. Many services have worked to combine all of these into a more
familiar interface that adopts the appearance of an operating system. General purpose techniques such as
drag-and-drop are also supported by these technologies. Web developers often use client-side scripting to add functionality,
especially to create an interactive experience that does not require page reloading. Recently, technologies have been
developed to coordinate client-side scripting with server-side technologies such as PHP. Ajax, a web development
technique using a combination of various technologies, is an example of technology which creates a more interactive
experience.

Applications are usually broken into logical chunks called "tiers", where every tier is assigned a role. Traditional
applications consist only of 1 tier, which resides on the client machine, but web applications lend themselves to an
n-tiered approach by nature.

Though many variations are possible, the most common structure is the three-tiered application.

In its most common form, the three tiers are called presentation, application and storage, in this order.
A web browser is the first tier (presentation), an engine using some dynamic Web content technology (such as CGI,
PHP, Java servlets or Ruby on Rails) is the middle tier (application logic), and a database is the third tier (storage).
The web browser sends requests to the middle tier, which services them by making queries and updates against the
database and generates a user interface.

For more complex applications, a 3-tier solution may fall short, and you may need an n-tiered approach, where the
greatest benefit is breaking the business logic, which resides on the application tier, into a more fine-grained model.
Or adding an integration tier that separates the data tier from the rest of the tiers by providing an easy-to-use
interface to access the data. For example, you would access the client data by calling a "list_clients()" function
instead of making a SQL query directly against the client table on the database. That allows you to replace the
underlying database without changing the other tiers.
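The "list_clients()" idea above can be sketched with Python's built-in sqlite3 module standing in for the storage tier; the table and function here are illustrative only. Callers never see SQL, so swapping the underlying database touches only this one accessor:

```python
import sqlite3

def list_clients(conn):
    """Integration-tier accessor: the only place that knows the
    clients table's SQL. Other tiers call this function instead
    of querying the database directly."""
    rows = conn.execute("SELECT name FROM clients ORDER BY name")
    return [name for (name,) in rows]

# Illustrative in-memory storage tier:
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE clients (name TEXT)")
conn.executemany("INSERT INTO clients VALUES (?)", [("Zenith",), ("Acme",)])
print(list_clients(conn))  # ['Acme', 'Zenith']
```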

There are some who view a web application as a two-tier architecture. This can be a "smart" client that performs all
the work and queries a "dumb" server, or a "dumb" client that relies on a "smart" server. The client would handle
the presentation tier, the server would have the database (storage tier), and the business logic (application tier) would
be on one of them or on both of them. While this increases the scalability of the applications and separates the display and
the database, it still doesn't allow for true specialization of layers, so most applications will outgrow this model.

Business use

An emerging strategy for application software companies is to provide web access to software previously distributed
as local applications. Depending on the type of application, it may require the development of an entirely different
browser-based interface, or merely adapting an existing application to use different presentation technology. These
programs allow the user to pay a monthly or yearly fee for use of a software application without having to install it
on a local hard drive. A company which follows this strategy is known as an application service provider (ASP), and
ASPs are currently receiving much attention in the software industry.

Writing web applications

There are many web application frameworks which facilitate rapid application development by allowing the
programmer to define a high-level description of the program. In addition, there is potential for the development of
applications on Internet operating systems, although currently there are not many viable platforms that fit this model.

The use of web application frameworks can often reduce the number of errors in a program, both by making the
code simpler, and by allowing one team to concentrate just on the framework. In applications which are exposed to
constant hacking attempts on the Internet, security-related problems can be caused by errors in the program.
Frameworks can also promote the use of best practices such as GET after POST.


Browser applications typically include simple office software (word processors, online spreadsheets, and
presentation tools), with Google Docs being the most notable example, and can also include more advanced
applications such as project management, computer-aided design, video editing and point-of-sale.

Browser applications typically require little or no disk space on the client, upgrade automatically with new features,
and integrate easily into other server-side web procedures, such as email and searching. They also provide
cross-platform compatibility in most cases (i.e., Windows, Mac, Linux, etc.) because they operate within a web browser.


Standards compliance is an issue with any non-typical office document creator, which causes problems when file
sharing and collaboration become critical. Also, browser applications rely on application files accessed on remote
servers through the Internet. Therefore, when the connection is interrupted, the application is no longer usable. But if it
uses APIs such as Web Workers or DOM storage, it can be downloaded and installed locally for offline use.
Google Gears is a good example of a platform that improves the usability of browser applications.

Since many web applications are not open source, there is also a loss of flexibility, making users dependent on
third-party servers, not allowing customizations on the software and preventing users from running applications
offline (in most cases). However, if licensed, proprietary software can be customized and run on the preferred server of the
rights owner.

A typical web application starts when a user clicks a link in an HTML page and causes the browser to instigate an
HTTP request/response transaction with a web server, CGI program, and web application server. For the purposes of
this article, I will refer to the web server, CGI program, and web application server as the targeted server.

The targeted server receives the HTTP request and dispatches the request and its associated data (such as headers
and query parameters) to the desired resource residing on the targeted server. In the simplest case, the targeted
server responds with a static HTML page, which the browser interprets and presents to the user. In more complex
web applications, the targeted server must perform other functions such as the following:

Execute business logic

Retrieve data according to query parameters

Authenticate and authorize users

Reestablish a session from a former request

Exchange data with legacy systems

Build the HTML response page dynamically to be passed back to
the browser
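The last step in the list, building the HTML response dynamically, can be sketched as a pure function over the request's query string (the parameter name and page content are illustrative):

```python
from urllib.parse import parse_qs
from html import escape

def render_greeting(query_string):
    """Build an HTML page from a query parameter, escaping user input,
    as a targeted server might after parsing the incoming request."""
    params = parse_qs(query_string)
    name = escape(params.get("name", ["guest"])[0])
    return f"<html><body><h1>Hello, {name}</h1></body></html>"

print(render_greeting("name=Ada"))
print(render_greeting(""))  # no parameter: falls back to the default
```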

The Java 2 Platform, Enterprise Edition defines a set of architectures, technologies, and frameworks that simplify
and standardize distributed web application programming using the Java programming language. Java-based web
application servers incorporate some of these architectures, technologies, and frameworks, such as servlets, Java
Server Pages (JSP), and Enterprise JavaBeans (EJB), to provide a common programming platform for web
application programmers. WebSphere Application Server for NetWare is a Java-based web application server, thus
enabling you to use these technologies to build web applications.

The Project Plan

Writing the project plan provides a structured framework for thinking about how the project will be
conducted, and for considering the project risks. Ultimately you cannot write a plan until you have a plan.
Having a comprehensive plan may require the involvement of a range of functional experts, and it often
requires the involvement of decision makers.

A significant value of writing a project plan is the process rather than the outcome. It forces the players to
think through their approach and make decisions about how to proceed. A project plan may require
making commitments, and so it can be both a difficult and important part of establishing the project.

The project plan provides a vehicle to facilitate executive and customer review. It should make major
assumptions explicit and provide a forum for communicating the planned approach and for obtaining
appropriate approvals.

If the project team includes diverse organizations or ambiguous lines of authority and communication, it
may be useful to write a Project Management Plan to describe the roles and responsibilities of the various
organizational entities. It can also be used to communicate management systems and procedures to be
used throughout the project.

The requirements definition and specifications tell us what the project needs to accomplish. The Project
Plan should tell us how, when, by whom, and for how much.

If the project will be challenging, it is important to define and control the scope, schedule and cost so they
can be used as baselines for tracking progress and managing change. Defining the project management
baseline is the essence of a useful project plan.

There should be some version of the project plan written with wide scope, at a level of detail appropriate
for executive review. There should be top-level discussion of all the aspects of the project that one
wants to communicate to the senior customer or sponsor managers. The project plan should address
implementation, training, and support plans as well as plans for future growth and expandability.

The project plan should demonstrate that all aspects of the project have received careful thought and that
clearly defined plans for project execution and control have been formulated. The generation of plans at
the beginning of a project can be useful if they are concise and are used to set policy and procedures for
the conduct of different aspects of the project.

A large complex project may have many separate plans such as: Business Plan, Project Plan, Test Plan,
Acquisition Plan, Quality Assurance Plan, Integrated Logistics Support Plan, Public Relations Plan,
Training Plan, Software Development Plan, Project Management Plan, Marketing Plan, Risk
Management Plan, Process Development Plan, Systems Engineering Management Plan, Staffing Plan,
Communications Plan, Configuration Management Plan, Data Management Plan, Implementation Plan,
Customer Service Plan, and so on.

Of course, many small or straightforward projects will have very little formal planning documentation,
and developing a plan with extensive narrative would be pointless. The challenge is always to assess project
risks and apply project management practices only as needed to the risks of your specific project
environment. The Scalable Methodology Guide provides practical assistance in tailoring best-practice
project management techniques for the specific needs of smaller projects.

Web Team Roles

These are the main roles in a web team. Different web projects require different web teams, so you may not need all
these roles filled for your web project.

Content expert/client

Project manager

Information architect

Information designer

Instructional designer

Interaction designer

Visual designer

Content developer/web writer

Programmer

Content expert/client

The content expert is the person who knows the subject matter of the web site best. Very often the content expert is
you, the client. The responsibilities of the content expert are to: determine the project objectives, target audience and
user needs; contribute to site/information architecture; provide the content materials for the web site; provide
additional resources; and provide feedback on content, design and interactivity development in relation to the
subject matter.

Project manager

The project manager is responsible for planning and managing all the human and technological elements of a web
development project, from concept to completion. These responsibilities include: liaising with the content expert or
client; planning, budgeting and preparation; concept design and arranging user research; briefing and managing the
rest of the web team; overseeing the content, creative and technical development; overseeing site testing, release and
evaluation; bringing the project to completion, within the time frame and budget; and trouble-shooting.

Information architect

The information architect is responsible for the process where we help our client determine the structure of their site.
Information architecture includes deciding how to organise the content so that it makes most sense to the users; how
to link the pages together; and how to navigate through the pages.

Information designer

The information designer is responsible for the process through which information is configured for effective
communication. Information design helps make information understandable and easy to use by incorporating good
practice issues of graphic design, typography, systems and usability.

Instructional designer

Instructional design is a process whereby learning objectives are matched with the most appropriate instructional
practices and strategies. An instructional designer is responsible for creating engaging learning activities.

Interaction designer

The interaction designer is responsible for developing the user experience in a website or other digital medium,
focusing on elements such as navigation, layout and user interactions.

Visual designer

The visual designer is responsible for creating the overall look and feel of the web site and the visual style of the
individual web pages. The visual designer solves problems relating to graphic layout of web pages, typography,
colour selection, branding and overall consistency.

Common tasks include: image processing and web optimisation, developing HTML templates and browser/platform
testing of the HTML layouts. Where necessary the visual designer may create original animation and illustration.

During the concept design stages, the visual designer collaborates with the project manager, client and content
developer to determine the information needs of the users, and to come up with a look and feel for the user interface
that will work in an interactive form. During design production, the visual designer works with the content
developer and programmer to build the web site.

Content developer/web writer

The content developer/web writer creates the web site content, that is, the information presented on the web site. The
content developer/web writer's responsibilities depend on what kind of content is required, and may include:
assessing and analysing content needs and designing content layout solutions; researching content; developing and
writing new content or rewriting/editing existing content into a form that is appropriate for interactive media and
adds value to the content materials; HTML coding and validating; inserting copy into templates or content
management systems; processing images; and maintaining live web site content.

During content production, the content developer works with the content expert, information architect, visual
designer and programmer, and may provide advice relating to the presentation of site content.


Programmer

The programmer builds the web site's functionality, that is, the things it can do. A web project may need just
one or a number of different programmers, depending on the size of the site, what you want it to do, and which programming
languages are required to make it work.

Programmers use authoring software tools to bring a web site to life and make it work. For example, they code web
sites that can publish data dynamically to the web using combined HTML and database technologies, and they build
roll-over and fly-out navigation, interactive elements like games and forms, and user interfaces like embedded
content management systems.

The programmer may be involved during project planning, to provide technical advice on the proposed web site.
During production, the programmer works with the information architect, content developer and visual designer.

Other web team roles

You may also need:

Photographer

To take original photographs for the web site.

Evaluation expert

To conduct user research and user testing.

Compatibility/usability tester

To test the web site during the final stages of production, to make sure it works on a
variety of platforms and that there are no bugs.