Ruby On Rails - BRAC University

Internet and Web Applications

2 Feb 2013


In this paper I have discussed "Web-based software architecture and reliability." In today's world almost everything is linked with the internet. Almost every organization is getting its own web site; for the business world, a web site has become a major asset. Even for personal use, people rely on web sites more than ever, so web-based software development has also grown greatly, and with it the trend of creating software that is easier to understand and saves time.

Every day we see new software emerging, and some of it grows into an essential component of the ever-developing WWW. In this paper I have mentioned and discussed six new concepts which are leaving their marks on the world of web development. These are Ruby, Ruby on Rails, Web 2.0, Web 3.0, Automatic Programming, and POG (PHP Object Generator).



Ruby is an interpreted scripting language for quick and easy object-oriented programming. It is a programming language just like Perl, Python, or PHP. It successfully combines Smalltalk's conceptual elegance, Python's ease of use and learning, and Perl's pragmatism.

Ruby is a powerful and dynamic open source, object-oriented language. Ruby runs on many platforms, including Linux and many flavors of UNIX, DOS, Windows 9x/2000/NT, BeOS, and MacOS X. Ruby has adopted various features from many languages, including Perl, Lisp, and Smalltalk, yet it has become a language distinct from all of them.


The language was created by Yukihiro "Matz" Matsumoto, who started working on Ruby on February 24, 1993, and released it to the public in 1995. Ruby is named after the red jewel. The latest stable version of Ruby is 1.8.6, and Ruby 1.9 is in development with some major changes. Alternative implementations include JRuby, an attempt to port Ruby to the Java platform, and Rubinius, an interpreter modeled after self-hosting Smalltalk virtual machines. The main developers have thrown their weight behind the virtual machine provided by the YARV project, which was merged into the Ruby source tree on 31 December 2006 and will be released as part of Ruby 1.9.

Design Policy of Ruby

Principle of Conciseness

The creator of Ruby wants his computers to be his servants, not his masters, and thus would like to give them orders quickly. A good servant should do a lot of work with a short order.


Principle of Consistency

As with the uniform treatment of objects stated before, a small set of rules covers the whole Ruby language. Ruby is a relatively simple language, but it is not simplistic. Its design tried to follow the principle of "least surprise": Ruby is not too unusual, so a programmer with basic knowledge of programming languages can learn it very quickly.

Principle of Flexibility

Because languages are meant to express thought, a language should not restrict human thought, but should help it. Ruby consists of an unchangeable small core (that is, the syntax) and arbitrarily extensible class libraries. Because most things are done in libraries, you can treat user-defined classes and objects just as you treat built-in ones.

Because of these principles, programming in Ruby is far less stressful.

Features of Ruby:

Ruby has simple syntax, partially inspired by Eiffel and Ada.

Ruby has exception handling
features, like Java or Python, to make it easy to
handle errors.

Ruby's operators are syntactic sugar for methods, and you can redefine them easily.

Ruby is a complete, pure object-oriented language (OOL). This means all data in Ruby is an object, in the sense of Smalltalk: no exceptions. Example: in Ruby, the number 1 is an instance of class Fixnum.
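A quick illustration (note: `Fixnum` is the Ruby 1.8-era class named in the text; later Rubies fold it into `Integer`, so the printed class name depends on the version):

```ruby
# Even an integer literal is a full object: "+" is a method call,
# and numbers respond to methods of their own.
sum = 1.+(2)    # explicit method-call form of 1 + 2
puts sum        # 3
puts 1.class    # Fixnum on Ruby 1.8, Integer on modern Rubies
3.times { |i| puts i }  # prints 0, 1, 2
```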

Ruby's OO is carefully designed to be both complete and open for improvements. Example: Ruby has the ability to add methods to a class, or even to an instance, during runtime. So, if needed, an instance of one class *can* behave differently from other instances of the same class.

Ruby features single inheritance only, *on purpose*. But Ruby knows the concept of modules (called Categories in Objective-C). Modules are collections of methods. Every class can import a module and so gets all its methods for free. Some of us think that this is a much clearer way than multiple inheritance, which is complex and not used very often compared with single inheritance (don't count C++ here, as it often has no other choice due to strong type checking!).

Ruby features true closures: not just unnamed functions, but closures with real variable bindings.

Ruby features blocks in its syntax (code surrounded by '{' ... '}' or 'do' ... 'end'). These blocks can be passed to methods, or converted into closures.

Ruby features a true mark-and-sweep garbage collector. It works with all Ruby objects. You don't have to care about maintaining reference counts in extension libraries.

Writing C extensions in Ruby is easier than in Perl or Python, due partly to the garbage collector and partly to the fine extension API. A SWIG interface is also available.
Ruby can load extension libraries dynamically if an OS allows.

Ruby features OS-independent threading. Thus, for all platforms on which Ruby runs, you also have multithreading, regardless of whether the OS supports it or not, even on MS-DOS.

Ruby is highly portable: it is developed mostly on Linux, but works on many types of UNIX, DOS, Windows 95/98/Me/NT/2000/XP, MacOS, BeOS, OS/2, etc.

Target Problem Domains

Ruby's primary focus is productivity of program development, and users will find that
programming in Ruby is productive and even fun.

Text processing

Ruby's File, String, and Regexp classes help you process text data quickly and cleanly.

CGI programming

Ruby has everything you need to do CGI programming, including text-handling classes, a CGI library, database interfaces, and even eRuby (embedded Ruby) and mod_ruby for Apache.

Network programming

Network programming can be fun with Ruby's well-designed socket classes.

GUI programming

GUI toolkit interfaces such as Ruby/Tk and Ruby/Gtk are available.

XML programming

Ruby's text-handling features and the UTF-8-aware regular expression engine make XML programming handy in Ruby. An interface to the expat XML parser library is also available.


Prototyping

With its high productivity, Ruby is often used to make prototypes. Prototypes sometimes become production systems by replacing the bottlenecks with extensions written in C.

Programming education

You can teach students that programming is fun.

Ruby has two main implementations: the official Ruby interpreter, which is the most widely used, and JRuby, a Java-based implementation.

Ruby in contrast to other languages

Like Smalltalk, Ruby is a dynamic and pure object-oriented language. Both languages are dynamic because they do not use static type information. They are pure because all values are objects and are classified into classes that are objects themselves. Both were also designed to be object-oriented languages from the beginning, and they both support garbage collection.

In Smalltalk, control flow structures such as conditionals are done by sending messages to the objects; at least, that's how it appears. Sometimes this makes Smalltalk programs unnatural and hard to read.

In Ruby, control flow structure is far more conservative. Smalltalk is an operating system
and a programming environment. The program is basically an image within the

environment that is constructed through interaction via browsers. Unlike Smalltalk
programs, Ruby programs are clearly separated from the language and its interpreter.


Ruby and two other great "P" languages (Perl and Python) are often classified as scripting languages. They are scripting languages, but probably not in the sense that you imagine. They are scripting languages for these reasons:

They support a fast development cycle (edit, run, edit) by using interpreters. No compilation is needed.

They focus on quick programming by requiring you to code less. For example, you don't have to deal with static types of variables, and fewer declarations are needed in programs. Because of these attributes, these languages can be used for everyday one-liner tasks. Imagine developing a so-called one-liner (such as one scanning log files) in C, for example.

A strong set of built-in libraries supports the handling of text and files.

Unfortunately, by the word "scripting," many people imagine poor languages that can be used only for small programs. That was true in the past and is still true for some languages, such as csh. After Perl, scripting languages are languages that focus on quick development, although Perl still has the smell of old scripting attributes. So instead of thinking of Ruby as a scripting language, think of it as a "dynamic object-oriented language."

Unlike Perl, Ruby is a genuine object-oriented language; OOP features are not an add-on. Ruby uses less punctuation ($, @, %, and so on), less context dependency, and fewer implicit type conversions, so Ruby programs tend to be less cryptic.


On the Python newsgroup, questions/requests/complaints such as the following seem to
crop up from time to time:

I dislike code structuring by indentation.

Why doesn't Python have "real" garbage collection?


Why are there two distinct data types, list and tuple?

Separating types and classes is annoying. Why are not all values class instances?

Why is no method available for numbers, tuples, and strings?

Explicit conversion between small integers and long integers is annoying.

Maintaining reference counts in the extensions is tiresome and error-prone.

Of course, the above are not always problems. Many Pythoneers live happily with these attributes of Python, and some even consider them features. The creator of Ruby did not think that most of them would be removed from a future Python, but all of these are already solved in Ruby. From his point of view, he has provided "a better Python than Python."

Because Ruby supports a strong set of functions that are designed after Perl's, Ruby programs tend to be smaller and more concise than ones in Python. Ruby programs also often run faster than their Python equivalents, partly because the Ruby interpreter uses the method-cache technique.

Ruby includes many preferable features that help programmers enjoy programming. The language is good for a wide range of problem domains, from text-processing one-liners to a full-featured GUI mail user agent.

Ruby On Rails


Rails is a web-application and persistence framework that includes everything needed to create database-backed web applications according to the Model-View-Controller pattern of separation. This pattern splits the view or presentation into "dumb" templates that are primarily responsible for inserting pre-built data in between HTML tags. The model contains the "smart" domain objects, such as Account, Product, Person, and Post, that hold all the business logic and know how to persist themselves to a database. The controller handles incoming requests such as Save New Account, Update Product, and Show Post by manipulating the model and directing data to the view.

Everyone from startups to non-profits to enterprise organizations is using Rails. Rails is all about infrastructure, so it's a great fit for practically any type of web application, be it software for collaboration, community, e-commerce, content management, statistics, or anything else.
Rails works with a wealth of web servers and databases. For the web server, the recommended choices are Apache or lighttpd, running either FastCGI or SCGI, or Mongrel. For the database, we can use MySQL, PostgreSQL, SQLite, Oracle, SQL Server, DB2, or Firebird.


Ruby on Rails is a web application framework released in 2004. The aim of this framework is to increase the speed and simplicity with which database-driven web sites can be created. Ruby on Rails was extracted by David Heinemeier Hansson from his work on Basecamp, a project-management tool by the web-design company 37signals. The project was first released to the public in July 2004.

It has since been extended and improved by a core team of committers and hundreds of open-source contributors. Rails is released under the MIT License, and Ruby under the Ruby License.

Technical overview

Rails uses the Model-View-Controller (MVC) architecture for organizing applications.

MVC is basically the separation of these three concerns into three different layers.


Model: the information the application works with. The model is usually persisted to a database, but that is not necessary.

View: a representation of the model. Multiple views are possible for one model; in fact, that is one of the benefits of using an MVC pattern. In a web application this is usually an HTML page, but it can also be a Flash page or something else.

Controller: the controller defines what needs to happen on different events, like the user clicking a button. It usually changes one or multiple models and chooses the correct view for the model.

Fig: MVC (Model-View-Controller)

Rails provides scaffolding, a method of building database-backed software applications that can quickly construct most of the logic and views needed for a basic web site; the WEBrick web server (a Ruby library providing simple HTTP web server services); and other helpful development tools. Rails is also known for its extensive use of the JavaScript libraries Prototype, for Ajax (Asynchronous JavaScript and XML, a web development technique for creating interactive web applications), and, for visual effects.

Application of Ruby On Rails:

Basecamp is the original Ruby on Rails application from which the framework was extracted. Basecamp was launched by 37signals in February of 2004 and enjoys more than half a million people using the system. It's billed as Project Collaboration Utopia and is all about making internal and client projects a breeze.

Fig: Basecamp



43things is the brainchild of Josh, Daniel, and Erik of The Robot Co-op, whose vision is nothing short of helping people achieve their goals in life. The site was launched in August of 2004 and, together with 43places and 43people, it handles more than one million page views per day.

Fig: 43things


Penny Arcade is another Ruby on Rails application, although its forums are powered by vBulletin (copyright Jelsoft Enterprises Ltd.).


Fig: Penny Arcade


Comparison of Ruby On Rails with other web applications:

Ruby on Rails, Java-based (JSP/J2EE) and .NET (ASP.NET) web application frameworks are all MVC-based, but the Java and .NET frameworks are more mature and have more feature-rich APIs than Ruby on Rails.

Some of the key architectural features of Rails and traditional J2EE frameworks:

Rails and a typical J2EE Web stack

The figure below compares the Rails stack to a typical J2EE Web stack comprised of the Tomcat servlet container, the Struts Web application framework, and the Hibernate persistence framework.

Fig: Comparison of Rails and J2EE stacks

As you can see, the fundamental difference between the Rails stack and the components that make up a common J2EE-based Web application is small. Both have a container in which the application code executes; an MVC framework that helps to separate the application's model, view, and controller; and a mechanism to persist data.

The front controller

Struts' ActionServlet and Rails' DispatchServlet are both examples of the Front Controller pattern; as such, they both provide the same functionality. They accept HTTP requests, parse the URL, and forward processing of the request to an appropriate action. In the case of Struts, an action is a class that extends Action; for Rails, it is a class that extends ActionController. The main difference between the two front controllers is how they determine the action that processes a particular request.

Fig: URL mapping in Rails and Struts

The action and the model

In both Rails and Struts, the action acts as a bridge between the front controller and the
model. The developer provides an implementation of an action in order to provide
specific processing of a
request. The front controller is responsible for
accepting the request and passing it off to a specific action.

Fig: Rails and Struts action hierarchy

Struts requires that the developer extend Action and override execute() in order to process the request. In Rails, you must extend ActionController::Base in order to participate in the processing of a request. Rails doesn't pool instances of the ActionController; instead, it creates a new instance for each request.

Shortcomings of Ruby on Rails

There are some important considerations to take into account regarding the adoption of
Ruby on Rails at this stage:

EJB Remoting: Rails will not support EJB Remoting, a technology used to allow Java desktop applications to connect to server-side Java business objects.

Commercial support options: Commercial support options are vastly more limited in the Rails world than in (e.g.) the Java world. There are small vendors offering support for Rails, but nothing on the scale of JBoss, IBM, Oracle, etc. Open-source support (mailing lists, direct emails, IRC rooms, etc.) is typically quite friendly and useful, but cannot guarantee specific turn-around windows.



Maturity: The Rails project is a young project; just over a year has passed since the first complete versions of Rails were released to the public. Rails is reportedly two releases away from the "1.0" version, an important psychological milestone.

JBoss-style clustering: Rails does not use the JBoss application server model of application server clustering.

Database restart: As we understand the current state of Rails database connection handling, it appears that a database restart would likely require refreshing database connections in the Rails application. This may imply an application server restart.

Two-phase commit: Rails does not (yet) support a two-phase commit, i.e., transactions across multiple databases. This is difficult to perform in Java as well, though not impossible.

Library support: Library availability is often cited as a downside of Ruby. In contrast to Java, Perl, C++, and other popular environments, there are fewer code libraries available for use with Ruby.


Job market: A common warning against the adoption of Ruby and/or Rails is that the Ruby job market is dwarfed by the job markets of more mainstream technologies such as Java, and, consequently, that Ruby programmers are going to be scarce.
Reporting library: Ruby doesn't offer an off-the-shelf reporting tool. While database reporting is certain to be a requirement, it would be nice to have a Ruby package which could do reporting similar to off-the-shelf Java tools.


Criticisms of Ruby On Rails

Some claim that Rails is immature because Ruby has only recently taken off outside of Japan, and because Rails is so new there are definite maturity issues. For one thing, Ruby is slower than Perl or PHP. Partly this is due to the power and flexibility Ruby affords the programmer, but it's also due to a lack of optimization. The Rails platform itself can be unstable at times due to a lack of widespread testing, though this is changing quickly. Thousands of web developers are digging into Rails, and as a result the first wave of web hosts has already started offering Rails hosting. It is also said that Rails lacks industrial-strength features. This criticism often comes from J2EE or .NET developers who judge the quality of their web framework by the size and quantity of available libraries, and Rails simply cannot compete on these criteria. If you are writing a narrowly focused web application with large chunks of functionality requiring libraries that aren't available for Ruby or C, then Rails is likely to be unsuitable.




Web 2.0

Fig: Web 2.0 Meme Map

The Web is a highly interactive medium for consuming content and conducting business. A fundamental evolution of the Web that has made this possible has been termed "Web 2.0." The next iteration of the World Wide Web is a do-it-yourselfer's dream and a collaborator's paradise. It lets you tweak, tailor, and tune the cyberworld any way you like. It also links you to a community of like-minded surfers. It's Web 2.0, the new wave of online technology.

The technologies behind Web 2.0 provide a richer user experience and make use of information in unique ways. However, Web 2.0 is more than just technology, as it encompasses social interactions and a variety of business models. It is among the first concepts to combine technical, social, and business theories.

The table above provides illustrations of some Web 1.0 (first-generation) web technologies in a given sphere and their Web 2.0 equivalents.

Wikipedia cites three definitions for Web 2.0:

A transition of websites from isolated information silos to sources of content and functionality, thus becoming computing platforms serving web applications to end users.

A social phenomenon referring to an approach to creating and distributing Web content itself, characterized by open communication, decentralization of authority, freedom to share and re-use, and 'the market as conversation.'

A shift in economic value of the web, potentially equaling that of the dot-com boom of the late 1990s.

It would be a mistake to think of Web 2.0 as one technology or one business model. It is better to think of Web 2.0 as a loose term indicating a new generation of web sites, applications, and services.



Web 2.0 refers to a perceived second generation of Web-based communities and hosted services, such as social networking sites, wikis, and folksonomies, that facilitate collaboration and sharing between users.

Though the term suggests a new version of the Web, it does not refer to an update to Internet or World Wide Web technical specifications, but to changes in the ways the platform is used.


The term Web 2.0 was introduced by O'Reilly Media in 2004. O'Reilly Media titled a series of conferences around the phrase, and it has since become widely adopted. Earlier users of the phrase "Web 2.0" employed it as a synonym for the Semantic Web. According to Tim O'Reilly, Web 2.0 is about business embracing the web as a platform and utilizing its strengths. The concept of "Web 2.0" began with a conference brainstorming session between O'Reilly and MediaLive International.

Principles of Web 2.0:

There are seven principles of Web 2.0, derived by O'Reilly Media. The principles are:

The Web As Platform:

Software running as a service over the web (like Google), rather than on a desktop
computer (like Netscape).

Harnessing Collective Intelligence:

Aggregating information from lots of people rather than a few experts (Amazon's community of reviewers, Google's use of links to drive its search algorithms).

Data is the Next Intel Inside:

Owning a specialized database (Amazon's database of products, NavTeq's mapping data).

End of the Software Release Cycle:

High frequency of release of new functionality (Flickr deploying new builds up to every half an hour).

Lightweight Programming Models:

Simple development environments that are easy for consumers to re-use (Google's simple mapping interface).

Software Above the Level of a Single Device:

Lots of devices working together with the web (iTunes and iPods).


Rich User Experiences:

More dynamic user interfaces closing the gap between web and rich clients.

Web 2.0 Design Patterns:

Christopher Alexander prescribed a format for the concise description of the solution to architectural problems. Following his format:

The Long Tail

Small sites make up the bulk of the internet's content; narrow niches make up the bulk of the internet's possible applications. Therefore: Leverage customer self-service and algorithmic data management to reach out to the entire web, to the edges and not just the center, to the long tail and not just the head.

Data is the Next Intel Inside

Applications are increasingly data-driven. Therefore: For competitive advantage, seek to own a unique, hard-to-recreate source of data.

Users Add Value

The key to competitive advantage in internet applications is the extent to which users add their own data to that which you provide. Therefore: Don't restrict your "architecture of participation" to software development. Involve your users both implicitly and explicitly in adding value to your application.

Network Effects by Default

Only a small percentage of users will go to the trouble of adding value to your application. Therefore: Set inclusive defaults for aggregating user data as a side effect of their use of the application.

Some Rights Reserved

Intellectual property protection limits re-use and prevents experimentation. Therefore: When benefits come from collective adoption, not private restriction, make sure that barriers to adoption are low. Follow existing standards, and use licenses with as few restrictions as possible. Design for "hackability" and "remixability."

The Perpetual Beta

When devices and programs are connected to the internet, applications are no longer software artifacts; they are ongoing services. Therefore: Don't package up new features into monolithic releases, but instead add them on a regular basis as part of the normal user experience. Engage your users as real-time testers, and instrument the service so that you know how people use the new features.

Cooperate, Don't Control

Web 2.0 applications are built of a network of cooperating data services. Therefore: Offer web services interfaces and content syndication, and re-use the data services of others. Support lightweight programming models that allow for loosely coupled systems.

Software Above the Level of a Single Device

The PC is no longer the only access device for internet applications, and applications that are limited to a single device are less valuable than those that are connected. Therefore: Design your application from the get-go to integrate services across handheld devices, PCs, and internet servers.

Hierarchy of "Web 2.0" Applications

Level 3:

The application could only exist on the net, and draws its essential power from the network and the connections it makes possible between people or applications. These are applications that harness network effects to get better the more people use them. eBay, craigslist, Wikipedia,, Skype, and Dodgeball meet this test. They are fundamentally driven by shared online activity. The web itself has this character, which Google and other search engines have then leveraged. (You can search on the desktop, but without link activity, many of the techniques that make web search work so well are not available to you.) Web crawling is one of the fundamental Web 2.0 activities, and search applications like AdSense for Content also clearly have Web 2.0 at their heart. I had a conversation with Eric Schmidt, the CEO of Google, the other day, and he summed up his philosophy and strategy as "Don't fight the internet." In the hierarchy of Web 2.0 applications, the highest level is to embrace the network, to understand what creates network effects, and then to harness them in everything you do.

Level 2:

The application could exist offline, but it is uniquely advantaged by being online. Flickr is a great example. You can have a local photo management application (like iPhoto), but the application gains remarkable power by leveraging an online community. In fact, the shared photo database, the online community, and the artifacts it creates (like the tag database) are central to what distinguishes Flickr from its offline counterparts. And its fuller embrace of the internet (for example, that the default state of uploaded photos is "public") is what distinguishes it from its online predecessors.

Level 1:

The application can and does exist successfully offline, but it gains additional features by being online. Writely is a great example. If you want to do collaborative editing, its online component is terrific, but if you want to write alone, as Fallows did, it gives you little benefit.

Level 0:

The application has primarily taken hold online, but it would work just as well offline if you had all the data in a local cache. MapQuest, Yahoo! Local, and Google Maps are all in this category (but mashups like are at Level 3). To the extent that online mapping applications harness user contributions, they jump to Level 2.


Characteristics of Web 2.0

Web 2.0 is as much about business models and social interactions as it is about technology. In fact, some critics claim that one of the key differences between "Web 1.0" and the current web is the reduced significance of technology: many of the standards (XML[3], SOAP[4], and others) and infrastructure elements (such as Linux[5] or Apache[6]) used in Web 2.0 applications have been taken for granted as de facto established market standards for years.


Perpetual Beta:

The idea behind perpetual beta software is to release the software to the public, sometimes only as a crude working prototype, in order to solicit feedback from early users and critics, gauge the level of excitement about the idea, and establish a footprint in a rapidly growing market; or at least establish a footprint in a market that the company founders think might be growing rapidly.

In the Web 2.0 world, beta testing is almost the only type of testing done, and some organizations claim it is never finished. This is where the "perpetual" part of the term perpetual beta comes from.

Fig: The GMail Logo

For an example of perpetual beta software, consider GMail, Google's electronic mail service. GMail has been available since early 2004 and now counts several million users. It has gone through at least a dozen releases, each fixing bugs and adding new features. These releases are largely transparent to users, who at most see a notice at the bottom of their screen informing them of the new features. However, the capital "BETA" text that is part of the GMail logo on every screen serves as a constant reminder of, and caveat to, GMail's status.

User Network Effects:

Another key technical aspect of
Web 2.0 applications is that they rely on a
massive number of users in order to deliver value. Traditional applications need
user volume to drive profitability, but the value of the application to any one
individual is the same regardless of how many users

of the application there are.
For example, Microsoft Word is as useful to an individual author whether it is
used by one other person or a billion other people; sharing can always be done
using a lowest
denominator format such as plain text.

This effect is sometimes called Metcalfe's Law: the value of a system is proportional to the square of the number of users participating in the system. Web 2.0 companies tend to rely on this Law quite heavily, although its universality remains questionable.

Fig: Network Effects: Metcalfe's Law Suggests Value as the Square of Users.


Users as Developers:

Another underlying trend in Web 2.0 applications is the exposure of public application programming interfaces (APIs). While traditional organizations frequently expose APIs to partners and allies so that the latter may better design complementary products, Web 2.0 organizations expose their APIs to the general public. There are two goals for API exposure. The first is to attract developers, who are considered a special type of user, in the hope that these developers not only start using the product but also start advertising it on their blogs and via word of mouth. The second goal for exposing an API to the public is to stimulate creation of additional applications using the data and API provided by the organization.

One example of public API exposure is the Google Maps API.

Fig: The Google Maps Logo (with "BETA" as in perpetual beta)

Rich User Experience:

A richer user experience is a central part of Web 2.0 approaches. "Richer" in this context has two distinct meanings. The first is a better-designed, more intuitive, and more user-friendly graphical user interface. The other is a cleaner interface, one that is not excessively crowded with options and links.

A number of the technologies in Web 2.0 aim to make the user interface more like that of a rich client. One is called AJAX, for Asynchronous JavaScript And XML. It is a technology that allows dynamic replacement and updating of parts of a web page without requiring a full page refresh from the server.

An interesting AJAX Web 2.0 example is WickIT, a Belgian wiki provider.

Fig: WickIT

Granular Addressability of Content:

A related trend is "granular addressability of content," which refers to users being able to access only those parts of an application they want at the time they want them. For an anti-example, consider again Microsoft Word. The user has to open the full application to read even a small section of a document. It would be nicer if the user could instead open a lightweight client to view only the section of interest. This is the kind of functionality many Web 2.0 applications provide.

One typical technique used to implement this granular approach is user-customized RSS (Really Simple Syndication) feeds. For example, the Google News site not only provides pre-canned RSS feeds for categories such as sports or entertainment, but also allows the user to compose a search query using any key words, and then save the results of that query as an RSS feed whose new updates are delivered to the user's mailbox as they become available.
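The mechanics of such a feed are simple enough to sketch with the standard library; the feed content below is invented, but the shape is ordinary RSS 2.0:

```python
import xml.etree.ElementTree as ET

# A minimal, hand-written RSS 2.0 feed such as a saved news query might emit.
FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>News query: ruby rails</title>
    <item><title>Rails reaches 1.0</title><link>http://example.org/1</link></item>
    <item><title>Ruby 1.8.6 released</title><link>http://example.org/2</link></item>
  </channel>
</rss>"""

def item_titles(rss_text):
    # A feed reader shows just these granular slices of the site's content,
    # without the user ever opening the full site.
    root = ET.fromstring(rss_text)
    return [item.findtext("title") for item in root.iter("item")]

print(item_titles(FEED))  # ['Rails reaches 1.0', 'Ruby 1.8.6 released']
```

Because each item is independently addressable, a client can poll the feed and surface only what changed.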


Examples of Web 2.0

Wikipedia:

Fig: The Wikipedia Logo

It is very easy to create articles of varying quality, but it is also easy to improve them.

As the name suggests, Wikipedia is built on top of a wiki. A wiki is a web application that allows the easy creation and modification of content directly through a browser using a simple markup language. It is intended to promote group collaboration and thus participation, a key principle of Web 2.0. Wikipedia is the type of software that gets better as more people use it. Conceivably, if users find mistakes or problems with articles, they can easily correct them for the benefit of all.

Wikipedia has become one of the most visited web sites on the Internet by providing a very valuable service to users.

Google Maps:

Fig: The Google Maps Logo

The first application that "felt" different and helped differentiate Web 2.0 was Google Maps. Google Maps is an interactive map service that combines maps with context-sensitive information such as local restaurants. Google Maps has become an emblem for Web 2.0 applications.

Google Maps has many of the attributes of Web 2.0. The first, and perhaps most profound, was the use of AJAX, which allowed for a much richer client experience. Another key aspect of Google Maps which increased its adoption was the support of an open API.

del.icio.us:

Fig: The del.icio.us Logo

del.icio.us is a social bookmarking site that allows people to share their bookmarks, comment on them, and tag them. The primary Web 2.0 feature of del.icio.us is tagging. Tags allow users to essentially vote on the topics they think are most applicable to a particular web page. del.icio.us is another Web 2.0 application that is widely used, but is lacking a good business model. Some people have speculated that del.icio.us may be attractive mainly to technical users. However, this did not stop Yahoo! from purchasing it.


Flickr:

Fig: The flickr Logo

The best example of an application created by a startup that makes good use of Web 2.0 technology and has a viable business model is the photo-sharing site Flickr.

While there were many photo-sharing sites on the web, most restricted access to photos. Flickr took the opposite approach: they came up with as many ways as possible to allow people to access their photos.

Much like Google Maps, Flickr is the epitome of a Web 2.0 application.

Comparison between Web 2.0 and Ruby On Rails:

Web 2.0 is a collection of technologies and business models, whereas Ruby On Rails is a web application framework. Rails is a web application and persistence framework, and Ruby is the interpreted scripting language for quick and easy object-oriented programming.


Criticism of Web 2.0

Many of the ideas of Web 2.0 were already featured on networked systems well before the term "Web 2.0" emerged. Amazon.com, for instance, has allowed users to write reviews and consumer guides since its launch in 1995, in a form of self-publishing. Amazon also opened its API to outside developers in 2002.

On the other hand, when a web site promotes itself as "Web 2.0" for the use of some trivial features such as blogs or gradients, observers may generally consider it more an attempt at self-promotion than an actual endorsement of the ideas behind Web 2.0. "Web 2.0" in such circumstances has sometimes sunk simply to the status of a marketing buzzword, like "synergy".


Web 3.0 or Semantic Web


We are now in the tail end of Web 2.0 and are starting to lay the groundwork for Web 3.0, which fully arrives in 2010. In some respects, Web 3.0 is nothing more than a parlor game, ideas tossed out here and there. But at the very least, these ideas have roots in current trends. Many companies, from HP and Yahoo! to Radar Networks, are adopting official Semantic Web standards. Polar Rose and Ojos are improving image search. Google and Microsoft are moving toward 3D. No one can predict what Web 3.0 will look like. But one thing's for sure: it'll happen.

The semantic web is an evolving extension of the World Wide Web in which web content can be expressed not only in natural language, but also in a form that can be understood, interpreted and used by software agents, thus permitting them to find, share and integrate information more easily. Some elements of the semantic web are expressed as prospective future possibilities that have yet to be implemented or realized. Other elements of the semantic web are expressed in formal specifications. Some of these include the Resource Description Framework (RDF), a variety of data interchange formats (e.g. RDF/XML, N3, Turtle, N-Triples), and notations such as RDF Schema (RDFS) and the Web Ontology Language (OWL), all of which are intended to formally describe concepts, terms, and relationships within a given knowledge domain. For example, a computer might be instructed to list the prices of flat-screen HDTVs larger than 40 inches with 1080p resolution at shops in the nearest town that are open until 8pm on Tuesday evenings. To do this today requires search engines that are individually tailored to every website being searched. The semantic web provides a common standard (RDF) for websites to publish the relevant information in a more readily machine-processable and integratable form.

Nova Spivack, founder and CEO of Radar Networks, defined the semantic web in a long post on his blog:

"The Semantic Web is a set of technologies which are designed to enable a particular vision for the future of the Web: a future in which all knowledge exists on the Web in a format that software applications can understand and reason about. By making knowledge more accessible to software, software will essentially become able to understand knowledge, think about knowledge, and create new knowledge. In other words, software will be able to be more intelligent, not as intelligent as humans perhaps, but more intelligent than, say, your word processor is today."


Tim Berners-Lee, Ora Lassila and Jim Hendler are behind the concept of the Semantic Web. A 2001 Scientific American article introduced the concept to the world. Tim Berners-Lee originally expressed the vision of the semantic web as follows:


"I have a dream for the Web [in which computers] become capable of analyzing all the data on the Web: the content, links, and transactions between people and computers. A 'Semantic Web', which should make this possible, has yet to emerge, but when it does, the day-to-day mechanisms of trade, bureaucracy and our daily lives will be handled by machines talking to machines. The 'intelligent agents' people have touted for ages will finally materialize."


Currently, the World Wide Web is based mainly on documents written in HyperText Markup Language (HTML), a markup convention that is used for coding a body of text interspersed with multimedia objects such as images and interactive forms. The semantic web involves publishing the data in a language, the Resource Description Framework (RDF), designed specifically for data, so that it can be manipulated and combined just as can data files on a local computer.

The idea is to have machine-readable information shadowing the human-readable stuff. So if you have a page that says, "My name is Kate Long. Here's a picture of my daughter," the machine realizes that you are a person, that you have a first name and a last name, that you are the parent of another person, and that she's a female person. The level of information a machine needs would vary from application to application, but just a little of this could go a long way, as long as it can all be linked together. And the linking is the Web part of the Semantic Web. This is all about adding meaning to the stuff we put on the Web, and then linking that meaning together.
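The "Kate Long" page reduces to a handful of subject-predicate-object statements, which is all an RDF triple is. Below is a hand-rolled sketch; the identifiers and property names merely imitate FOAF-style vocabulary, and a real system would use an RDF library rather than plain tuples:

```python
# Each fact is one (subject, predicate, object) triple, RDF's core data model.
triples = {
    ("#kate", "rdf:type", "foaf:Person"),
    ("#kate", "foaf:firstName", "Kate"),
    ("#kate", "foaf:lastName", "Long"),
    ("#kate", "rel:parentOf", "#daughter"),
    ("#daughter", "rdf:type", "foaf:Person"),
}

def match(s=None, p=None, o=None):
    # None is a wildcard, roughly like a variable in a SPARQL query.
    return sorted(t for t in triples
                  if s in (None, t[0]) and p in (None, t[1]) and o in (None, t[2]))

# Which resources on this page are people?
print(match(p="rdf:type", o="foaf:Person"))
```

Linking happens when another page reuses the same identifiers: its triples about "#kate" merge naturally with these, with no schema negotiation required.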

Misconceptions about Web 3.0 or the Semantic Web:

The Semantic Web is not just a single Web. There won't be one Semantic Web; there will be thousands or even millions of them, each in their own area. They will connect together over time, forming a tapestry. But nobody will own this or run this as a single service.

The Semantic Web is not separate from the existing Web. The Semantic Web won't be a new Web apart from the Web we already have. It simply adds new metadata to the existing Web. It merges right into the existing HTML Web just like XML does, except this new metadata is in RDF.

The Semantic Web is not just about unstructured data. In fact, the Semantic Web is really about structured data: it provides a means (RDF) to turn any content or data into structured data that other software can make use of. This is really what RDF enables.

The Semantic Web does not require complex ontologies. Even without making use of OWL and more sophisticated ontologies, powerful data sharing and integration can be enabled on the existing Web using even just RDF alone.


The Semantic Web does not only exist on Web pages. RDF works inside of applications and databases, not just on Web pages. Calling it a "Web" is a misnomer of sorts: it is not just about the Web, it's about all information, data and applications.

The Semantic Web is not only about AI, and doesn't require it. There are huge benefits from the Semantic Web without ever using a single line of artificial intelligence code. While the next generation of AI will certainly be enabled by richer semantics, AI is not the only benefit of RDF. Making data available in RDF makes it more accessible, integratable, and reusable, regardless of any AI. The long-term future of the Semantic Web may well involve AI, but to get immediate benefits from RDF no AI is necessary.

The Semantic Web is not only about mining, search engines and spidering. Application developers, content providers, and end-users can benefit from using the Semantic Web (RDF) within their own services, regardless of whether they expose that RDF metadata to outside parties. RDF is useful without doing any data mining; it can be baked right into content within authoring tools and created transparently when information is published. RDF makes content more manageable and frees developers and content providers from having to look at relational data models. It also gives end-users better ways to collect and manage content they find.

The Semantic Web is not just research. It's already in use and starting to reach the market. The government uses it, of course. But so do companies like Adobe, and more recently Yahoo (Yahoo Food is built on the Semantic Web). And one flavor of RSS is defined with RDF. Oracle has released native RDF support in their products.

Technology of Web 3.0:

The semantic web comprises the standards and tools of XML, XML Schema, RDF, RDF Schema, OWL and SPARQL. The OWL Web Ontology Language Overview describes the function and relationship of each of these components of the semantic web:

XML provides a surface syntax for structured documents, but imposes no constraints on the meaning of these documents.

XML Schema is a language for restricting the structure and content elements of XML documents.

RDF is a simple data model for referring to objects ("resources") and how they are related. An RDF-based model can be represented in XML syntax.

RDF Schema is a vocabulary for describing properties and classes of RDF resources, with a semantics for generalization-hierarchies of such properties and classes.

OWL adds more vocabulary for describing properties and classes: among others, relations between classes (e.g. disjointness), cardinality (e.g. "exactly one"), equality, richer typing of properties and characteristics of properties (e.g. symmetry), and enumerated classes.

SPARQL is a protocol and query language for semantic web data sources.




A popular application of the semantic web is Friend of a Friend (FOAF), which describes relationships among people and other agents in terms of RDF.


The SIOC (Semantically-Interlinked Online Communities) project provides a vocabulary of terms and relationships that model web data spaces. Examples of such data spaces include: discussion forums, weblogs, blogrolls / feed subscriptions, mailing lists, shared bookmarks, image galleries, and others.


SIMILE is a joint project conducted by the MIT Libraries and MIT CSAIL that seeks to enhance interoperability among digital assets, schemata, vocabularies, ontologies, metadata, and services.

Comparison between Web 2.0 and Web 3.0:

Web 2.0 was really about upgrading the "front-end" and user experience of the Web. Much of the innovation taking place today is about starting to upgrade the "back-end" of the Web, and that will be the focus of Web 3.0. The front-end will probably not be that different from Web 2.0, but the underlying technologies will advance significantly, enabling new capabilities and features.

To have a sharper understanding of the stakes of Web 3.0, it is important to look at earlier models, to compare them with current (Web 2.0-oriented) models, and to anticipate the near future.

Web 1.0: an integrated experience

The first version of the modern web, the one corresponding to the end of the '90s, is basically based upon an experience integrated from beginning to end by big actors.

If we take the example of choosing and buying a cultural product (a book or a CD), one of the most complex online experiences, we can see that actors like Amazon are present on every link of the value chain:

products' discovery within home or orientation pages

evaluation with users' notes and reviews

purchase with wish lists or shopping cart

payment thanks to an integrated service.

Web 2.0: a collaborative and destructured experience

If we now look at power users, they have access to a much wider array of information sources and merchant services. These stand as new links which substitute for older ones in the value chain.

We are now observing major shifts in the user experience:

products are discovered in blogs, social networks, on recommendation engines like Pandora or within shopping communities like ShopWiki

choice can be validated on social shopping portals like Crowdstorm or on specialized sites like LibraryThing (for books) or Yahoo! Tech (for gadgets)

purchase can be made on shopping engines like the ones provided by Amazon (aStore), eBay (eBay Stores) or Zlio

payment can be made thanks to external systems like PayPal or Google Checkout.

While Web 3.0 might now have a concept to hang itself on, we will remain in the midst
of the Web 2.0 era for several more years. The semantic Web is still incubating and will
take many turns of the crank to become mainstream.

Web 3.0: an immersive and extended experience

If we anticipate growing innovative services, we can again identify new links for the value chain, which is no longer limited to the web. Users' buying experience will be more immersive but also extended outside of web browsers:

products' discovery could be made inside virtual worlds (like the ones from Habbo Hotel and Second Life), inside online gaming networks (like World of Warcraft or Xbox Live) or thanks to widgets (like those provided by Apple's Dashboard or Yahoo! Widgets)

products' evaluation could be based on independent services which rely on universal reputation management systems (as those provided by BazaarVoice, iKarma or Rapleaf)

purchase could be made on merchant services like Cooqy or through connected applications like the Mozilla Amazon Browser

finally, payment could be directly handled by the operating system (by using the upcoming CardSpace in Vista), on other devices (like mobile devices with Mobile PayPal) or with virtual means of payment (Linden Dollars for example, since banks are working hard on providing banking services in Second Life).


Criticisms of Web 3.0

Practical feasibility

Some critics question the basic feasibility of a complete or even partial fulfillment of the semantic web. Some approach the critique from the perspective of human behavior and personal preferences, which ostensibly diminish the likelihood of its fulfillment.

An unrealized idea

The original 2001 Scientific American article (from Berners-Lee) described an expectation of an evolution of the existing Web to a Semantic Web. Such an evolution has yet to occur; indeed, a more recent article from Berners-Lee and colleagues stated that: "This simple idea, however, remains largely unrealized."

Censorship and privacy

Enthusiasm about the semantic web could be tempered by concerns regarding censorship and privacy. For instance, text-analyzing techniques can now be easily bypassed by using other words, metaphors for instance, or by using images in place of words. An advanced implementation of the semantic web would make it a lot easier for governments to control the viewing and creation of online information, as this information would be much easier for an automated content-blocking machine to understand. In addition, the issue has also been raised that with the use of FOAF files and geolocation metadata, there would be very little anonymity associated with the authorship of articles on things such as a personal blog.

Doubling output formats

Another criticism of the semantic web is that it would be much more time-consuming to create and publish content, as there would need to be two formats for one piece of data. One format would need to be specialized for human viewing and the other would have to be specialized for machines. With this being the case, it would be much less likely for companies to adopt these practices, as it would only slow down their progress.


Automatic Programming

Automatic programming is an old term for any software system that generates code from high-level specifications. This is a category of artificial intelligence (AI) research, and includes programming-by-example (PBE) interfaces that generate programs from examples of input and output or traces of an execution. In computer science, the term automatic programming identifies a type of computer programming in which some mechanism generates a computer program rather than having human programmers write the code.

Computers that can program themselves are an old dream of Artificial Intelligence, but only recently has there been notable progress. In relation to machine learning, a computer program is the most powerful structure that can be learned, pushing the final goal well beyond neural networks or decision trees. There are currently many separate areas, working independently, related to automatic programming, both deductive and inductive.

Summary of "Strongly Typed Genetic Programming" by David J. Montana

Genetic programming is a powerful method for automatically generating computer programs via the process of natural selection (Koza, 1992). However, in its standard form, there is no way to restrict the programs it generates to those where the functions operate on appropriate data types. In the case when the programs manipulate multiple data types and contain functions designed to operate on particular data types, this can lead to unnecessarily large search times and/or unnecessarily poor generalization performance. Strongly typed genetic programming (STGP) is an enhanced version of genetic programming which enforces data type constraints and whose use of generic functions and generic data types makes it more powerful than other approaches to type constraint enforcement. After describing its operation, we illustrate its use on problems in two domains, matrix/vector manipulation and list manipulation, which require its generality. The examples are: (1) the multi-dimensional least squares regression problem, (2) the multi-dimensional Kalman filter, (3) the list manipulation function NTH, and (4) the list manipulation function MAPCAR.

In this paper the author discussed the use of a genetic algorithm to search through a space of possible computer programs for one which is nearly optimal in its ability to perform a particular task, and illustrates a method of automatic programming using genetic algorithms. The author also shed light on the five components of a genetic algorithm: representation, evaluation function, initialization, genetic operators, and parameters.


Summary of "Genetic Programming" by John R. Koza (Stanford University) and Riccardo Poli (Department of Computer Science, University of Essex, UK)

Genetic programming is a systematic method for getting computers to automatically solve a problem starting from a high-level statement of what needs to be done. Genetic programming is a domain-independent method that genetically breeds a population of computer programs to solve a problem. Specifically, genetic programming iteratively transforms a population of computer programs into a new generation of programs by applying analogs of naturally occurring genetic operations. Genetic programming is an extension of the genetic algorithm.
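The loop described here (generate random programs, score them against the task, keep the fittest, breed variants) can be shown on a toy scale. The sketch below evolves arithmetic expression trees toward the hypothetical target f(x) = x*x + x, using only mutation and truncation selection rather than the full crossover machinery of real genetic programming:

```python
import random

random.seed(1)
OPS = {"+": lambda a, b: a + b, "-": lambda a, b: a - b, "*": lambda a, b: a * b}

def rand_tree(depth=3):
    # A program is a tree: a leaf ("x" or a constant) or (op, left, right).
    if depth == 0 or random.random() < 0.3:
        return "x" if random.random() < 0.5 else random.randint(0, 4)
    return (random.choice(list(OPS)), rand_tree(depth - 1), rand_tree(depth - 1))

def evaluate(tree, x):
    if tree == "x":
        return x
    if isinstance(tree, int):
        return tree
    op, left, right = tree
    return OPS[op](evaluate(left, x), evaluate(right, x))

def fitness(tree):
    # Total error against the target program f(x) = x*x + x; lower is better.
    return sum(abs(evaluate(tree, x) - (x * x + x)) for x in range(-5, 6))

def mutate(tree):
    # Replace a random subtree with a freshly generated one.
    if not isinstance(tree, tuple) or random.random() < 0.2:
        return rand_tree(2)
    op, left, right = tree
    if random.random() < 0.5:
        return (op, mutate(left), right)
    return (op, left, mutate(right))

population = [rand_tree() for _ in range(60)]
initial_best = min(fitness(t) for t in population)
for generation in range(30):
    population.sort(key=fitness)
    survivors = population[:20]  # truncation selection keeps the fittest
    population = survivors + [mutate(random.choice(survivors)) for _ in range(40)]
final_best = min(fitness(t) for t in population)
print(initial_best, "->", final_best)
```

Because the fittest programs survive each generation unchanged, the best fitness can only improve or stay flat. Montana's strongly typed variant would additionally constrain which subtrees may legally replace which, shrinking the search space.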

The authors of this paper discussed the concept of genetic programming. They also wrote about advanced topics in genetic programming, including constrained syntactic structures, automatically defined functions, automatically defined iterations, loops, recursions and stores, architecture-altering operations, and the Genetic Programming Problem Solver.


Summary of "Approaches to Automatic Programming" by Charles Rich and Richard C. Waters

This paper is an overview of current approaches to automatic programming, organized around three fundamental questions that must be addressed in the design of any automatic programming system: What does the user see? How does the system work? What does the system know?

Automatic programming has been a goal of computer science and artificial intelligence since the first programmer came face to face with the difficulties of programming. As befits such a long-term goal, it has been a moving target, shifting to reflect increasing expectations.

The content of this paper provides us with the facts about automatic programming. The authors have discussed many myths in relation to automatic programming and also the system required for it.


Summary of "The Straight-Line Automatic Programming Problem" by Rajeev Joshi, Greg Nelson, and Yunhong Zhou, HP Systems Research Center

The paper presents a design for the Denali-2 superoptimizer, which will generate minimum-length machine code for realistic machine architectures using automatic theorem-proving technology: specifically, using E-graph matching (a technique for pattern matching in the presence of equality information) and boolean satisfiability solving. The paper presents a precise definition of the underlying automatic programming problem solved by the Denali-2 superoptimizer. It sketches the E-graph matching phase and presents a detailed exposition and proof of correctness of the reduction of the automatic programming problem to the boolean satisfiability problem.

In this paper the authors discuss the problem of automatic programming and present the design of a superoptimizer, Denali-2, which addresses it. The automatic programming problem is the problem of automatically finding a program that meets a given specification. For example, the problem of finding a C program that meets a specification given by a precondition and a postcondition is an instance of the automatic programming problem. The "planning problems" of Artificial Intelligence are also instances of the automatic programming problem.

Summary of "Automatic Programming: Properties and Performance of FORTRAN Systems I and II" by J.W. Backus

A brief general discussion of the goals and methods of automatic programming techniques is followed by a somewhat detailed description of the input languages of FORTRAN Automatic Coding Systems I and II. The statements of these input languages provide (in part) concise means for writing algebraic expressions, for specifying iterative repetitions of portions of a procedure, for referring to one-, two- and three-dimensional arrays of data, for defining new functions and procedures, and for specifying input and output procedures. Recent extensions of the input language are described. The FORTRAN system has been in use for a year and a half. Over half the instructions being written for some sixty 704 installations are thought to be produced by FORTRAN. The cost of programming and debugging is reduced by a factor of about 4 to 1. A few other statistics and applications are cited. The final section discusses the relationship of automatic programming and the mechanization of thought processes.

This paper informs us about the techniques of automatic programming. The author discussed the use of FORTRAN elaborately.


From the above research papers we get a picture of automatic programming and how it is becoming a reality. Even though the concept is very old, the actual implementation of automatic programming is yet to come. An easy way to explain the term automatic programming is: a server that will write computer programs for you.

Automatic programming describes a system where the computer itself will be able to create new programs. The computer program will analyze the situation and requirements and decide by itself how the program should be developed. Developers are trying to create a system which will be able to do that. Automatic programming is a major part of Artificial Intelligence (AI). It is yet to become reality, but it is the future.



PHP Object Generator

PHP Object Generator (POG) is an object-oriented code generator for PHP 4 and PHP 5 which uses the Object Relational Mapping (ORM) programming pattern to accelerate web development. ORM allows developers to 'forget' about the underlying database and instead think about their web application in terms of objects. Normally, implementing this programming pattern makes the application easier to maintain in the long run, but since the initial code is generated for you, POG also gives you a head start.

PHP Object Generator was created simply because there is a lack of good PHP code generators on the internet. Development efforts by other individuals or groups seem to be focused mostly on creating PHP frameworks, whereas we focus on making the best code generator possible.


POG started in 2005, when Active Record was gaining popularity. The POG community has since been growing steadily. POG has been featured on several PHP news websites such as International PHP Magazine and PHP 5 Magazine.

Features of POG:

POG (PHP Object Generator) saves the PHP developer time by generating tested and efficient PHP objects. The programming pattern behind POG is Object-Relational Mapping, also known as Active Record or Persistence Layer. Above all else, the generated code has been designed to be extremely clean, easy to understand and use.

Key features of POG are:

Generates clean & tested code

Generates CRUD methods

Generates setup file

Generates parent-child relations

Compatible with PHP4 & PHP5

Compatible with PDO

Automatic data encoding

Free Developer SOAP API

Free for personal use

Free for commercial use

Open Source

Supplies integrated CRUD for database and simplified querying

Supports multiple databases through different PHP Data Objects drivers

Supports generation of parent / child object relations

Techniques of POG:

The PHP objects are generated along with 5 CRUD methods. The first 4 CRUD methods allow you to Save(), SaveNew(), Get() and Delete() objects easily to and from your database. The 5th CRUD method, GetList(), allows you to retrieve a list of objects from the database that meet certain conditions.


The Save() CRUD method allows you to insert an object into your database. If the object already exists in the database, the object will be updated instead.

The Get() CRUD method allows you to retrieve an object from your database, and must be supplied the Id of the object you want to retrieve. Since POG objects map to rows in a database table, you can think of Get() as a method that allows you to fetch a specific row from your table, given that you specify the object id.

The SaveNew() CRUD method allows you to clone an object and save it to your database. SaveNew() can also be used in situations where you want to force an INSERT, rather than let POG decide whether to insert or update.

The GetList() CRUD method allows you to return a list of objects from your database using specific conditions. The GetList() method supports specifying multiple conditions, sorting and limiting the result set.

The Delete() CRUD method allows you to delete an object from your database.

The DeleteList() CRUD method allows you to delete all objects from your database that match certain conditions.
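POG emits PHP, but the Active Record pattern behind these methods is easy to sketch in Python with an in-memory SQLite database. The Book class and its single column are invented for illustration; the method names merely mirror POG's CRUD set:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE book (id INTEGER PRIMARY KEY, title TEXT)")

class Book:
    """Minimal Active Record sketch: one object maps to one table row."""
    def __init__(self, title, id=None):
        self.id, self.title = id, title

    def save(self):
        # Like POG's Save(): INSERT when new, UPDATE when the row exists.
        if self.id is None:
            self.id = conn.execute("INSERT INTO book (title) VALUES (?)",
                                   (self.title,)).lastrowid
        else:
            conn.execute("UPDATE book SET title = ? WHERE id = ?",
                         (self.title, self.id))
        return self.id

    @staticmethod
    def get(book_id):
        # Like Get(): fetch one object by primary key.
        row = conn.execute("SELECT id, title FROM book WHERE id = ?",
                           (book_id,)).fetchone()
        return Book(row[1], row[0]) if row else None

    @staticmethod
    def get_list(title_like="%"):
        # Like GetList(): fetch every object matching a condition.
        rows = conn.execute("SELECT id, title FROM book WHERE title LIKE ?",
                            (title_like,))
        return [Book(title, id) for id, title in rows]

    def delete(self):
        # Like Delete(): remove the backing row.
        conn.execute("DELETE FROM book WHERE id = ?", (self.id,))

book = Book("Programming Ruby")
book.save()
print(Book.get(book.id).title)  # Programming Ruby
```

The calling code never writes SQL itself, which is exactly the convenience POG's generated classes provide to PHP developers.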

A working demo can be found at


POG is a new concept. The reason behind the making of POG is to save time while programming. When you are using POG you don't have to write any SQL queries to do simple operations such as inserts, retrievals, etc. POG will help you get started faster and make your web application easier to maintain in the long run.



Ruby on Rails:

1. Suresh Mahadevan, Making Use of Ruby, Wiley Publishing, Inc.

2. David A. Black, Ruby for Rails, Manning, Greenwich

3. Chad Fowler, Rails Recipes, The Pragmatic Bookshelf, Raleigh, North Carolina, Dallas

4. Dave Thomas with Chad Fowler and Andy Hunt, Programming Ruby

5. John McCreesh, Four Days On Rails

Tutorials:

1. Jeffrey Hicks, A Many-to-Many Tutorial for Rails, September 4, 2005

Related Links:

Who is already on Rails?

Start At The Beginning

What's Ruby

Everything on Ruby on Rails

What's all the fuss about Ruby On Rails? The Joel on Software Discussion Group



Web 2.0:

Related Links:

What Is Web 2.0? O'Reilly

The new web: Rewards and risks for businesses

Web 2.0 Workgroup

Y Combinator, Does "Web 2.0" mean anything?

Dharmesh Shah, Ilana Davidi, Yoav Shapira, and Robbie Allen, Web 2.0: Hype, Reality, or the

The amorality of Web 2.0

Web 2.Clueless, Discussion Group


Web 3.0:

Related Links:

Stephen Downes, Why the Semantic Web Will Fail, Moncton, New Brunswick, CA

Evolving Trends, Web 3.0: Basic Concept

Semantic Web

Dan Farber & Larry Dignan, Web 2.0 isn't dead, but Web 3.0 is bubbling up

Article, Wikipedia 3.0: The End of Google?

Automatic Programming:

2. Rajeev Joshi, Greg Nelson, Yunhong Zhou, The Straight-Line Automatic Programming Problem, HP Laboratories Palo Alto, November 20, 2003

3. David J. Montana, Strongly Typed Genetic Programming, Cambridge, November 20, 2002

4. Charles Rich and Richard C. Waters, Approaches to Automatic Programming, Mitsubishi Electric Research Laboratories, Cambridge Research Center, July 1992

Tutorials:

1. Ricardo Aler, Genetic Programming, Automatic Inductive Programming Tutorial, ICML'06

2. Ricardo Aler, Automatic Design of Algorithms Through Evolution, Automatic Inductive Programming Tutorial, ICML'06

Related Links:

Automatic Programming Server, Demo Software

CS 394P: Automatic Programming


PHP Object Generator:

Related Links:

1. PHP Object Generator Blog

2. PHP Object Generator

3. POG, Home Page

4. POG Museum

5. Introduction t