GIS AND THE INTERNET
Compared with many other computer applications, GIS tends to be quite resource-intensive. Its development until
the early- to mid-1990s tended to be confined to fairly large computers (i.e. mainframes and workstations). However,
with the development of more powerful PCs, desktop GIS became a more practical proposition by the mid-1990s.
This opened the benefits of GIS to a much wider (although still technically skilled) user base.
A similar development took place around the same time with regard to the Internet. The growth of the World Wide
Web made the Internet accessible to a much larger number of users. It was only a matter of time before the two
technologies would converge. Further developments, especially with regard to processing power and bandwidth,
resulted in web-based GIS becoming feasible in the late 1990s. This introduced some of the benefits of GIS to a
much wider audience.
It is anticipated that the further diffusion of 3G (third generation) and 4G telecommunications could result in a big
increase in location-based services (based on GIS principles) over the next few years.
Today we will look at some of the issues and opportunities associated with combining Internet and GIS technologies.
We will begin by looking at the Internet and World Wide Web in fairly general terms, and then look at web-based
mapping and GIS. We will then look at the implications of the free software movement for GIS. The final section
then assesses some of the opportunities provided by these developments.
THE INTERNET AND WORLD WIDE WEB
The Internet is a network of networks. The Internet can be traced back to a network called ARPANET (Advanced
Research Projects Agency Network) which was developed in 1969 to connect US universities, military and defence
contractors. ARPANET initially allowed researchers to run programs on each other’s computers, but file transfers
and email followed soon after. By 1973 other networks had developed, so a programme called the Internetting
Project was established to link these networks together. This was made possible by the development of TCP/IP
(Transmission Control Protocol / Internet Protocol) in 1974.
Other networks developed using different protocols, such as CSNET (developed in the 1970s to connect Computer
Science Departments), USENET (using UUCP – Unix to Unix Copy Protocol), and BITNET (created in the 1980s
to connect US universities using NJE/NJI protocols). These networks were gradually integrated into the Internet
through gateways. By 1985, 100 networks were connected; by 1989 it had risen to 500; and by 1991 it was over
2,000. Now? Who knows?
The World Wide Web
Different types of service on the Internet use different protocols. As late as the early 1990s there was still quite a
steep learning curve for those who wished to use the Internet. This changed almost overnight in 1991 with the
introduction of the World Wide Web. The World Wide Web was developed by Tim Berners-Lee at CERN in
Switzerland. Its distinguishing feature was the use of hypertext links (originally developed by Ted Nelson as part of a
project called XANADU) to provide easy access to other documents.
The World Wide Web represented a ‘dumbing down’ of the Internet and made it accessible to anyone who had a
suitable browser. The initial text-only browsers were soon superseded by visual browsers such as NCSA Mosaic.
Microsoft entered the market with Internet Explorer in 1995, whilst Marc Andreessen, one of the NCSA Mosaic
developers, developed Netscape (which in turn spawned Mozilla and Firefox). These transformed the Internet from
being the preserve of a computer literate minority to easy access by the general public. It also became much more
commercialised as new companies sprouted up trying to find new ways to cash in.
Hypertext Markup Language
Traditionally all Web pages were written using HTML (Hypertext Markup Language). The computer hosting the
web pages (i.e. the server) sends the HTML document to the browser (i.e. the client) which then ‘translates’ the
HTML to create the effects intended by the author. Different browsers, however, may interpret the document slightly
differently.
An HTML document basically contains the text which is to be displayed on the page with embedded commands in
angled brackets. These commands (or tags) usually come in pairs: one marks where the command begins (e.g. <I>)
and the other marks where it ends (e.g. </I>). Many tags contain additional qualifying parameters – e.g. <A
HREF="file.htm"> Click here </A>. The words ‘Click here’ would appear on the web page as a hyperlink. If you
click on the link, the browser will open the file ‘file.htm’. Other tags allow you to specify where photos or other
graphics should be placed.
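As a simple illustration of this 'translation' step, the following Python sketch (standard library only; the sample page is invented) parses HTML tags much as a browser's first pass does, collecting hyperlink targets from <A> tags:

```python
# A browser-like first pass over HTML: walk the tags and collect the
# HREF target of each <A> tag. The sample page is invented.
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collects hyperlink targets, as a browser would when rendering."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        # html.parser reports tag and attribute names in lower case,
        # so <A HREF="..."> arrives here as ("a", [("href", "...")]).
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

page = '<I>Welcome.</I> <A HREF="file.htm">Click here</A>'
parser = LinkExtractor()
parser.feed(page)
print(parser.links)   # ['file.htm']
```

A real browser of course goes much further, building a render tree and applying styles, but the tag-pairing logic is the same.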
Dynamic Web Pages
The early web pages were static. Although they might be enlivened using animated gifs, the actual content of the
pages was fixed. This was found to be unduly restrictive and so various methods were developed to make pages
dynamic. For example, if a company advertised the prices of its products on a web page, then it would have to edit
the HTML source every time the price of one of its products changed. It is obviously more convenient to store the
price details in a database which can be updated in the normal manner but which can be read by the web server when
it receives a request for the price of an item from a browser. If the information is updated in the database, then the
changes will automatically be reflected on the web pages.
Information input by users can likewise be collected from the client browsers (e.g. using a form or questionnaire) and
saved to a database on the server. Dynamic web pages permit other types of processing. For example, many web
pages ask your opinion on some issue, and then display the results of the poll to date.
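A minimal sketch of the database-backed approach in Python, with SQLite standing in for the company's database (the product table and prices are invented): the HTML is generated on request, so an update to the database appears on the page automatically.

```python
# Server-side dynamic content: prices live in a database, and the HTML
# is generated afresh for each request. Table and values are invented.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE prices (product TEXT, price REAL)")
conn.executemany("INSERT INTO prices VALUES (?, ?)",
                 [("Widget", 9.99), ("Gadget", 24.50)])

def price_page(product):
    """Build the HTML fragment for one product from the live database."""
    row = conn.execute("SELECT price FROM prices WHERE product = ?",
                       (product,)).fetchone()
    return "<P>%s: EUR %.2f</P>" % (product, row[0])

print(price_page("Widget"))          # <P>Widget: EUR 9.99</P>

# Updating the database 'in the normal manner' changes the page served:
conn.execute("UPDATE prices SET price = 11.99 WHERE product = 'Widget'")
print(price_page("Widget"))          # <P>Widget: EUR 11.99</P>
```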
There are a number of technologies for providing dynamic content. One major division is whether they entail client-
side or server-side processing. Client-side processing means that the browser does the actual processing. For
example, small programs written in Java called Java applets can be embedded in a web page. If the browser is Java-
enabled (i.e. if it contains a Java virtual machine), then these mini-programs can be run inside the browser. Dynamic
HTML (DHTML) also allows alternative sections of code to be processed depending upon circumstances (e.g. a
response from the user).
In the case of Server-side processing the server does the processing and simply sends the results to the client.
Server-side technologies include SSI (Server Side Includes), ASP (Active Server Pages), JSP (Java Server Pages),
CFML (Cold Fusion Markup Language) and CGI (Common Gateway Interface). SSI allows you to call in other
HTML files. ASP is similar, except the additional sections are written in a scripting language such as VBScript.
CGI programs, in contrast, generate the entire page rather than handling just the dynamic parts, and may be written
in a variety of languages, although PHP (Hypertext Preprocessor), Perl, Python and Ruby are probably the most popular.
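The common thread in these server-side technologies is that the server runs a program and relays its output to the client. A simplified CGI-style sketch in Python (the function and its argument are assumptions for illustration; a real CGI script reads the query string from environment variables and prints to standard output):

```python
# CGI-style server-side processing in miniature: the program emits an
# HTTP header, a blank line, then the HTML body; the web server simply
# relays this to the client. The greeting logic is invented.

def respond(environ):
    """Generate a complete response for one request."""
    name = environ.get("QUERY_STRING", "world")
    body = "<HTML><BODY><P>Hello, %s!</P></BODY></HTML>" % name
    return "Content-Type: text/html\n\n" + body

print(respond({"QUERY_STRING": "GIS"}))
```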
The early dynamic web sites allowed for input from users, but did not permit them to put up information for others to
view. This is sometimes referred to as Web 1.0. However, in recent years more and more sites allow users to upload
their own content (e.g. blogs, social networking, wikis etc.), referred to as Web 2.0.
Given that the links in an HTML document could link to another document anywhere in the world, and given that any
document could be read by a browser anywhere in the world, there is an obvious need for protocols and standards if
computers in different parts of the world are to communicate. There is also a need for these standards and protocols
to be revised to take account of technological advances. The standards and protocols for the World Wide Web are
controlled by a body known as the World Wide Web Consortium (W3C). There is a similar need for agreed
standards in GIS if data produced by one GIS system are to be compatible with data from other systems. This is
controlled by a body known as the Open Geospatial Consortium (OGC). In the US the Federal Geographic Data
Committee (FGDC) was set up in the early 1990s to determine standards, especially with regard to metadata.
HTML 4 is the current standard for HTML promoted by W3C, although the specifications for HTML 5 are
currently at a very advanced stage. HTML 4 attempted to introduce a separation between content and style. The
content is controlled by HTML, whilst it is recommended that the style should be controlled by CSS (Cascading
Style Sheets). CSS allows authors to specify how their pages should look, but their suggestions can be overruled at
local level by the client. By separating the stylistic aspects from the structure, the structural aspects are now more
universal. W3C has suggested that several tags should be ‘deprecated’ (i.e. phased out). Microsoft and Netscape
were both slow to implement the HTML 4.0 standards. Progress is also hindered by the need to retain support for
older commands for backward compatibility. Opera, a Norwegian rival to the big two, was the first to fully support
HTML 4.0. The most recent versions of Firefox (version 1.0 of which was released in 2004) and Google Chrome
(released in 2008) are already largely HTML 5.0 compatible, whilst Internet Explorer 9 (due in 2011) will also be largely compatible.
W3C also approved the XML (Extensible Markup Language) specification in 1998. This was designed to overcome
the deficiencies in HTML. Because HTML is primarily designed for formatting text in web pages, it does not
transfer very readily to other media. XML, in contrast, provides a generalised way to describe the basic structure of
the data, independent of how the data are to be used or presented to users – i.e. the information content is explicitly
separated from how it is presented. This in turn allows the information to be disseminated to a wide variety of
devices (i.e. in addition to web browsers, it could be used by PDAs (Personal Digital Assistants), cell phones, car
navigation systems, etc.).
Like HTML, XML encloses text in tags, but the tags define data types rather than how the data are to be displayed
(i.e. formatting). XML files are typically accompanied by a DTD (Document Type Definition) file which defines the data
types. To be published on the web, the XML files must also be accompanied by style sheets which define how the
data should be displayed. The style sheets may be written in various languages, including CSS and XSL (eXtensible
Stylesheet Language). Internet Explorer 5.0 was the first browser to support XML. This translated the XML code
into HTML which was then displayed using CSS.
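A small Python sketch of this separation of content from presentation (the gazetteer vocabulary is invented for illustration): the same XML content is rendered one way for a web browser and another way for, say, a phone display.

```python
# XML describes what the data IS, not how to show it; different devices
# can then present the same content differently. Element names invented.
import xml.etree.ElementTree as ET

doc = ET.fromstring(
    "<gazetteer>"
    "<place><name>Maynooth</name><county>Kildare</county></place>"
    "<place><name>Naas</name><county>Kildare</county></place>"
    "</gazetteer>")

def as_html(tree):
    """One presentation of the content: an HTML list for a web browser."""
    items = ["<LI>%s (Co. %s)</LI>" % (p.findtext("name"),
                                       p.findtext("county"))
             for p in tree.iter("place")]
    return "<UL>" + "".join(items) + "</UL>"

def as_text(tree):
    """Another presentation of the same content, e.g. for a small screen."""
    return ", ".join(p.findtext("name") for p in tree.iter("place"))

print(as_text(doc))    # Maynooth, Naas
```

In practice the presentation rules would live in a style sheet (CSS or XSL) rather than in code, but the principle is the same: one content file, many renderings.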
W3C also developed XHTML (Extensible HyperText Markup Language). This is a reformulation of HTML 4 in
XML, which ‘combines the strength of HTML 4 with the power of XML’. It was envisaged that XHTML will
eventually supersede HTML.
XML may be thought of as a language for defining other languages. In other words, different interest groups can
define their own specialised data types using the DTD. This, in effect, is what the OGC has done. It has defined
GML (Geography Markup Language), which is a protocol for encoding geospatial data conforming to XML
standards. GML is vendor neutral and can be used with any kind of geospatial data. It can also be used with almost
any method of processing or display. Each device receiving GML data can process and display the data in its own
way. Ordinary XML-enabled web browsers can display GML maps as maps without too many modifications, but
GML maps could also be displayed by other devices (e.g. mobiles phones) or in other formats (e.g. text). OSGB was
an early adopter of GML for the format to be used for the distribution of its digital data.
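As a rough sketch of how a receiving device might process GML in its own way, the following Python extracts the coordinates from a simplified GML 2-style point; the fragment and its coordinate values are illustrative only, and real GML documents carry application schemas and richer geometry.

```python
# Each device receiving GML can process it in its own way. Here we pull
# the coordinates out of a (simplified) GML 2-style point; what the
# device then does with them - plot, speak, list - is up to the device.
import xml.etree.ElementTree as ET

GML = "http://www.opengis.net/gml"
fragment = ('<gml:Point xmlns:gml="%s">'
            '<gml:coordinates>-6.59,53.38</gml:coordinates>'
            '</gml:Point>' % GML)

point = ET.fromstring(fragment)
coords = point.findtext("{%s}coordinates" % GML)
x, y = (float(v) for v in coords.split(","))
print(x, y)    # -6.59 53.38
```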
WMS (Web Map Service) was adopted by OGC. This specifies the request and response protocols for open web-
based map client/server interaction. WMS allows users to access and combine maps as 'pictures' from multiple map
servers (e.g. different vendors) in a single session. The WCS (Web Coverage Service) standard is another OGC standard
which extends WMS principles to coverages (i.e. raster data sets). The WFS (Web Feature Service) standard allows
a client to retrieve and update geospatial data (i.e. vector data) encoded in Geography Markup Language (GML)
from multiple Web Feature Services.
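On the wire, the client's side of a WMS interaction is just a parameterised HTTP request. The sketch below composes a GetMap URL using WMS 1.1.1-style parameters; the server address and layer name are invented for illustration.

```python
# A WMS GetMap request is an HTTP GET with standardised parameters;
# the server replies with a map 'picture' (here, a PNG). The base URL
# and layer name are invented.
from urllib.parse import urlencode

def getmap_url(base, layers, bbox, width, height):
    """Compose a WMS 1.1.1-style GetMap request URL."""
    params = {
        "SERVICE": "WMS", "VERSION": "1.1.1", "REQUEST": "GetMap",
        "LAYERS": ",".join(layers), "SRS": "EPSG:4326",
        "BBOX": ",".join(str(v) for v in bbox),
        "WIDTH": width, "HEIGHT": height, "FORMAT": "image/png",
    }
    return base + "?" + urlencode(params)

url = getmap_url("http://example.com/wms", ["counties"],
                 (-11.0, 51.0, -5.0, 55.5), 600, 450)
print(url)
```

Because every conformant server understands the same parameters, a client can request layers from several vendors' servers and overlay the returned images in one session.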
Apart from GML, other XML standards have been devised for general vector graphics (i.e. not necessarily
geographical data) such as SVG (Scalable Vector Graphics), VML (Vector Markup Language) and X3D. To view
the data you require a suitable viewer. Internet Explorer 5.0 upwards can handle VML, Adobe has developed a
browser plug-in for SVG, whilst there are several other SVG viewers available. GML data can be transformed into
each of these formats for display purposes.
In the US the FGDC has devised a set of guidelines for US federal government agencies called the NSDI (National
Spatial Data Infrastructure) initiative. This specifies the type of information that should be included in metadata, and
the format within which it should be presented. This facilitates easier searching of metadatabases. FGDC developed
the NSDI in close co-operation with ISO (the International Organisation for Standardisation), so the NSDI could
become the universal standard. An Irish Spatial Data Infrastructure (ISDI) is currently being developed by a working
group in conjunction with the National Spatial Strategy.
WEB-BASED MAPPING AND GIS
Degrees of Complexity
Web-based mapping merges web technology with GIS technology. Web-based mapping, in its simplest form, may
simply involve serving previously drawn maps as TIFF or JPEG images. However, users are increasingly provided
with more control over what data are to be displayed and how they are to be depicted – i.e. web-based mapping is
being superseded by web-based GIS. One can envisage a progression in terms of degrees of complexity:
Static maps – i.e. maps are downloaded as images such as TIFFs or JPEGs. These images may be made more
interactive by embedding ‘hot links’ – i.e. if a user clicks on a particular area in a small scale map it could be
linked to a larger scale image showing more detail. However, the maps are basically static.
Dynamic maps – i.e. the maps are updated on a regular basis with more up to date data (e.g. weather maps,
traffic maps). However, the user does not have any input into what is displayed (except possibly to request an older map).
Simple interactive maps. These are simple maps drawn in response to a user query. For example, the user might
type in an address and the server would respond by drawing a map centred on the address entered. The map
might display several pre-defined themes, each clipped to the area defined by the user.
User-specified themes. The next level of sophistication is to provide users with control over which themes
should be displayed. The themes are generally selected from a list provided by the server. The process is
analogous to selecting which themes to display by clicking the display boxes in the table of contents in ArcGIS.
Users may also be provided with tools for zooming and panning.
User-added content. In addition to switching pre-defined layers on or off, the next step was to provide facilities
for users to upload new content of their own (e.g. the Google Maps API was introduced in 2005). The new
content may take the form of new layers or it may be other media (e.g. photos, video clips).
Web-based GIS. In this case the user may not only be allowed to select which data should be used to create the
themes, but may also be allowed to query the database using GIS tools, querying either the attribute data (e.g. display
only features with particular attributes) or the spatial data (e.g. buffer zones). Users might also be given more control
over how the features should be rendered (e.g. colours, symbolisation, line thickness, etc.).
The above do not represent discrete steps, but are points along a continuum.
ISSUES FOR DEVELOPERS
Web-based mapping/GIS generally tends to assume only minimal levels of expertise on the part of the end users.
However, setting up web-based GIS systems can be complicated and is not for the faint hearted. Even using
proprietary server software you may need a degree of expertise in related technologies, such as Cold Fusion
(Autodesk Mapguide), or Active Server Pages (GeoMedia Web Map). You may also need to know how to
administer web servers, create firewalls, set up dynamic web sites, access databases (e.g. ODBC), and work with
unfamiliar formats (e.g. CGM – Computer Graphics Metafile).
There are a number of issues which you need to consider if choosing software to set up a web-based GIS site (apart
from the more obvious issues of cost, compatibility with existing software, etc.):
Server-side or client-side processing? Suppose, for example, the user wishes to zoom in. Server-side
processing would mean the server draws the new map and then sends it to the client, whereas client-side
processing would mean that the client can redraw the map itself using a plug-in or maybe a Java applet
previously downloaded from the server. Server-side processing requires repeated downloads, although each
download will generally be fairly fast. However, if there are a large number of users accessing the server at the
same time, response times may be very slow because the server is tied up processing multiple requests
simultaneously. Client-side processing results in a slower initial download, but response times are subsequently
much faster. Plug-ins are typically about 2Mb and therefore may take a long time to download over slow connections.
Java applets are typically much smaller and therefore download quicker. However, Plug-ins only need to be
downloaded once, whereas Java applets need to be downloaded each time they are used. Many plug-ins and Java
applets now allow ‘smart’ data to be downloaded, providing much more flexibility at the client end.
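To make the server-side case concrete, the following Python sketches the extent calculation a server might perform when a user clicks to zoom in; the 2x zoom factor and the function name are assumptions, and a real server would go on to redraw the map for the new extent.

```python
# Server-side zoom: the client reports the clicked point, the server
# computes a new map extent and redraws. This sketches just the extent
# calculation, assuming a simple 2x zoom centred on the clicked point.

def zoom_bbox(bbox, cx, cy, factor=2.0):
    """Return a new (xmin, ymin, xmax, ymax) extent centred on (cx, cy),
    with the width and height each reduced by `factor`."""
    xmin, ymin, xmax, ymax = bbox
    half_w = (xmax - xmin) / (2.0 * factor)
    half_h = (ymax - ymin) / (2.0 * factor)
    return (cx - half_w, cy - half_h, cx + half_w, cy + half_h)

print(zoom_bbox((0.0, 0.0, 100.0, 80.0), 50.0, 40.0))
# (25.0, 20.0, 75.0, 60.0)
```

With client-side processing the same calculation (and the redraw) would instead run in a plug-in or applet, avoiding the round trip to the server.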
Do you need to accommodate local data? Several map servers now allow local data (i.e. data provided by the
user) to be brought into the map browser. For example, if a server provides demographic data, it may be
desirable to allow potential customers (e.g. companies) the option of overlaying their own data (e.g. sales
territories). In other instances this may be unnecessary. The choice of software will determine what is or is not
possible (and vice versa).
Do you need to protect data against copying? Most servers allow the user to save the downloaded map.
However, in some instances it may be necessary to prevent the data which is downloaded from being copied (e.g.
it may be protected by copyright). Some servers permit data to be viewed but not saved. Some even encrypt the
data to prevent it being accessed by reverse engineering. Another option is to prevent client copying, but enable
the server to save the data as a zip file which can be FTPed to the client. (This also provides a mechanism for
charging for the data).
Do you need to access live data? There are two main models for web-server authoring. One approach involves
drawing maps to create a library of maps prior to receiving requests; these can then be downloaded as requested.
The other approach is to wait for a request and then draw the map ‘on the fly’ (i.e. when it is requested). The
latter approach allows you to link to live data, so your maps will always be based upon the most up to date
information. This would be important if your system is reporting weather conditions or traffic situations, etc.
However, the former approach allows maps to be served more quickly, and also allows more consideration to be
given to issues such as cartographic clarity.
Does the server need to be able to access attribute data held in a database? Database interaction generally
requires developers to write software to collect the details of the query from an end-user, send it to the database,
receive a reply, convert the reply into HTML (or whatever) and then display the results. This clearly requires
some programming skills. It is also possible to program the server to collect information from the client to be
added to the database (e.g. users could click on a map to indicate where a particular event occurred and then
enter some text to describe the event. This, of course, presupposes the users are ‘map literate’.)
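A minimal sketch of the collection side in Python, with SQLite standing in for the server's database (the schema and the sample event are invented): the server receives the clicked coordinates and the user's description and adds a row.

```python
# Collecting information from the client: the user clicks a map (giving
# coordinates) and types a description; the server stores the event.
# Schema and sample values are invented.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (x REAL, y REAL, description TEXT)")

def report_event(x, y, description):
    """Store one user-submitted event; called once per form submission."""
    conn.execute("INSERT INTO events VALUES (?, ?, ?)",
                 (x, y, description))

report_event(-6.59, 53.38, "Road flooded near the bridge")
count = conn.execute("SELECT COUNT(*) FROM events").fetchone()[0]
print(count)   # 1
```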
Does the server need to be able to access spatial data held in a database? The major DBMS providers now
support spatial data (e.g. IBM, Informix, Oracle, Sybase). If you wish to access these databases you need to
ensure you have appropriate web-server software. You would also need to decide whether you require direct
access to these objects (which would speed up access) or whether it is sufficient to work through ‘middleware’
(which will simplify software development).
Do you need to access data from different sources? It is generally possible to convert data from different
formats into the format required by the application. However, some servers can access multiple different formats
directly. This is an advantage if you are dealing with live data that is constantly changing.
How many hits do you expect to get? If you expect a large number of visitors to your site, then you obviously
need to ensure that you have sufficient capacity. This means that the hardware should be big enough and fast
enough. However, the software should also be ‘scalable’ – i.e. it should be possible to add more users relatively
painlessly. Scalability is generally easier on software which supports threading – i.e. different aspects of the
mapping process can be separated into threads which can be carried out simultaneously by different processors
(as opposed to the more traditional model where each step has to be carried out in sequence). A second important
consideration is the footprint – i.e. the amount of memory required to run the software and each additional
instance as the server receives more hits. A small footprint allows the server to support more users before it
begins to run out of memory.
How much functionality do you require? The early map servers simply supplied static maps. However, more
and more GIS functionality is now being added (e.g. buffering, point in polygon operations). The choice of
software will be influenced by the amount of functionality you require.
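Point-in-polygon, for example, can be implemented with the standard ray-casting test: count how many polygon edges a horizontal ray from the point crosses; an odd count means the point is inside. A Python sketch (the square and test points are invented):

```python
# Ray-casting point-in-polygon test: cast a horizontal ray to the right
# of the point and count edge crossings; odd means inside.

def point_in_polygon(x, y, polygon):
    """polygon is a list of (x, y) vertices; returns True if inside."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Does this edge straddle the ray's height?
        if (y1 > y) != (y2 > y):
            # x-coordinate where the edge crosses the ray's height.
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

square = [(0, 0), (10, 0), (10, 10), (0, 10)]
print(point_in_polygon(5, 5, square))    # True
print(point_in_polygon(15, 5, square))   # False
```

A production map server would of course use tested library routines (with careful handling of edge and vertex cases) rather than this sketch.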
Browser compatibility. If your server is operating on an intranet then you will probably be able to determine
which browsers need to be supported. However, if you are serving maps to the internet then you will need to
consider which browsers (i.e. Internet Explorer, Netscape, Opera, Firefox etc.) and platforms (i.e. Win9x, Win
NT/2000, Windows XP, Windows Vista, Windows 7, Mac, Linux, Unix, OS/2 etc.) will need to be supported,
bearing in mind that older versions of browsers do not support as wide a range of functions as the newer ones.
Neogeography
The term ‘neogeography’, although contested, is often used to refer to the provision of simple GIS techniques to the
wider public by applications such as Google Maps, Google Earth, Virtual Earth, and Live Search Maps. The term
‘collaborative mapping’, used to refer to web maps and user-generated content, is preferred by some. In some
instances the map itself is created collaboratively as a shared surface (e.g. WikiMapia, which allows users to add
their own placenames). In other instances users can add their own layers, for example to show place locations, which
may in turn provide links to other media / information (e.g. Placeopedia, which links Google Maps to Wikipedia
entries). Adding content, as opposed to simply viewing it, may require some basic programming skills, such as a
familiarity with databases (e.g. PostgreSQL). Google Earth extends many of the principles of Google Maps using
its own markup language, KML
(Keyhole Markup Language), which is partly based on GML. However, one of the big advantages of these
technologies is that they can combine data from different sources, typically a map of locations and locationally-
tagged information, to create a mashup (e.g. Healthmap, at http://www.healthmap.org/, which maps textual
information on disease outbreaks from a variety of sources using Google Maps).
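At its core, the mashup pattern is a join between locationally-tagged records from one source and locations from another. A toy Python sketch (both feeds are invented) producing plottable markers:

```python
# The essence of a mashup: join locationally-tagged records from one
# source against a gazetteer from another, yielding points that could
# be plotted on a base map. Both data sets are invented.

outbreaks = [{"place": "Dublin", "disease": "influenza"},
             {"place": "Cork", "disease": "measles"}]
gazetteer = {"Dublin": (-6.26, 53.35), "Cork": (-8.47, 51.90)}

markers = [{"lon": gazetteer[o["place"]][0],
            "lat": gazetteer[o["place"]][1],
            "label": o["disease"]}
           for o in outbreaks]

print(markers[0])
# {'lon': -6.26, 'lat': 53.35, 'label': 'influenza'}
```

In a real mashup the outbreak feed might be scraped text and the gazetteer a mapping API's geocoder, but the join is the same.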
Neogeography has tended to attract conflicting responses from professional geographers. Some see it as a negative
development because many amateur contributors tend to locate places inaccurately whilst others use it for
mapping self-obsessed trivia; other geographers, whilst recognising these limitations, see neogeography as a positive
development because it has resulted in a spectacular growth in spatial awareness among the general public.
FREE SOFTWARE AND GIS
Most of the software for GIS is currently commercially produced by companies such as ESRI, MapInfo, Autodesk,
Intergraph, Leica Geosystems and Clark Labs. However, the Internet provided an opportunity for 'amateur'
programmers scattered around the world to co-operate on projects to develop alternative software, which is then
placed in the public domain to be downloaded for free by anyone interested in using it.
It should be noted that there are two meanings of ‘free’:
Freeware: Free as in ‘free beer’ (gratis) – i.e. no monetary charge. However, the right to modify the code
may be protected under copyright (e.g. ArcReader).
Free Software (Open Source): Free as in ‘free speech’ (libre) – i.e. the software can be used, copied,
studied, modified and redistributed. It is usually distributed under a ‘copyleft’ licence such as the GNU GPL
(which uses copyright law to ensure that derivative works remain free) or a more permissive licence such as the
BSD licence. Developers and distributors, however, can charge money / make profits from selling the software
(e.g. Red Hat).
It is the second concept of free that is the more important, because it means that the source is always available and
can therefore be improved by others. Given that the source code is freely available (usually by download from the
Internet), free software is usually available without cost, although people may prefer to pay for the convenience of
having it made available on a DVD.
Free GIS software is still at an early stage, but the free software movement has made dramatic strides in other
areas. Operating Systems provide the best example. Whilst Microsoft has dominated the small computer world with
successive generations of Windows, the big computer world was traditionally dominated by Unix - regarded by most
users as a far superior system. However, commercial Unix operating systems are expensive, so in 1983 a computer
scientist called Richard Stallman initiated the GNU project to build a Unix system using only free software. Much of
this was completed by the early 1990s. The main problem was that Unix only ran on big computers, but in 1991 a
Finnish computer science student called Linus Torvalds released a Unix kernel on the internet that ran on small
computers and invited others to help him improve it. Torvalds' kernel was subsequently modified to utilise the GNU
project tools, resulting in what is now known as Linux (or GNU/Linux). Linux can be run on computers of all sizes.
Although it has made only a limited impact on the small computer market - 2.1% share of the market in 2008 -
largely because of Microsoft's OEM policy, it has made a massive impact on the big computer market - it is
estimated that 74% of the world's 500 largest supercomputers run on Linux, despite the fact that it was developed by
'amateurs' and is available for free. The cost of the software is obviously not a consideration for the owner of a
supercomputer - what makes Linux so attractive is its reliability. Because there are so many amateur programmers
fixing any little problem that may crop up, Linux is now much more reliable than its commercial rivals.
Although Linux provides one of the best examples, other free software projects are now proving to be as good if not
superior to their commercial equivalents (e.g. Apache, OpenOffice.org, R, MySQL, PostgreSQL, Python, Perl, PHP, etc.).
Free GIS Software
Free GIS software is still at an early stage of development, and much of it is a bit rough and ready. However, some
of it is already very good quality. There are two broad categories:
Desktop GIS. There are several free desktop GIS applications, but most are fairly limited in terms of what they can
do. For example, most can display multiple themes (or layers) which can be switched off and on. These themes can
be symbolised. There are usually options for simple GIS operations such as buffering etc. However, they tend to be a
bit more limited with regard to more advanced analytical functions. Most can probably be thought of as viewers
rather than complete GIS systems.
The main exception is a program called GRASS (Geographic Resources Analysis Support System). This is a full
featured GIS, although not noted as being particularly user-friendly. It was initially developed by the US Army
Corps of Engineers to manage US Dept. of Defense properties, but it was subsequently put into the public domain.
Traditionally GRASS was only available for Unix, although it could be run on Windows using a Unix emulator.
However, it can be run directly on Windows (since version 6.3, released April 2008).
Another interesting application is Quantum GIS (or QGIS), version 1.0 of which was released in 2009. This is cross platform (e.g. Unix,
Linux, Mac, Windows) and can handle several ESRI data formats, including shapefiles, coverages and even personal
geodatabases (although not, at present, file geodatabases), plus MapInfo and PostGIS. Web services, including Web
Map Service and Web Feature Service, are also supported to allow use of data from external sources. QGIS already
provides a lot of built-in functionality of its own, but its main strength is that it integrates with and can be used as a
graphical interface to GRASS. Users can also extend the functionality by writing plugins in Python.
Web-Based GIS. The free GIS movement has made its biggest impact in web-based applications. Free web-based
GIS software compares favourably with commercial systems. Setting up a web-based GIS, even using commercial
software (e.g. ArcIMS), is a complex process. There are a number of components which need to interact: the web
client, the web server, the map server, and the data stores (file-based data and/or a spatial database).
High quality open source software is available for each component: e.g. Firefox (web client), Apache (web server),
UMN Mapserver, GeoServer (Map Servers), MySQL, PostgreSQL/PostGIS (Spatial DBMS), GDAL/OGR (data
conversion tools), PROJ.4 (data projection tools), R and R-spatial (statistical analysis), GRASS (GIS tools) plus a
variety of tools to help build and manage applications (e.g. Zope). Many of the languages used are
themselves open source (e.g. Python).
OPPORTUNITIES
It remains to be seen how these technologies will eventually be utilised. The following identifies some of the
possible areas of growth.
Internet Web-Mapping
Web-mapping is used for a wide variety of different types of application. The servers are typically run by
government departments (central or local) or agencies wishing to provide further information to the public as a
service, or by companies wishing to provide information to generate business. The type of information served is
extremely varied, such as: census and other statistics displayed as maps; tourist information for visitors (e.g. places
to visit, places to stay); job opportunities which can be spatially searched by job seeker; property for sale, searchable
by location and other characteristics (e.g. house size, price range). It is not difficult to find examples. The web sites
of the major proprietary web-mapping software companies (e.g. ESRI, MapInfo, AutoDesk, Intergraph, etc.) contain
pages with links to working examples using their particular software.
Software developments make it possible to integrate data from different sources in large organisations (e.g.
government departments, large companies) without regard to physical location or types of technologies used. System
developers can now build more complex applications. Organisations are beginning to deploy applications that
support enterprise-wide use of spatial information. This often involves middleware that gives diverse spatial client
applications access to general-purpose enterprise database software. System development is often complicated by the
use of multiple types of spatial information required by different users.
At a simpler level, spatial information may be made available to users with an ordinary web browser. This permits
spatial data to be accessed by a larger number of ‘new’ users. Intranet applications may also save money because it is
no longer necessary to have a separate data licence for each machine that accesses the data.
Geoportals
A number of companies now act as data warehouses providing spatial data to potential customers. In most cases the
companies do not actually produce the data themselves, but provide easy access to a catalogue of spatial data
products produced by others. Some geoportals provide data free, and make their money by selling advertising.
However, most charge for the data and pass on a percentage to the original data producers. (The advantage for data
producers is that they do not need to worry about marketing etc. – they simply provide the data and collect the
royalties). Apart from having servers which can quickly display data and process a large number of enquiries at the
same time, geoportals are obviously dependent on sophisticated web technology to ensure secure transactions
between vendors and customers. They also need to be able to track how much commission is payable to each data producer.
Application Service Providers
Instead of downloading data and processing the data using software applications on PCs, it is predicted that many
computer users will use a different approach referred to as the Application Service Provider (ASP) model, especially
with the growth of cloud computing. This model allows users to not only access data via the Web but also the
applications software. The applications may either be run on a remote server or else be download and run transiently
as Java applets on the user's computer. A user may use a single ASP provider who provides a full set of resources, or
the resources may be chained from multiple providers. ASPs are beginning to appear in the geospatial realm. This
may be the model for the future. Users have access to potentially sophisticated geospatial analysis tools, but avoid
the costs of buying, installing and maintaining the software locally. They will also benefit from having ready access
to up-to-date geospatial data without the worry of storing them, backing them up and maintaining their currency.
Location Based Services
Possibly the biggest growth area will be in LBS (Location Based Services) – i.e. information services which are
dependent upon location. For example, cars in many countries are already equipped with GPS. This enables drivers
to type in a destination address to receive instructions on which route to take. Visitors to unfamiliar cities can use their
mobile phones to receive a list of restaurants close to their current location, possibly with information about price
ranges, menus, quality ratings, etc. Travellers can get a list of the nearest ATMs and directions on how to
get there from their current location. Emergency services can get a more accurate spatial fix of people reporting
emergencies. And so forth. The possibilities are endless, especially given the options provided by Web 2.0. It should
be noted that whilst some of this information might be served in the form of maps, XML permits each device to use
whatever medium suits it best (e.g. text, voice messages, etc.).
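The point about XML can be sketched as follows: the server encodes the answer once, and each device decides how to present it (as text, as a map overlay, or read aloud). The element names and data below are invented for illustration, not a real LBS schema.

```python
import xml.etree.ElementTree as ET

def restaurants_to_xml(restaurants):
    """Encode a list of nearby restaurants as device-neutral XML.

    The tags here are illustrative -- a real service would use an
    agreed schema so that all client devices can parse it.
    """
    root = ET.Element("places")
    for r in restaurants:
        place = ET.SubElement(root, "restaurant",
                              lat=str(r["lat"]), lon=str(r["lon"]))
        ET.SubElement(place, "name").text = r["name"]
        ET.SubElement(place, "price_range").text = r["price"]
    return ET.tostring(root, encoding="unicode")

# Invented example data.
xml_doc = restaurants_to_xml([
    {"name": "The Quayside", "price": "moderate", "lat": 53.38, "lon": -6.59},
])
print(xml_doc)
```

A phone might render only the `name` elements as a text list, while a desktop browser plots the `lat`/`lon` attributes on a map.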
In each instance the service requires an extensive database containing the information provided by that particular
service, information about the user's current location, and some sort of GIS technology to perform the required
spatial search. The GIS technology to do the spatial searches is already in place, but developments are still required
in other areas to enable LBS to fully take off.
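The core spatial search behind such services is a nearest-neighbour query against the user's current position. The following is a minimal sketch in plain Python using great-circle (haversine) distance; the ATM locations are invented, and a production system would push this query into a spatial DBMS such as PostGIS rather than scan a list.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/long points, in km."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + \
        cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))   # mean Earth radius ~6371 km

def nearest(user_lat, user_lon, places, n=3):
    """Return the n places closest to the user's position."""
    return sorted(places,
                  key=lambda p: haversine_km(user_lat, user_lon,
                                             p["lat"], p["lon"]))[:n]

# Invented ATM locations for illustration.
atms = [
    {"name": "Main Street",  "lat": 53.381, "lon": -6.591},
    {"name": "Campus",       "lat": 53.384, "lon": -6.600},
    {"name": "City Centre",  "lat": 53.349, "lon": -6.260},
]
hits = nearest(53.380, -6.590, atms, n=2)
print([h["name"] for h in hits])   # the two closest ATMs, nearest first
```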
At present LBS is feasible on laptops and other relatively bulky equipment. However, not everyone owns a laptop,
nor do laptop owners particularly want to carry their laptops around everywhere they go looking for a 'hotspot'. The
main spurt in LBS will therefore follow the closer integration of smaller devices, such as mobile phones and PDAs,
with the Internet. This is already happening. LBS should be boosted by developments such as ‘always on’
internet connections, and larger screens in smart phones and PDAs.
The diffusion of technologies which allow these devices to be treated as reasonably accurate GPS-like devices adds
to the feasibility of LBS. Cellular network operators have always been able to identify the nearest transmitter to a
cell phone. This locates people to within a few kilometres in rural areas and to a few hundred metres in city centres.
However, improved technology now allows people to be located to within 10-25m. US phone operators have been
required by law to be able to locate mobile phone users accurately since October 2001, whilst EU legislation
required mobile operators to incorporate wireless location technology into mobile phones by January 2003.
At the beginning of the decade LBS was predicted to increase at least 100-fold and to be worth about 93 billion
Euros by 2005. This was part of the reason for the mad scramble by telecommunication companies for third
generation (3G) licences. For example, the UMTS (Universal Mobile Telecommunications System) licences in
Germany were auctioned to six successful bidders for 50 billion Euros in 2000. The UK licences were auctioned for 15 billion
Euros. UMTS would seem destined to become the broadband wireless multi-media system of the future. Vodafone,
O2 and 3 (Hutchinson 3G) got the licences in Ireland.
As in other areas, the success of Internet applications will depend upon the adoption of agreed standards. WAP
(Wireless Application Protocol) was adopted by many vendors. WAP uses its own markup language (WML). The
leading Japanese company in this area (DoCoMo) produced I-mode – a protocol based on compact HTML, but the
rival companies have adopted WAP. There was a poor take-up of WAP in the US, but the FCC (Federal
Communications Commission) issued an order in 2007 to try to improve competition.
The new technologies introduce a number of legal and ethical issues which will need to be addressed. For example,
the ability to locate phone users with a high degree of accuracy could be regarded as an invasion of privacy under
Data Protection legislation.
The take-up in LBS so far has been more subdued than originally predicted. This may be partly due to a wait and see
attitude towards the enabling technology. Also, the collapse of so many dot-com companies in 2000 generated a
more cautious attitude towards investment in internet technologies. Nevertheless, it would appear that LBS has a big
future provided three criteria can be met:
1) The new technologies should be easy to use. They are less likely to catch on with the general public if there is a
steep learning curve.
2) They should provide content – i.e. they should provide genuinely useful information.
3) They should be cheap – i.e. the cost of accessing a LBS should be much the same as making a local call. LBS
companies may need to operate at a loss in the early days (when heavy capital investment will be required to
build the services) in order to reap profits later on. High charges in the early days will discourage the take-up of
these services.
Further information may be obtained from the World Wide Web. Links to useful sites are provided in the Links page
in the web site for this module (i.e. http://www.nuim.ie/dpringle/gis).