Virtual Archaeology and Artificial Intelligence

In many disciplines data are not easily accessible. An archaeologist cannot see past social dynamics, not only because archaeological data are hidden under the earth, but because causes and effects were produced many years ago, and we can now see neither the causes nor their real effects. In some cases the spatial location of causal actions or processes is hidden, whereas in other cases the temporal location is beyond our experience. Post-depositional processes may have altered the spatial location of causal actions or processes, and in other cases we have lost most of the data. In fact, we cannot see what does not exist in the present.
In circumstances where we do not have all the relevant information about a causal process, we can generate simulated data "to see what cannot be seen". Rather than a mere analogy with the real world, we should imagine the virtual model of an archaeological entity as a projection from an archaeological theory, that is, one of the possible valid results of that theory.
That is, we should build a virtual model from partial data input to represent some (not necessarily all) features of the archaeological entity we have not observed. A model is then a knowledge structure produced by some organised knowledge base of a higher level. We need knowledge "to visualise what cannot be seen". The question is how to add knowledge in a systematic way.
The archaeological record is always a form of simulated reality, because we "complete" it using virtual models. We should take into account that a virtual model is not necessarily a surrogate for reality, but any "interpreted" representation of partial inputs. As we will see, the process of "completion" or "reconstruction" is analogous to scientific explanation, and therefore it involves induction, deduction and analogy.
Observation is the process by which the human brain transforms light intensities into mental images that explain the perceived input. Observation is a three-stage process: our brain receives sensory input, recognises some information content in it using prior experience, and finally describes that information using a specific representation language. That is to say, we "see" the real world by creating a virtual model of reality. Sensory information comes in the form of light. Our brain processes differences among light intensities and light sources, and it builds an explanatory model. We do not "see" things; we infer the existence of things from the spatial regularity arising from the pattern of luminance contrasts we perceive as sensory input. Any observation mechanism is then a translation of some perceptual input into an explanatory model of it. This virtual model is what we usually call an image.
There is no such thing as artificial or virtual observation. Nevertheless, we can use mechanical devices to translate perceptual inputs into a geometric model interpreting luminance contrasts. This process of modelling is also called "visualisation", which should not be confounded with "seeing". We "visualise" data when we fit interpreted geometric elements to perceived inputs by joining recognised points with descriptive lines, fitting descriptive surfaces to descriptive lines, or "solidifying" connected surfaces (GERSHON 1994, GOLDSTEIN 1996). We create "geometric models" of archaeological reality in the same way our brain translates perceptual input into mental images.
The goal of archaeological visualisation is then to explain spatial regularity between archaeological inputs. We are able to "visualise" archaeological reality only when we understand how differential locations and topological relationships between archaeological entities determine the input information. That means that the relevant properties of any archaeological entity vary from one location to another (either temporal or spatial), and sometimes this variation has some appearance of continuity.
Juan A. Barceló
Dept. Antropología Social i Prehistoria, Facultat de Lletres
Universitat Autònoma de Barcelona, SPAIN

Abstract: In this paper a general framework is presented for using Virtual Reality techniques in the domain of archaeology. It is argued that "visualising" is not the same as "seeing", but an inferential process to understand reality. Archaeological reality is most of the time broken, incomplete or hidden. Visual models are "interpretations" of available data, and their purpose is to "simulate" what cannot be seen. As scientific tools, it should be readily apparent how one gets from the perceived incomplete input to the explanatory model.

We perceive the spatial features of the archaeological reality by direct interaction with reality, or using special equipment for input acquisition: photographs, topographic equipment, remote sensing devices, etc. Again, we are not "seeing" objects in the real world or in a picture or drawing; we perceive spatial information in the form of luminance and colour patterns. Through eye inspection, picturing or remote sensing we receive a light input and recognise in it location information, which is translated into a simple three-dimensional representation schema:
X, Y: 2D point co-ordinates (longitude, latitude), the independent variables
Z: height or depth, the dependent variable
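The schema above can be sketched in code; the sample co-ordinates and the nearest-point lookup below are illustrative assumptions, not part of any real survey.

```python
# (x, y) are the independent planimetric co-ordinates; z is the
# dependent variable (height or depth) measured at each point.
survey_points = [
    (0.0, 0.0, 12.1),
    (1.0, 0.0, 12.4),
    (0.0, 1.0, 11.9),
    (1.0, 1.0, 12.6),
]

def nearest_height(x, y, points):
    """Read the dependent variable z off the closest surveyed point."""
    return min(points, key=lambda p: (p[0] - x) ** 2 + (p[1] - y) ** 2)[2]

print(nearest_height(0.9, 0.1, survey_points))
```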
Numeric data refer to a surface measured at points whose co-ordinates are known. By tracing lines, curves and surfaces between co-ordinates, we create a geometric model, that is, a virtual explanation of location information. The resulting model is sometimes called a shape. Shape is a field for physical exploration: it has not only aesthetic qualities, nor is it just a pattern for recognition. Shape also determines the spatial, and thus the material and physical, qualities of objects.
The key aspect of a geometric shape model is its "spatial" nature; it should be considered a visual representation reflecting a spatial decomposition of reality into geometric "units". We use this model to examine whether the characteristics in one location have anything to do with the characteristics in a neighbouring location, through the definition of a general model of spatial dependence between units. What we are looking for is whether what happens in one location (temporal or spatial) is the cause of what happens in neighbouring locations, with the idea that if we can specify the degree of spatial regularity in a region of this decomposed space, we can reproduce the whole system.
To represent real entities, we should "imitate" the real world, describing an object by more than just shape properties. Geometric units (points, lines, areas, volumes, etc.) express position or shape, and retinal properties (colour, shadow, texture) enhance the marks and may also carry additional information. We should take "retinal properties" into account in the geometric model, because each surface's appearance should depend on the types of light sources illuminating it, its physical properties, and its position and orientation with respect to the light sources, the viewer and other surfaces. Variation in illumination is a powerful cue to the 3D structure of an object because it contributes to determining which lines or surfaces of the object are visible, either from the centre of projection or along the direction of projection.
To study variation in luminance patterns, we should consider the whole set of characteristics (based on physical properties) assigned to a surface or volume model. We use the term shading for the process of calculating the colour of a pixel or area from surface properties and a model of illumination sources. Texturing is a method of varying the surface properties from point to point in order to give the appearance of surface detail that is not actually present in the geometry of the surface. Texture is usually defined using six different attributes: coarseness, contrast, directionality, line-likeness, regularity and roughness. In both cases, the object properties are expressed as variations in intensity values of colour, light and reflectance over the surface. We should remark that the colour assigned to each pixel in a visible surface's projection is a function of the light reflected and transmitted by the objects, whereas shadow algorithms determine which surfaces can be "seen" from the light source. We call "rendering" the procedures that assign to the surfaces of an object their visual physical properties, such as colour and shadow. Rendering modes can be understood as specialisations of an underlying transport-theory model of light propagation in a participating medium.
Nevertheless, the goal is not to obtain "well illuminated models", but to explain spatial regularity using shape-enhanced models. The goal of the visual model should not be "realism" alone, for the sake of imitation, but realism that contributes to the understanding of the input information. By taking into account global models of illumination for understanding position and relative location, or by including texture information in the geometrical model, we can understand geometrical properties that are otherwise too abstract to be easily grasped. It is the ability to view from all angles and distances, under a variety of lighting conditions and with as many colour controls as possible, that brings out real information. For instance, by changing illumination and shadowing we can obtain shaded relief, slope and aspect maps, which give clues for investigating surface and morphological differences, expressed as discontinuities in topography, in slope and in relief. The shaded relief map is useful for portraying relief differences in hilly and mountainous areas. Its principle is based on a model of what the terrain might look like when illuminated by a lighting source placed at any position above the horizon.
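The shaded-relief principle just described can be sketched as a small computation: the brightness of each terrain cell is the cosine of the angle between the surface normal and the direction of a hypothetical light source above the horizon. The default azimuth and altitude values below are illustrative assumptions.

```python
import math

def hillshade(z_dx, z_dy, azimuth_deg=315.0, altitude_deg=45.0):
    """Lambertian brightness of a terrain cell with elevation gradients
    z_dx, z_dy, lit from a source above the horizon."""
    az, alt = math.radians(azimuth_deg), math.radians(altitude_deg)
    # direction towards the light source
    lx = math.cos(alt) * math.sin(az)
    ly = math.cos(alt) * math.cos(az)
    lz = math.sin(alt)
    # surface normal of the cell: (-dz/dx, -dz/dy, 1), normalised
    nx, ny, nz = -z_dx, -z_dy, 1.0
    norm = math.sqrt(nx * nx + ny * ny + nz * nz)
    return max(0.0, (nx * lx + ny * ly + nz * lz) / norm)

# flat terrain is lit uniformly by the sine of the light's altitude
print(round(hillshade(0.0, 0.0), 3))
```

Recomputing the same grid with different azimuths is what makes discontinuities in slope and relief stand out.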
This case is just a 3D+1D model, where a spatial variable (texture, colour, etc.) is draped onto a 3D model of shape. The more dependent variables the system has, the more complete the resulting model is. We are not limited to four variables (x, y, z, w); we can in fact relate two or more three-dimensional models (x1, y1, z1; x2, y2, z2; ...). For instance, we can analyse the dynamics of the interaction between content (a three-dimensional shape model) and container (another three-dimensional shape model).
Furthermore, important semantic information necessary to interpret an image is represented not in single pixels but in meaningful image objects and their mutual relations. The basic strategy is to build up a hierarchical network of image objects, which allows the representation of the image's information content at different resolutions (scales) simultaneously. By operating on the relations between networked objects, it is possible to classify local context information. Beyond the pure spectral information, this "context information" (which is often essential) can be used together with the form and texture features of image objects to improve understanding.
Building a virtual model is a four-step procedure: data acquisition, pre-processing, parameter estimation, and modelling. Different surface parameters should be estimated, taking into account the geometric relationships of real 3D points, how they fit the modelled surfaces, and the specific shapes of those surfaces as well. The problem is that most of the time the data are not easily accessible, because they cannot be seen.
However, even when sensory inputs are partial or limited, the brain builds an image, because it uses prior knowledge to reconstruct the partial reality. If we cannot see an entity because it is broken or hidden, then the brain fills the gaps with information that does not proceed from the data. What the brain does using prior knowledge, a computer can do also. In circumstances where we do not have all the relevant information about a causal process, we can generate simulated data "to see what cannot be seen". I am using here the term "simulation" for the process of finding the parameters necessary to infer values at other locations in a 3D surface from the relationships embedded in the data and in other information describing the data and their acquisition. When we do not have enough points, we should follow a deductive or top-down approach: we create a hypothetical model, we fit it to the incomplete input data, and then we use the model to simulate the non-preserved data.
This is a classic syllogism: all objects of this type have shape S; the observed fragment belongs to an object of this type; therefore the missing parts follow shape S.
Here we are following the rule: "the most similar is taken for the complete simulation". The procedure is as follows: we transform the perceived data into a sequence of points, and we try to interpret the type of shape, assuming some dependent preference function. Once the type is decided, the closest fit is determined using different numerical techniques.
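As a sketch of this "closest fit" step, assume (hypothetically) that the shape type decided on is a straight profile line; a least-squares fit then finds its parameters numerically. The sample points are invented for illustration.

```python
points = [(0.0, 1.0), (1.0, 3.1), (2.0, 4.9), (3.0, 7.0)]

def least_squares_line(points):
    """Fit y = a*x + b to the points, minimising squared vertical error."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

a, b = least_squares_line(points)
print(round(a, 2), round(b, 2))  # slope and intercept of the fitted line
```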
The alternative way to completion is exactly the opposite. Instead of selecting the most "similar" model that fits the available data, we can deform a model we have selected because it is a valid deduction from prior knowledge, until it fits the known data points. Since the preserved data are not arbitrary, a generic model having a known shape is a logical starting point for any fitting process. Pertinent features within the data are incorporated into the basic structure of the model. The model is then deformed to fit the unique characteristics of each data set.
We need to build the model first, and then use it for simulating the real object. That means we should create a geometric model of the interpreted reality, and then use information deduced from the model where the available data fit the model. In most cases, we create "theoretical" or "simulated" geometric models. Here "theory" means general knowledge about the most probable "shape" of the object to be simulated, or prior knowledge of the reality to be simulated. The question is how to add knowledge in a systematic way.
In general terms, we have two approaches, depending on the nature of the theory and prior knowledge. If all we know for simulating the missing data are analogies and some other "similar" cases, then we should build a qualitative model. But if we can calculate the missing information from nearest-neighbour points, then completion is a task of surface interpolation: reality is simulated as an interpolated parametric surface fitting all known points.
This is the case with ancient buildings. In most cases, the preserved remains do not shed light on the structure of vertical walls, which therefore remain unknown. Archaeological or art-historical background information is then needed. In the Dresden Frauenkirche project (COLLINS 1993, COLLINS et al. 1993), detailed architectural drawings and old photographs displaying the church in its original aspect have been preserved. When the needed information was not available in the preserved input data, photographs of contemporary churches had to be used. A similar approach was used for the 3D reconstruction of Maltese burials. CHALMERS and STODDART (see CHALMERS et al. 1995, CHALMERS and STODDART 1996, CHALMERS et al. 1997) had a complete topographic and photogrammetric survey in which accurate watercolours of the monuments by nineteenth-century artists were stretched to fit the real data.
In general, the reconstruction of the most poorly preserved ancient buildings is largely based on these types of sources:
a) Pictorial evidence from plans and photographs of the building's ruins.
b) Descriptive accounts by modern authors on the ruins, in both their existing condition and their imagined original state.
c) Evidence from contemporary buildings in neighbouring places or culturally related areas, which gives clues as to likely construction methods and spatial forms.
d) When old drawings and photographs are not available, external data can be estimated from ethnographic records.
Many other examples of integrating historical and anthropological information to simulate archaeological data and build "complete" models of ancient buildings appear in BARCELÓ et al. (2000).
The problem in all those cases is that the theoretical knowledge is not added to the model in a systematic way. The creator of the model selects additional information subjectively, using what he or she wants, and not what is really needed. For years, artists have collaborated with archaeologists in order to "reconstruct" all those wonderful things not preserved in the archaeological record, and they have provided archaeologists with artistic depictions of the past. However, these "illustrations" of the past are not an explicative vision of anything. When the artist represents what cannot be seen, he or she uses imagination, or partial information provided by an archaeologist, to create the images. The resulting item is not an explanation of the past, but a personal and subjective way of "seeing" it.
We can use expert systems to integrate external knowledge into the partial input, and then simulate the missing parts of the input (DURKIN 1994, BARCELÓ 1996a, FELTOVICH et al.).
Every expert system consists of two principal parts: the knowledge base, and the reasoning or inference engine. In our case, the knowledge base contains both factual and heuristic knowledge about how to complete the model. Factual knowledge is knowledge extracted from historical and anthropological sources that is widely shared and commonly agreed upon by those knowledgeable in the particular field. Heuristic knowledge is the more experiential, more judgmental knowledge of performance. In contrast to factual knowledge, heuristic knowledge underlies the "art of good guessing, of good practice, good judgement, and plausible reasoning", that is to say, the way we use factual knowledge in order to simulate reality.
Knowledge representation formalises and organises the knowledge. One widely used representation is the production rule, or simply rule. A rule consists of an IF part and a THEN part (also called a condition and an action). The IF part lists a set of conditions in some logical combination. The piece of knowledge represented by the production rule is relevant to the line of reasoning being developed if the IF part of the rule is satisfied; consequently, the THEN part can be concluded, or its problem-solving action taken. Expert systems whose knowledge is represented in rule form are called rule-based systems. In our case, the IF part contains the available data, that is, the partial or incomplete input. The THEN part is the factual knowledge extracted from old photographs, historical sources, analogies or ethnographic descriptions. The rule is a piece of heuristic knowledge linking two bits of factual knowledge.
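Such a rule can be sketched as a small data structure: the IF part matches the partial input, and the THEN part supplies factual knowledge drawn from an external source. All slot names and values below are hypothetical.

```python
# A production rule: IF conditions on the partial input, THEN factual
# knowledge (here, hypothetically, a wall height read from an old photo).
rule = {
    "if": {"wall_base": "preserved", "wall_top": "missing"},
    "then": {"wall_height_m": 4.2, "source": "old photograph"},
}

def fires(rule, observed):
    """The rule is relevant when every IF condition holds in the data."""
    return all(observed.get(k) == v for k, v in rule["if"].items())

observed = {"wall_base": "preserved", "wall_top": "missing"}
if fires(rule, observed):
    observed.update(rule["then"])   # conclude the THEN part
print(observed["wall_height_m"])
```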
Of course, the most obvious problem is how to represent the factual knowledge to be used in such a way. We can use a representation framework, called a frame, schema, or list structure, which is an assemblage of associated knowledge about an entity to be represented. Typically, a unit consists of a list of properties of the entity and the associated values for those properties. Since every task domain consists of many entities that stand in various relations, the properties can also be used to specify relations, and the values of these properties are the names of other units that are linked according to those relations. One frame can also represent knowledge that is a "special case" of another unit, or some units can be "parts of" another unit. In fact, frames and properties are nothing more than verbal labels, but we can link any property to an algorithm or command able to execute some computer action (drawing a line, interpolating a surface, adding a texture, etc.).
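A minimal sketch of such a frame, assuming hypothetical names throughout: relational slots ("special case of", "part of") name other frames, and one slot is bound to an executable drawing action.

```python
def draw_cylinder(radius, height):
    """Stand-in for a real CAD/drawing command."""
    return f"cylinder r={radius} h={height}"

frames = {
    "doric_column": {
        "is_a": "column",            # "special case of" relation
        "part_of": "temple_facade",  # "part of" relation
        "shaft_radius": 0.5,
        "shaft_height": 6.0,
        "render": draw_cylinder,     # property linked to a computer action
    },
}

f = frames["doric_column"]
print(f["render"](f["shaft_radius"], f["shaft_height"]))
```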
Heuristic knowledge organises and controls factual knowledge. One common but powerful paradigm involves chaining IF-THEN rules to form a line of reasoning. For example:

IF the geometric model of (x) has geometric properties A, B, C
THEN (x) is an example of MODEL ABC

IF (x) is an example of MODEL ABC
AND (x) does not have property D
THEN JOIN property D to the geometric model of (x)

where JOIN is an operator implemented as a command able to add some geometric unit to those already present in a preliminary model of the partial input.
Each rule should be understood as a knowledge unit about how to use a specific piece of information. If the chaining starts from a set of conditions and moves toward some conclusion, the method is called forward chaining. If the conclusion is known (for example, a goal to be achieved) but the path to that conclusion is not, then reasoning backwards is called for, and the method is backward chaining. These problem-solving methods are built into program modules called inference engines or inference procedures, which manipulate and use the knowledge in the knowledge base to form a line of reasoning.
This representation of heuristic knowledge eliminates flow charting by repeatedly:
- determining the set of applicable rules
- selecting a rule to be applied
- executing the actions (the THEN part) of the selected rule
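The loop above can be sketched as a minimal forward-chaining engine; the two rules encode the chained MODEL ABC example and are illustrative only.

```python
# Facts are simple strings; each rule is (set of conditions, conclusion).
facts = {"has_properties_ABC"}

rules = [
    ({"has_properties_ABC"}, "is_model_ABC"),
    ({"is_model_ABC"}, "join_property_D"),
]

changed = True
while changed:                       # repeat until no rule adds anything
    changed = False
    for conditions, conclusion in rules:
        # rule is applicable when all its conditions are known facts
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)    # execute the THEN part
            changed = True

print(sorted(facts))
```

Backward chaining would run the same rules in reverse, starting from the goal `join_property_D` and asking which conditions would establish it.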
Knowledge is almost always incomplete and uncertain. To deal with uncertain knowledge, a rule may have a confidence factor or weight associated with it. The set of methods for using uncertain knowledge in combination with uncertain data in the reasoning process is called reasoning with uncertainty. An important subclass of methods for reasoning with uncertainty is called "fuzzy logic", and the systems that use them are known as "fuzzy systems". For instance:
IF the geometric model of (x) has geometric properties A, B, C but not properties D, E
THEN (x) is an example of MODEL ABC (with probability 0.7)

IF the geometric model of (x) has geometric properties A, B, C, D, E
THEN (x) is an example of MODEL ABC (with probability 1.0)

IF (x) is an example of MODEL ABC
THEN VISUALISE the incomplete parts of (x) using ABC properties
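The confidence-factor idea in these rules can be sketched as follows; the property sets and the 0.7 threshold are taken from the example above, everything else is an illustrative assumption.

```python
def classify(properties):
    """Return (model, confidence) for a partial geometric description."""
    required = {"A", "B", "C", "D", "E"}
    have = set(properties)
    if have >= required:
        return "MODEL_ABC", 1.0      # all properties observed
    if have >= {"A", "B", "C"}:
        return "MODEL_ABC", 0.7      # D and E missing: weaker match
    return None, 0.0

model, cf = classify(["A", "B", "C"])
print(model, cf)
```

A downstream VISUALISE step could then propagate this confidence, so the reconstructed parts carry the weight of the rule that produced them.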
Because an expert system uses uncertain or heuristic knowledge (as we humans do), its credibility is often in question, as is the case with humans. When an answer to a problem is questionable, we tend to want to know the rationale. If the rationale seems plausible, we tend to believe the answer. So it is with expert systems.
One of the main examples of using expert systems for the simulation of missing archaeological data is the estimation of the general shape of a building by OZAWA (1992, 1996). The geometric model was based on a contour map of the keyhole tomb mounds of ancient Japan. When the archaeological information is not enough to produce the contour map, an expert system creates an estimated contour map of the original tomb mound in co-operation with archaeologists. The expert system holds the statistical knowledge for classifying any tomb into its likeliest type and the geometrical knowledge for drawing contour lines of the tomb mound. For each contour map the user introduces shape parameters, and the system classifies the mound as one of seven types, according to specific parameters (diameter, length, width, height, etc.). The estimated shape layout is then used as input for 3D solid modelling and rendering (OZAWA 1992).
FLORENZANO et al. (1999) take this artificial-intelligence approach a step further. They use an object-oriented knowledge base containing a theoretical model of existing architecture. They chose classical architecture as the first field of experiment for the process, since this architecture can be modelled easily enough: the proportion ratios linking the diverse parts of architectural entities to the module allow a simple description of each entity's morphology. The main hypothesis of this research concerns comparing the theoretical model of the building with the incomplete input data (preserved remains) acquired by photogrammetry. Abstract models are organised with the aim of isolating elementary entities that share common morphological characteristics and function, on which rules of composition can be used to re-order the building. The concept of an architectural entity gathers in a single class the architectural data describing the entity, the interface with the survey mechanisms, and the representation methods. Each architectural entity, each element of the predefined architectural corpus, is therefore described through geometrical primitives corresponding to its morphological characteristics: a redundant number of measurable geometrical primitives are added to each entity's definition, as previously mentioned. The process comprises:
1. Splitting of the object into a cloud of points measured on its surface.
2. Linking of the points to architectural entities.
3. Data processing.
4. Definition of the geometrical models reconstructed on these data.
5. Definition of the architectural model, which is informed by the geometrical model.
6. Consistency-making on the whole set of entities.
LEWIS and SÉGUIN (1998) offer another approach to building reconstruction. They created the Building Model Generator (BMG), which accepts 2D floor plans in the common DXF geometry description format. The program first converts these plans into a suitable internal data structure that permits not only efficient geometric manipulation and analysis, but also the integration of non-geometrical data, such as the definition and identity of all rooms, doors, windows, columns, etc. It then corrects small local geometrical inconsistencies and makes the necessary adjustments to obtain a consistent layout topology. This clean floor plan is then analysed to extract semantic information (room identities, connecting portals, the function of columns or arches, etc.). With that information the bare walls are extruded to a user-specified height, and door, window and ceiling geometries are inserted where appropriate. This generates a 3D representation of the building shell, which can then be visualised, and local adjustments to parts of the building or to material properties can be made with an interactive editor.
Archaeological structures can also be reconstructed from aerial images. The initial body of a structure (the building box) derives from known structure footprints in a multi-purpose digital map; this is then extended in the vertical direction up to the height of the archaeological entity. Detected elements are phototextured from aerial images, and the building box is phototextured after being improved with automated measurements of archaeological remains from field-level photography. As an input dataset for the archaeological reconstruction, not only aerial images with known orientation parameters can be used, but also digital elevation models (DEM) and GIS data (structure footprints at ground level and the approximate elevation of the structure contour from the multi-purpose digital map). The idea is first to obtain an estimate of the archaeological entity's boundaries in the image plane using the GIS data, then to add detected lines coming from an image segmentation using the algorithm proposed by Burns, Hanson and Riseman. The detected entity boundaries generate a 2D archaeological skeleton (monocular), creating a correct topological description of the polygons of the archaeological elements for the automated production of the 3D archaeological skeleton. Then the reconstruction of 3D archaeological detail takes place, using external knowledge (historical sources or ethnographical analogy). Finally, the phototexture needs to be obtained from aerial images, with consideration of the correspondence between geometric and texture detail; this serves to relax the requirements for the 3D archaeological skeleton. The proposed approach has been used at the University of Graz (Austria) for roof reconstruction, and promises automation and speed in the detection and reconstruction of roofs using the GIS data. It supports a feedback between the 3D models of buildings and the GIS, preserves the correspondence between geometric and texture detail, and creates parametric models for future use.
An interesting future development is the possibility of using visualisations in a case-based reasoning framework (FOLEY and RIBARSKY 1994). The fundamental strategy is to organise a large collection of existing visualisations as cases and to design new visualisations by adapting and combining past cases. New problems in the case-based approach are solved by adapting the solutions to similar problems encountered in the past. The important issues in building a case-based visualisation advisor are developing a large library of visualisations, developing an indexing scheme to access relevant cases, and determining a closeness metric to find the best matches in the case library.
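The closeness-metric idea can be sketched as follows: each stored visualisation case is indexed by a few feature values, and a new problem retrieves the nearest case. The case names and features below are illustrative assumptions, not from any real case library.

```python
import math

# Hypothetical case library: each visualisation indexed by simple features.
cases = {
    "terrain_hillshade": {"dimensions": 3, "continuous": 1, "scattered": 1},
    "vessel_profile":    {"dimensions": 2, "continuous": 1, "scattered": 0},
}

def closeness(a, b):
    """Euclidean distance in feature space; smaller means closer."""
    return math.sqrt(sum((a[k] - b[k]) ** 2 for k in a))

def best_match(problem):
    """Retrieve the stored case nearest to the new problem."""
    return min(cases, key=lambda name: closeness(problem, cases[name]))

print(best_match({"dimensions": 3, "continuous": 1, "scattered": 1}))
```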
Sometimes the archaeological record to be visualised cannot be seen because most of it is hidden, or because we have only partial information about its physical location and properties. In this case, the fragmented data are represented as scattered x, y, z input data sampled at irregular locations. The fragmented spatial information available must be extrapolated to complete, closed surfaces. The reconstruction of a given object or building structure as an architectural frame is thus a generalisation of fragmented observable data by means of a mathematical object description.
The procedure may be illustrated by comparing the mathematical ovoid and the eggshell. The eggshell is a solid formed by a fine closed surface. Continuity and dynamics are bound to the shape of the eggshell in such a way that it is possible to locate the fragments of a broken eggshell, as well as to define the whole, from only very few spatial measurements. Evidently, to model the physics of an eggshell it is sufficient to pick from the fragments of a broken eggshell some spatial data with which to simulate the entire eggshell. The spatial continuity and dynamics of the ovoid are included in the mathematical description, to simulate the missing information. The algorithm for the mathematical ovoid serves as a generalised constructive solid geometry, and just some additional information will determine the specification and modification of the individual eggshell, its capacity and the location of its centre of gravity. This kind of fact-based solid simulation by mathematical guidelines includes the physical measurement of a shell as a recursive calculation (STECKNER 1993, 1996). In other words, we should create a geometric model (the mathematical ovoid) of the interpreted reality, and then use information deduced from the model to fit the partially observed reality.
The idea is very similar to the previous one, but instead of a qualitative model we have geometric models. Several measurements, such as volume, width and maximal perimeter, are computed from the observable data. Comparing the actual measurements or interpolated surface with the parameters and surfaces defining the theoretical model makes simulation possible. In this case, prior knowledge can be represented in terms of simple geometrical models, and we can still follow the general rule: "the most similar is taken for the complete simulation".
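This "most similar" rule can be sketched directly: measurements computed from fragments are compared against the parameter sets of theoretical shape models, and the closest model is taken for the simulation. The model names and parameter values below are illustrative assumptions.

```python
# Hypothetical theoretical models, each defined by a few parameters.
models = {
    "ovoid_small": {"max_perimeter": 12.0, "height": 5.0},
    "ovoid_large": {"max_perimeter": 18.0, "height": 8.0},
}

def most_similar(measured):
    """Pick the theoretical model closest to the fragment measurements."""
    def distance(params):
        return sum((params[k] - measured[k]) ** 2 for k in measured)
    return min(models, key=lambda name: distance(models[name]))

fragment_measurements = {"max_perimeter": 17.1, "height": 7.6}
print(most_similar(fragment_measurements))
```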
For example, consider the case where we do not have all the relevant 3D information for a shape model, but only a series of 2D sections, irregularly sampled over a 2D area. This is a very common situation in geology, in archaeology, and in all disciplines using computer tomography scanners.
The purpose is to generalise the 2D sampled data into a homogeneous 3D model. A grid, which can be envisioned as two orthogonal sets of parallel, equally spaced lines representing the co-ordinate system, represents an interpolated parametric surface. The points where grid lines intersect are called grid nodes. Values of the surface must be known or estimated at each of these nodes using "gridding" techniques. The first step is to extend the 2D sections normally, in such a way that the different samples meet at common planes. The method begins with a rough surface interpolating only boundary points and, in successive steps, refines those points (and the resulting surface) by adding the maximum-error point one at a time until the desired approximation accuracy is reached (WATSON 1992, HOULDING 1994, PARK and KIM 1995, ALGORRI and SCHMITT 1996, EGGLI et al. 1996, MA and HE 1998, PIEGL and TILLER 1999).
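One common gridding technique, shown here as a sketch, is inverse-distance weighting: the surface value at a grid node is estimated from the scattered samples, with nearer samples weighted more heavily. This is one possible estimator among many (the cited works use more sophisticated refinement schemes), and the sample values are invented.

```python
samples = [(0.0, 0.0, 5.0), (2.0, 0.0, 7.0), (0.0, 2.0, 9.0)]

def idw(x, y, samples, power=2.0):
    """Inverse-distance-weighted estimate of z at grid node (x, y)."""
    num = den = 0.0
    for sx, sy, sz in samples:
        d2 = (sx - x) ** 2 + (sy - y) ** 2
        if d2 == 0.0:
            return sz                # node coincides with a sample
        w = 1.0 / d2 ** (power / 2.0)
        num += w * sz
        den += w
    return num / den

print(idw(0.0, 0.0, samples))        # exact at a sample location
print(idw(1.0, 1.0, samples))        # blended estimate between samples
```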
In the previous case, fragmented data were represented as scattered x, y, z input data sampled at irregular locations. In other cases we do not have an irregularly sampled surface, but an interrupted surface. We should then add new geometrical information, instead of merely calculating missing values from nearest-neighbour points. This is the situation in pottery analysis, when we try to reconstruct the shape of a vessel from the preserved sherds. STECKNER (2000) uses simple interpolation to solve this problem. Here a surface is interpolated on points sampled along the contour of the sherd. Several measurements (volume, width, maximal perimeter, etc.) are computed from the sherd data (contour). Reconstruction of pots from sherds then proceeds by comparing the actual contour or interpolated surface with the contour lines and surfaces already computed for complete vessels. The most similar is taken for the complete reconstruction and classification (see also STECKNER and STECKNER 1987). A similar approach has been developed in the qualitative case by BARCELÓ (1996b), using a fuzzy-logic approach to compute the similarity between the sherd information and the complete vases already known. A Generalized Hough transformation, instead of surface interpolation, has been used by DURHAM, LEWIS and SHENNAN (1993). ROWNER (1993) uses a similar approach for lithic analysis (projectile points). Alternatively, contour reconstruction can be computed from interpoint distances; BERGER et al. (1999) present an algorithm for this task, even when the precise location of each point is uncertain.
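The rule "the most similar is taken for the complete reconstruction" can be sketched as a nearest-neighbour search in a small feature space. The feature vectors, vessel names and values below are invented for illustration; they are not Steckner's actual reference data.

```python
import math

# Hypothetical reference library: (volume, max width, max perimeter)
# measured on complete vessels of known type.
REFERENCE_VESSELS = {
    "amphora":   (12.0, 30.0, 95.0),
    "small jar": (2.5, 12.0, 38.0),
    "plate":     (0.8, 24.0, 75.0),
}

def most_similar_vessel(sherd_features, library):
    """Return the complete vessel whose feature vector lies closest
    (Euclidean distance) to the features computed from the sherd."""
    return min(library, key=lambda name: math.dist(sherd_features, library[name]))

# Features estimated from a preserved sherd's interpolated contour.
sherd = (2.2, 11.0, 40.0)
best = most_similar_vessel(sherd, REFERENCE_VESSELS)  # -> "small jar"
```

A fuzzy-logic variant, as in BARCELÓ (1996b), would replace the crisp Euclidean distance with graded membership values, but the matching skeleton is the same.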
A neural network (see BARCELÓ 1993, 1996a; GU and YAN 1995) can also be used to reconstruct a surface. A neural network (NN) is "a system composed of many simple processing elements operating in parallel whose function is determined by network structure, connection strengths, and the processing performed at computing elements or nodes" (definition by the DARPA Neural Network Study 1988, AFCEA International Press: 60). That is to say, an NN is a network of many simple processors ("units"), each possibly having a small amount of local memory. The units are connected by communication channels ("connections") which usually carry numeric (as opposed to symbolic) data, encoded by any of various means. The units operate only on their local data and on the inputs they receive via the connections, although the restriction to local operations is often relaxed during training. Most NNs have some sort of "training" rule whereby the weights of connections are adjusted on the basis of data. In other words, NNs "learn" from examples and exhibit some capability for generalization beyond the training data.
During learning, the outputs of a supervised neural net come to approximate the target values given the inputs in the training set. This ability may be useful in itself, but more often the purpose of using a neural net is to generalize, i.e., to have the outputs of the net approximate target values given inputs that are not in the training set. Generalization is not always possible; there are two conditions that are typically necessary (although not sufficient) for good generalization.
The first necessary condition is that the function you are trying to learn (the one relating inputs to correct outputs) be, in some sense, smooth. In other words, a small change in the inputs should, most of the time, produce a small change in the outputs. For continuous inputs and targets, smoothness of the function implies continuity and restrictions on the first derivative over most of the input space. Some neural nets can learn discontinuities as long as the function consists of a finite number of continuous pieces. Very non-smooth functions, such as those produced by pseudo-random number generators and encryption algorithms, cannot be generalized by neural nets. Often a nonlinear transformation of the input space can increase the smoothness of the function and improve generalization.
In practice, NNs are especially useful for simulating missing data in an incomplete geometric model of archaeological entities. These algorithms offer a way of handling classification and function approximation/mapping problems which are tolerant of some imprecision, for which plenty of training data are available, but to which hard and fast rules (such as those that might be used in an expert system) cannot easily be applied. The neural network is trained using "complete" geometric models (real objects). Then, given a partially damaged input (an incomplete surface), the network is able to generalize the model and generates those points that were not available. That is to say, the computer program "remembers" when it retrieves previously stored information in response to associated data. If you have an adequate sample for your training set, every case in the population will be close to a sufficient number of training cases; hence, under these conditions and with proper training, a neural net will be able to generalize reliably to the population.
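A toy sketch of this train-on-complete, complete-the-damaged idea is given below, using a single linear unit trained by stochastic gradient descent rather than a full multi-layer network; the vessel-profile triples and the damaged-vessel measurements are invented, and this is not a reconstruction of Barceló's actual networks.

```python
import random

# Training set: complete vessel profiles as (base radius, belly radius,
# rim radius) triples. The unit learns to emit the rim radius given
# the two measurements that survive on damaged vessels.
profiles = [(4.0, 10.0, 6.0), (3.0, 8.0, 5.0), (5.0, 12.0, 7.0),
            (2.0, 6.0, 4.0), (6.0, 14.0, 8.0)]

def train(data, lr=0.005, epochs=2000, seed=0):
    """One linear unit (weights w1, w2, bias b) trained by
    stochastic gradient descent on squared error."""
    rng = random.Random(seed)
    w1, w2, b = rng.random(), rng.random(), rng.random()
    for _ in range(epochs):
        for base, belly, rim in data:
            err = (w1 * base + w2 * belly + b) - rim
            w1 -= lr * err * base
            w2 -= lr * err * belly
            b  -= lr * err
    return w1, w2, b

w1, w2, b = train(profiles)
# Damaged vessel: base and belly radii survive, the rim is missing
# and must be simulated by the trained unit.
predicted_rim = w1 * 3.5 + w2 * 9.0 + b
```

The invented profiles happen to follow an exact linear rule (rim = base + 2), so the unit converges to it; a real archaeological application would need a multi-layer network for the nonlinear shape functions discussed above.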
If you have more information about the function, you can often take advantage of it by placing constraints on the network. Among the constraints there are geometric constraints (related to shape) and feature-extrinsic constraints (ALGORRI and SCHMITT 1996, LEWIS and SÉGUIN 1998, WERGHI et al. 1999). This is an alternative approach to missing-data simulation. Since preserved data are not arbitrary, a generic model having a known shape is a logical starting point for a curve- or surface-fitting process. Pertinent features within the data are incorporated into the basic structure of the model, which is then deformed to fit the unique characteristics of each data set (DOBSON et al. 1995).
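A deliberately simple instance of this generic-model-plus-deformation strategy is sketched below: the only deformation allowed is a global scale factor, estimated by least squares from the preserved measurements, and the template profile and observed values are invented for illustration.

```python
# Generic template model: radii of an idealized vessel profile
# sampled at fixed heights (hypothetical unit shape).
TEMPLATE = [1.0, 1.6, 2.0, 1.8, 1.2, 0.9]

def fit_scale(template, observed):
    """Least-squares scale factor s minimizing sum((s*t_i - r_i)^2)
    over the preserved measurements; observed maps height index -> radius."""
    num = sum(template[i] * r for i, r in observed.items())
    den = sum(template[i] ** 2 for i in observed)
    return num / den

def complete_profile(template, observed):
    """Deform the generic model to the data; keep measured values
    where they exist, simulate the rest from the scaled template."""
    s = fit_scale(template, observed)
    return [observed.get(i, s * t) for i, t in enumerate(template)]

# Only three heights survive on the fragment; the remaining radii
# are simulated by the deformed template.
observed = {0: 2.0, 1: 3.2, 2: 4.0}
profile = complete_profile(TEMPLATE, observed)
```

Richer deformations (local translations, free-form warps, constraint satisfaction) follow the same pattern: parameters of the generic model are adjusted until it agrees with the preserved data.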
TSINGOS et al. (1995) use a modification of this approach: implicit iso-surfaces generated by a skeleton for shape reconstruction. An initial skeleton is positioned at the center of mass of the data points and subdivided until the process reaches a correct approximation level. Local control of the reconstructed shape is possible through a local field function, which enables the definition of local energy terms associated with each skeleton. The method works as a semi-automatic process: the user can visualize the data, initially position some skeletons thanks to an interactive implicit-surfaces editor, and further optimize the process by specifying several slightly overlapping "reconstruction windows", where surface reconstruction follows a local criterion. THALMANN et al. (1995) use a similar approach for reconstructing the Xian terra-cotta soldiers. A geometric model of these Chinese sculptures is produced through a method similar to the modeling of clay, which essentially consists of adding or eliminating parts of material and turning the object around once the main shape has been set up. They use a sphere as a starting point for the heads of the soldiers, add or remove polygons according to the details needed, and apply local deformations to alter the shape. This process helps the user towards a better understanding of the final proportions of a human head. Scaling deformations were first applied to the sphere to give it an egg-shaped aspect; then various regions selected with triangles were moved by translation. At this point vertices were selected one by one and lifted to the desired locations. The modeling of the different regions then started, sculpting and pushing vertices and regions back and forth to make the nose, jaws, eyes and various landmarks. Using a similar approach, ATTARDI et al. (2000) use a distortion (warping) of the 3D model of a reference scanned head until its hard tissues match those of the scanned data. The subsequent stage is the construction of a hybrid model composed of the hard tissues of the mummy plus the soft tissues of the reference head. Another example of warping for reconstruction is BROGNI et al. (2000).
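Landmark-driven warping of a reference model onto scanned data can be illustrated, in a drastically simplified 2D form, by a per-axis scale-and-translate mapping estimated from two landmark correspondences. The landmarks and coordinates below are invented; real cranial warping uses many landmarks and far richer deformation families.

```python
def landmark_warp(ref_landmarks, target_landmarks):
    """Per-axis scale-and-translate warp estimated from two landmark
    correspondences (a restricted, rotation-free special case)."""
    (rx0, ry0), (rx1, ry1) = ref_landmarks
    (tx0, ty0), (tx1, ty1) = target_landmarks
    sx = (tx1 - tx0) / (rx1 - rx0)
    sy = (ty1 - ty0) / (ry1 - ry0)
    def warp(p):
        x, y = p
        return (tx0 + sx * (x - rx0), ty0 + sy * (y - ry0))
    return warp

# Landmarks on the reference head model vs. the corresponding
# landmarks measured on the scanned hard tissues (invented values).
warp = landmark_warp([(0.0, 0.0), (10.0, 20.0)],
                     [(1.0, 2.0), (21.0, 42.0)])
# Every vertex of the reference model is mapped into scan space.
warped = [warp(p) for p in [(5.0, 10.0), (10.0, 0.0)]]
```

Once the reference model is warped so that its landmarks coincide with those of the scan, the remaining (soft-tissue) geometry of the reference travels with it, producing the hybrid model described above.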
In all these approaches we have been using general models and particular constraints as mechanisms for modifying a preliminary hypothetical geometrical model of a "complete" reality into another that simulates the missing parts by satisfying constraints. Finding the geometric configurations that satisfy the constraints is the crucial issue. We have examined two strategies:
a) the user specifies values for the constraints and looks for geometric configurations satisfying those values;
b) the user investigates first whether the geometric elements could be placed given the constraints, independently of their values.
However, we should take into account that the world is not made of images; it is a series of perceptual information waiting for an observer who imposes order by recognising an object and by describing it. Visual models are only a spatial pattern of luminance contrasts that explains how light is reflected, and we use them as a "virtual" model of something that does not exist, that cannot be seen.
A description of what cannot be seen is not an explanation of reality's missing parts; it is only a part of the explanatory process. I am suggesting using VR techniques not only for description, but for building the whole explanatory process, from data acquisition to understanding. An explanation can be presented as a visual model, that is, as a virtual dynamic environment where the user asks questions in the same way a scientist uses a theory to understand the empirical world. A virtual world should be, then, a model: a set of concepts, laws, tested hypotheses and hypotheses waiting for testing. If in standard theories concepts are expressed linguistically or mathematically, in virtual environments theories are expressed computationally, by using images and rendering effects. Nothing should be wrong or "imaginary" in a virtual reconstruction; it should follow what we know, be dynamical, and be interactively modifiable. A virtual experience is then a way of studying a geometrical model (a scientific theory expressed in a geometric language) instead of studying empirical reality. As such, it should be related with work on the empirical reality (excavation, laboratory analysis). As a result, we can act virtually with inaccessible realities through their virtual models.
ALGORRI,M.E. and SCHMITT,F.,1996. Surface Reconstruction from Unstructured 3D data. Computer Graphics Forum 15 (1),47-
ATTARDI,G. et al.,2000. 3D Facial Reconstruction and Visualization of Ancient Egyptian Mummies using Spiral CT Data: Soft Tissue Reconstruction and Texture Application. In BARCELÓ et al. 2000.
BARCELÓ,J.A.,1993. Seriación de datos incompletos o ambiguos: una aplicación arqueológica de las redes neuronales. In L. Valdés, I. Arenal and I. Pujana (eds.) Aplicaciones Informáticas en Arqueología: Teorías y Sistemas. Vol. 2, Bilbao: Denboraren Argia.
BARCELÓ,J.A.,1996a. Arqueología Automática. Inteligencia Artificial en Arqueología. Sabadell: Ed. Ausa (Cuadernos de Arqueología Mediterránea, No. 2).
BARCELÓ,J.A.,1996b. Heuristic Classification and Fuzzy Sets. New Tools for archaeological typologies. Analecta Praehistorica Leidensia.
BARCELÓ,J.A., FORTE,M. and SANDERS,D. (eds.),2000. Virtual Reality in Archaeology. Oxford: Archaeopress (BAR Int. Series 843).
BERGER,B., KLEINBERG,R. and LEIGHTON,T.,1999. Reconstructing a Three-Dimensional Model with arbitrary errors. Journal of the ACM 46 (2),212-235.
BROGNI,A. et al.,2000. Interactive system for the presentation of a virtual Egyptian flute in a real museum. In BARCELÓ et al. 2000.
interactive visualisation system for archaeological sites. In J. Huggett and N. Ryan (eds.) Computer Applications in Archaeology 1994 (BAR Int. Series 600). Oxford: Archaeopress.
CHALMERS,A.G. and STODDART,S.K.F.,1996. Photo-realistic graphics for visualising archaeological site reconstructions. In T. Higgins, P. Main and J. Lang (eds.) Imaging the past. British Museum Occasional Papers 114,85-94.
Interactive Photo-Realistic Visualisation System for Archaeological Sites.
COLLINS,B.,1993. From Ruins to Reality: the Dresden Frauenkirche. IEEE Computer Graphics and Applications 13 (6),13-15.
and PAFFENHOLZ,A.,1993. The Dresden Frauenkirche: rebuilding the Past. In J. Wilcock and K. Lockyear (eds.) Computer Applications in Archaeology 1993 (BAR Int. Series 598). Oxford: Archaeopress,19-24.
DOBSON et al.,1995. based models for anatomical data fitting. Computer aided design 27 (2),139-145.
DURHAM,P., LEWIS,P. and SHENNAN,S.,1993. Artefact matching and retrieval using the Generalised Hough Transform. In J. Wilcock and K. Lockyear (eds.) Computer Applications in Archaeology 1993 (BAR Int. Series 598). Oxford: Archaeopress,25-30.
DURKIN,J.,1994. Expert Systems: Design and Development. New York: Maxwell Macmillan International.
EGGLI,L., HSU,C., BRÜDERLIN,B.D. and ELBER,G.,1996. Inferring 3D models from freehand sketches and constraints. Computer aided design 29 (2),101-112.
FELTOVICH,P.J., FORD,K.M. and HOFFMAN,R.R. (eds.),1997. Expertise in Context: Human and Machine. Menlo Park, CA and Cambridge, MA: AAAI Press/MIT Press.
FLORENZANO,M.J., BLAISE,J.Y. and DRAP,P.,1999. PAROS. Close range photogrammetry and architectural models. In L. Dingwall, S. Exon, V. Gaffney, S. Laflin and M. Van Leusen (eds.) Archaeology in the age of the Internet. CAA 1997 (BAR Int. Series 750). Oxford: Archaeopress.
FOLEY,J. and RIBARSKY,B.,1994. Next-generation data visualisation tools. In L. Rosenblum et al. (eds.) Scientific Visualisation. Advances and Challenges. New York: Academic Press,103-127.
GERSHON,N.,1994. From perception to visualisation. In L. Rosenblum et al. (eds.) Scientific Visualisation. Advances and Challenges. New York: Academic Press,129-139.
GOLDSTEIN,L.,1996. Representation and geometrical methods of problem-solving. In D. Peterson (ed.) Forms of Representation. Exeter: Intellect Books.
GU,P. and YAN,X.,1995. Neural network approach to the reconstruction of freeform surfaces for reverse engineering. Computer aided design 27 (1),59-64.
HOULDING,S.W.,1994. 3D Geoscience Modelling. Computer Techniques for Geological Characterization. New York: Springer-Verlag.
LEWIS,R. and SÉGUIN,C.,1998. Generation of 3D building models from 2D architectural plans. Computer aided design 30 (10),765-769.
LIEBOWITZ,J.,1997. Handbook of Applied Expert Systems. Boca Raton, FL: CRC Press.
MA,W. and HE,P.,1998. B-spline surface local updating with unorganized points. Computer aided design 30,853-862.
OZAWA,K.,1992. Reconstruction of Japanese ancient tombs and villages. In J. Andresen, T. Madsen and I. Scollar (eds.) Computing the Past. Aarhus (DK): Aarhus University Press,415-423.
OZAWA,K.,1996. ASM: An Ancient Scenery Modeller. In T. Higgins, P. Main and J. Lang (eds.) Electronic Imaging and Computer Graphics in museums and archaeology. British Museum Occasional Paper 114.
PARK,H. and KIM,K.,1995. An adaptive method for smooth surface approximation to scattered 3D points. Computer aided design.
PIEGL,L.A. and TILLER,W.,1999. Computing offsets of NURBS curves and surfaces. Computer aided design 31,147-156.
ROWNER,I.,1993. Complex measurements made easy: morphometric analysis of artefacts using Expert Vision Systems. In J. Wilcock and K. Lockyear (eds.) Computer Applications in Archaeology 1993 (BAR Int. Series 598). Oxford: Archaeopress,31-37.
STECKNER,C.,1993. Quantitative methods with Qualitative results in Expert System. Physical Qualities in Historical Shape design. In L. Valdés, I. Arenal and I. Pujana (eds.) Aplicaciones Informáticas en Arqueología: Teorías y Sistemas. Vol. 2, Bilbao: Denboraren Argia,486-499.
STECKNER,C.,1996. Archaeological building reconstruction and the physical analysis of excavation documents. Archeologia e Calcolatori 7 (2),923-938.
STECKNER,C.,2000. Form and Fabric, The Real and the Virtual: Roman Economy-related geometrical mass constraints in Dressel's Table of Amphora Forms. In BARCELÓ et al. 2000.
STECKNER,C. and STECKNER,C.,1987. SAMOS. Statistical Analysis of Mathematical Object Structure. Bollettino d'Informazioni 8 (1).
THALMANN,N.M. and SHEN,J.,1995. The Making of the Xian Terra-cotta Soldiers. In R.A. Earnshaw and J.A. Vince (eds.) Computer Graphics. Developments in Virtual Environments. London: Academic Press,281-295.
TSINGOS,N., BITTAR,E. and GASCUEL,M.P.,1995. Implicit surfaces for Semi-automatic medical Organ Reconstruction. In R.A. Earnshaw and J.A. Vince (eds.) Computer Graphics. Developments in Virtual Environments. London: Academic Press.
WATSON,D.F.,1992. Contouring. A guide to the analysis and display of spatial data. London: Pergamon Press.
WERGHI,N. et al.,1999. Object reconstruction by incorporating geometric constraints in reverse engineering. Computer aided design 31,363-399.