A Call for Outlines: EPSRC-DSTL Call in Signal Processing - Technical Specification


Closing date for outlines: 29 October 2008


The Engineering and Physical Sciences Research Council (EPSRC) and the Defence Science and Technology Laboratory (DSTL) have formed a strategic partnership to fund novel research in signal processing.

EPSRC and DSTL are launching a Call for Outline proposals to address research challenges in the area of signal processing. It is expected that many academic disciplines will have research ideas to contribute to this call. Up to £2 million is available for this call. We envisage a mix of short-term proposals and PhD project based programmes to be successful in this call. Successful proposals will make up the ‘open’ aspect of the DSTL University Defence Research Centre (UDRC) on Signal Processing.

This document contains the following information:

Links & Collaboration

Research Requirements

Who Can Apply and How to Apply

Selection Procedure for Outline Proposals




Signal processing is fundamental to the capability of all modern sensor/weapon systems. The Defence Technology Strategy has identified the development and application of signal processing techniques as a “very high priority within the MOD research programme”.

In order to maintain the UK’s current strong position in signal processing, a national, virtual research centre on signal processing is being formed. This research centre will be known as the University Defence Research Centre (UDRC) on Signal Processing. It aims to develop a skills base in the UK in signal processing with a world class profile, and to form a key component of the wider Community of Practice in signal and data processing.

These objectives will be reached through the creation of a structured, multi-strand research programme involving PhD students, academic and Dstl staff, with a strong emphasis on collaboration across the strands.

[Footnote: Defence Technology Strategy for the demands of the 21st Century. MOD Science | Innovation | Technology. 17 October 2006.]
This technical specification is for the ‘open’ component in the UDRC, which will consist of a jointly funded (50% DSTL and 50% EPSRC) programme of research, where consortium members are chosen by fair and open competition. Open research will be at the unclassified level and will therefore be releasable as PhD theses or open literature publications.

Links and collaboration

The UDRC on Signal Processing will form part of a wider Community of Practice (CoP) in signal and data processing that embraces academia, Research and Technology Organisations (RTOs), defence manufacturing industries and the Defence Technology Centres, to support a cutting edge signal and data processing capability in the UK. The UDRC will form the first step towards the Community of Practice and will host key CoP activities, such as a website and a Signal and Data Processing Annual Conference. Upon establishment of the UDRC, the Community of Practice will be used as a forum to seek areas of potential collaboration with wider industry.


The annual conference will provide a forum for the presentation of all the projects within the UDRC. It is envisaged that there would be strong attendance from Dstl, MOD and the industrial stakeholders in the UK defence community. This will be a unique audience in which to discuss exploitation of signal processing research.



The areas in which signal processing research is required are shown in Table 1.

Table 1: Requirements for Signal Processing


Array calibration

Signal processing technologies that allow sensor arrays to be calibrated or recalibrated whilst in use. These technologies will help to facilitate low cost, low maintenance systems. They include compensating for variations in both the positions and the electronic characteristics of the sensors (including the equalisation of receiver channels), all of which can change with time.

Broadband signal separation

Signal processing technologies that separate broadband signals received at a sensor array into the signals from individual targets, enabling correct detection, classification and localisation (DCL) of multiple simultaneous targets. Subsets of this technology are conventional, adaptive, semi-blind and blind broadband signal separation technologies. These technologies are also essential in communications.


Detection

Signal processing technologies that detect when received signals contain contributions from a target, particularly against a background of clutter (or reverberation) and noise.


Classification

Signal processing technologies that identify or categorise targets.


Localisation

Signal processing technologies that yield more accurate target bearing estimates (and, where relevant, range estimates) than conventional localisation techniques, in particular when targets are close together in angle and/or range.

Multipath mitigation

Signal processing technologies that enable the detection, classification and localisation (DCL) of targets in the presence of multipath. Multipath is the term used to describe signals which appear to come from multiple directions due to echoes/reflections off large objects. Multipath can render classic DCL systems ineffective.

Low size, weight and power

Signal processing technologies that enable the use of hardware of reduced size, weight and power (SWAP). Typically they enable low SWAP processing hardware, but more innovative technologies may also enable low SWAP sensor hardware.


Fleeting targets

Signal processing technologies which enable the DCL of fleeting or rapidly manoeuvring targets, or which use non-stationary sensor arrays.

From these areas, a number of key technical challenges have been identified; these are listed in the appendix to this call.

Key Dates



Closing date for Submission of Outline Proposals: 29 October 2008

Panel Review of Proposals

Who Can Apply and How to Apply

The EPSRC Funding Guide provides information on the eligibility of organisations and individuals.

Note that this call is a targeted funding opportunity provided by EPSRC, and hence Higher Education Institutions, and some Research Council Institutes and Independent Research Organisations, are eligible to apply. A list of organisations eligible to apply to EPSRC is provided at:


Applicants who are unsure about the suitability of their research project are strongly advised
to contact EPSRC/DSTL (see contacts section) before submission.

You should submit your proposal using the Research Councils’ Joint electronic Submission (Je-S) System. When adding a new proposal, you should select Council ‘EPSRC’, document type ‘Outline Proposal’ and the ‘Outlines’ scheme. On the Project Details page you should select the ‘DSTL EPSRC Signal Processing’ Call.

Details of which Research Organisations have registered to use Je-S are available from:

Outlines may be submitted by individual research groups or by a number of groups, either within a single university or across a number of universities. The outline should cover the proposed scientific research programme, including management considerations.

No more than 2 sides of A4 text (10pt minimum font size) will be accepted. Margins should be at least 2 cm.

You may also include a Gantt chart and letters of support if you wish.

If your Outline exceeds the page limit, or does not conform to this format, your
proposal will not be considered.

Due to the nature of the call, with short-term and single-PhD-based grants, there will be no postal peer review of proposals; the proposals will be seen and ranked by a panel.

Applicants, and their Research Organisations, must be prepared to work on an Intellectual Property (IP) agreement with DSTL. You should note that we cannot allow a grant to start without a signed IP Agreement in place.

Note that clicking ‘submit document’ on your proposal form in Je-S initially submits the proposal to your host organisation’s administration, not to EPSRC. Please ensure you allow sufficient time for your organisation’s submission process between submitting your proposal to them and the Call closing date.



Guidance on the types of support that may be sought, and advice on the completion of the research proposal forms, are given on the EPSRC website, which should be consulted when preparing all proposals.

Proposals should be submitted to the EPSRC, to be received by 4pm on 29 October 2008.

Selection Criteria for Outline Proposals


A panel, including independent assessors, speakers and observers from both DSTL and EPSRC, will consider the proposals. The panel will be asked to make recommendations to the funding bodies. The assessment of proposals and the decisions will take account of the priorities and interests of DSTL, and will be based on the following criteria:

Scientific quality and merit.

A high quality vision to contribute in at least one of the research areas outlined in the call above.

Adventurous and novel approaches, applying new technologies to the area.

Relevance to the DSTL interests and adherence to the requirements in Table 1.


Further details about the process, resources and application procedures can be obtained from:

Dr Simon Crook

Aerospace & Defence

EPSRC

Polaris House

North Star Avenue

Swindon

Phone: 01793


For clarification or queries on the scope of
the call, please contact:

Paul Thomas

Technical Lead

University Defence Research Centre

Building 107

Sensors and Countermeasures,

Porton Down,

Salisbury, Wiltshire SP4 0JQ.

Phone: +44 (0)1980 613734

Email: pathomas@dstl.gov.uk


Appendix

The challenges in Table 2, below, are given as guidance to typical relevant problems. The open strand of the research could tackle any of these challenge areas, or other research that can be shown to be relevant. Research would be conducted at an unclassified level only.

Candidate open proposals will be assessed for DSTL relevance and adherence to the
requirements in Table 1.

Table 2: Technical Challenges for the UDRC

CHALLENGE #1: To produce a performance metric (e.g. minimum variance bound) for a general non-linear filtering problem that fully characterises the accuracy of the algorithm

For a non-linear filtering problem, the complete posterior density of the state is to be determined as a function of time. The best achievable error performance is most often given in the form of a theoretical Cramér-Rao lower bound, which provides a lower bound for second order (mean-squared) error only. For non-Gaussian posterior densities this is insufficient to characterise the accuracy of the algorithm.

Many real-life problems are non-Gaussian, and thus a more appropriate approach is clearly required. Further, the new approach should be applicable to the particularly challenging case of a static target but with evolving signal information.
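For reference, the bound alluded to above can be written as the posterior (Bayesian) Cramér-Rao bound, the usual yardstick for non-linear filters (notation assumed here, not taken from the call): for an estimator $\hat{x}_k$ of the state $x_k$ from measurements $z_{1:k}$,

```latex
% Posterior Cramér-Rao bound: constrains the error covariance only
\mathbb{E}\!\left[(\hat{x}_k - x_k)(\hat{x}_k - x_k)^{\mathsf{T}}\right] \;\succeq\; J_k^{-1},
\qquad
J_k \;=\; \mathbb{E}\!\left[-\,\nabla_{x_k}\nabla_{x_k}^{\mathsf{T}}\, \log p(x_k, z_{1:k})\right]
```

Since $J_k$ captures only the average curvature of the log joint density, two posteriors with identical covariance but very different shapes (e.g. unimodal versus multimodal) receive the same bound, which is precisely the insufficiency for non-Gaussian posteriors that this challenge describes.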

CHALLENGE #2: To develop novel techniques which can be used for the calculation of the posterior distribution of a spiky signal distribution in space, given very limited processing power and a correlated noise structure

If the posterior distribution of a signal is very spiky, or ‘witches hat’-like with few peaks, it is notoriously difficult to obtain a good estimate of it, particularly when the noise level is low relative to the size of the peaks. This problem is further complicated by the fact that very often inference needs to be made in processor-poor environments. Novel approaches are sought to solve this problem using as little processing power as possible.

CHALLENGE #3: To develop a fast and efficient algorithm to simulate non-stationary, spatially and temporally correlated noise

Consider a set of measurements made by many sensors placed in a noisy environment, where the noise is both temporally and spatially correlated and has time-varying statistics. Given a limited set of measurements of this environment, the challenge is to replicate this noise signal synthetically. In particular, it should be possible to place (synthetic) new sensors anywhere within the environment or remove them.
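As a minimal sketch of the simulation side of this challenge (all parameters hypothetical, and a separable, stationary squared-exponential covariance assumed purely for illustration), spatially and temporally correlated Gaussian noise can be drawn via Cholesky factors of the two covariances:

```python
import numpy as np

def correlated_noise(positions, n_steps, ell_space=1.0, ell_time=5.0, seed=0):
    """Draw zero-mean Gaussian noise that is correlated across sensor
    positions and across time, using a separable squared-exponential
    covariance model (an illustrative assumption, not the call's model)."""
    rng = np.random.default_rng(seed)
    n_sensors = len(positions)
    # Spatial covariance between sensor positions
    d = positions[:, None] - positions[None, :]
    K_s = np.exp(-0.5 * (d / ell_space) ** 2)
    # Temporal covariance between time steps
    t = np.arange(n_steps)
    dt = t[:, None] - t[None, :]
    K_t = np.exp(-0.5 * (dt / ell_time) ** 2)
    # Cholesky factors (small jitter for numerical stability)
    L_s = np.linalg.cholesky(K_s + 1e-6 * np.eye(n_sensors))
    L_t = np.linalg.cholesky(K_t + 1e-6 * np.eye(n_steps))
    # Colour white noise in space (left) and time (right)
    white = rng.standard_normal((n_sensors, n_steps))
    return L_s @ white @ L_t.T

positions = np.linspace(0.0, 10.0, 8)   # hypothetical sensor coordinates, metres
noise = correlated_noise(positions, 200)
print(noise.shape)  # (8, 200)
```

Because the model is Gaussian, placing a synthetic sensor at a new position amounts to conditioning on the existing draws, which is one route to the “add or remove sensors” requirement; the non-stationary, data-driven case sought by the challenge is of course harder than this stationary sketch.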

CHALLENGE #4: To extract a weak signal from non-stationary, spatially and temporally correlated noise

Consider a set of measurements made by many sensors placed in a noisy environment, where the noise is both temporally and spatially correlated and has time-varying statistics. Given this environment, characterised by spatial and temporal scales of correlation, the challenge is to detect the presence of a weak, stationary signal described by smaller scales of temporal and spatial correlation.
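One classical baseline for this setting (a reference point, not the sought solution) is the generalised matched filter: prewhiten with the noise covariance, then correlate with the expected signal. The covariance, template and amplitude below are all hypothetical:

```python
import numpy as np

def whitened_detection_statistic(x, s, K_noise):
    """Generalised matched-filter statistic for a known template s observed
    in zero-mean Gaussian noise with covariance K_noise. Larger values
    indicate the template is more likely present."""
    K_inv_s = np.linalg.solve(K_noise, s)      # K^{-1} s
    return (x @ K_inv_s) / np.sqrt(s @ K_inv_s)

# Hypothetical 1-D example: long-scale correlated noise plus a weak,
# shorter-scale template, matching the challenge's scale separation.
rng = np.random.default_rng(1)
n = 256
t = np.arange(n)
K = np.exp(-0.5 * ((t[:, None] - t[None, :]) / 8.0) ** 2) + 1e-4 * np.eye(n)
s = np.sin(2 * np.pi * t / 16.0)               # template, shorter scale than the noise
noise = np.linalg.cholesky(K) @ rng.standard_normal(n)

stat_absent = whitened_detection_statistic(noise, s, K)
stat_present = whitened_detection_statistic(noise + 0.2 * s, s, K)
print(stat_present > stat_absent)  # True
```

The whitening step is exactly where the challenge bites: it presumes the noise covariance is known and stationary, whereas here it must be estimated and tracked as the statistics drift.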


CHALLENGE #5: To detect, identify and analyse signals in the presence of similar signals

In electronic surveillance, many current and future challenges involve detection of signals in the presence of other, similar, signals. The signal environment is extremely busy, and thus the traditional process of detection of a signal buried in noise at reducing signal to noise ratio is no longer sufficient. Signals of interest may be at high SNR, but need to be detected, classified, isolated and analysed as close to real time as is possible. All interfering signals are potentially signals of interest, and all overlap in time and frequency.


CHALLENGE #6: To detect and identify signals within a short observation time frame and low observation duty cycles

A problem exists to detect and classify multiple signal types, but with a very low duty cycle for the receiver. In certain circumstances, very short windows of opportunity exist where the local signal environment can be sampled, and the duty cycle of observation opportunities can be as low as 10%. The signals to be detected may be continuous or intermittent (bursted) transmissions. Within these short windows, it is desirable to detect and classify multiple transmissions in terms of signal type (e.g. analogue or digital comms, navigation etc.) and location of transmitters. The low duty cycle of observations for the receiver makes this a challenging prospect.

CHALLENGE #7: To classify a noisy signal into known constituent parts

A set of measurements made by a sensor placed in a noisy environment contains information from the noisy background as well as the signal to be detected. Techniques are required to classify the measurements into groups or classes to remove some of the background effects. Detecting the true signal in one class within the time history of the measurements depends on successful classification and minimising overlap between classes. For example, if an innocuous measurement is misclassified as hazardous (say) a small fraction of the time, but occurs in great numbers, then it may trigger a false response in the hazardous class.

This challenge could also link with the work “to extract a weak signal from non-stationary, spatially and temporally correlated noise”.
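The base-rate effect described above is easy to quantify. With hypothetical numbers (a 0.1% misclassification rate, a million innocuous measurements and ten true hazardous ones), false responses swamp true ones:

```python
# A minimal base-rate sketch (all numbers hypothetical): even a small
# per-measurement false-alarm probability produces many false
# "hazardous" detections when innocuous measurements dominate.
p_false = 0.001          # innocuous measurement misclassified as hazardous
n_innocuous = 1_000_000  # innocuous measurements in the observation period
n_hazardous = 10         # true hazardous measurements in the same period
p_detect = 0.95          # probability a hazardous measurement is caught

expected_false = p_false * n_innocuous   # 1000 false alarms
expected_true = p_detect * n_hazardous   # 9.5 true detections
print(expected_false, expected_true)
```

Under these assumptions false alarms outnumber true detections by two orders of magnitude, which is why minimising inter-class overlap matters even for apparently small error rates.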

CHALLENGE #8: To detect and track aerosol cloud signals from spectroscopic aerosol lidar data

Aerosol lidar systems can detect and monitor aerosol clouds across an area of several square kilometres. Automatic cloud detection, even for relatively strong signals, is difficult to implement due to a variety of artefacts that can be present within the data:

Objects other than aerosol clouds can generate non-stationary signals of similar spatial frequencies and extended signal envelopes.

Signal amplifier systems will distort observed signals as a function of signal intensity and frequency.

Atmospheric conditions can change on a spatial and temporal basis, giving rise to localised or general change in signal bias levels.

Changing ambient light levels can affect the spectroscopic signal to noise ratio as a unique function for each spectral band.

The challenge would be to autonomously detect and track the extent of a moving aerosol cloud in the presence of the above events, whilst still maintaining integrity of the spectral aerosol information.

CHALLENGE #9: To develop a partially supervised learning algorithm to detect and classify anomalies in real time streaming spectroscopic data

To provide the artificial dog’s brain to go with the artificial dog’s nose.

Develop a learning algorithm that will accept streaming spectroscopic data (mass, mobility or optical) and detect anomalies in real time. Further, it should then attempt to classify these anomalies as safe (normal), dangerous (previously identified as dangerous) or unknown, based on previous experience. Finally, if appropriate and possible, it should then proceed to attempt to classify the danger. Where possible, the algorithm should remain biomimetic, in as much as different components should attempt to mimic the different layers of olfactory processing in mammals.
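A minimal sketch of the partially supervised idea (interface, thresholds and data all hypothetical): maintain running background statistics, flag large deviations, and label flagged readings against a memory of previously taught anomalies:

```python
import math

class StreamingAnomalyDetector:
    """Sketch of a partially supervised streaming detector: flag readings
    far from the running background statistics (Welford's algorithm),
    then label anomalies as a remembered class or 'unknown'."""

    def __init__(self, z_threshold=4.0):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0            # running sum of squared deviations (Welford)
        self.z_threshold = z_threshold
        self.known = []          # (value, label) pairs taught so far

    def teach(self, value, label):
        self.known.append((value, label))

    def update(self, x):
        """Return None for normal data, else a label for the anomaly."""
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)
        if self.n < 30:          # warm-up: learn the background first
            return None
        std = math.sqrt(self.m2 / (self.n - 1))
        if abs(x - self.mean) <= self.z_threshold * std:
            return None
        # Anomaly: compare against previously labelled anomalies.
        for value, label in self.known:
            if abs(x - value) < 3.0 * std:
                return label
        return "unknown"

det = StreamingAnomalyDetector()
for i in range(100):
    det.update(10.0 + 0.1 * (-1) ** i)   # quiet background around 10
det.teach(50.0, "dangerous")
label_a = det.update(50.2)    # near a taught example
label_b = det.update(-40.0)   # novel anomaly
print(label_a, label_b)       # dangerous unknown
```

A real system would exclude flagged readings from the background statistics and operate on spectra rather than scalars; this sketch only illustrates the safe/dangerous/unknown decision structure the challenge asks for.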

CHALLENGE #10: To develop a general theory to guide the processing of signals
from Synt
hetic Aperture Radar systems with more than two beams.

Future (and at least one current) Synthetic Aperture Radar systems have active array
antennas which have panels which can be divided into sub
panels to give multiple beams.
There may be significant ben
efit in utilising multiple beams to detect moving targets in
difficult clutter (eg urban, sea etc) at high resolution. The problem is that there is no general
theory to guide the best way of processing the signals from such systems for more than two
when there are many different types and classes of scatterer. There is an elementary
result relating to the greatest eigenvalue and corresponding eigenvector of the covariance
matrix for the signals received by the N beams but this is only useful for a sin
gle class of
target in white noise. In the real life situation where there are complicated and difficult types
of clutter and targets more general results are required. One approach to this problem is via
the covariance matrix for the N channels. For examp
le, in the case of two beams the joint
probability density function of the four parameters of the complex, Hermitian, covariance
matrix (a Wishart distribution) can be transformed into a joint p.d.f of the two eigenvalues
and two angles, one of which is th
e phase shift between the signals received by the two
beams. This is accomplished by means of a special unitary transformation in SU(2). The
resulting transformed p.d.f. then gives useful insights into how detection algorithms can be
constructed for a two
beam system. It is expected that a similar approach for N beams
would yield similar insight. However, this is a challenging programme because the required
theory needs to be based on the algebra and geometry of the special unitary group SU(N).
Even attacki
ng this problem for three beams would be a useful step forward. Much has been
learned about SU(3) in the past few decades. For example, SU(3) is central to the ‘eight fold
way’ developed by Gell
Mann and Ne’eman. Other applications have included SU(3) nucl
physics models based on the symmetry group of the 3D harmonic oscillator. In the case of
more beams, SU(4) and SU(5) also have both been studied in relation to particle physics.
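The two-beam decomposition mentioned above can be sketched as follows (notation assumed here, not taken from the call). The complex Hermitian covariance of the two channels is diagonalised by an SU(2) element parameterised by two angles:

```latex
\mathbf{R} \;=\; \begin{pmatrix} r_{11} & r_{12}\\ r_{12}^{*} & r_{22}\end{pmatrix}
\;=\; \mathbf{U}\,\mathrm{diag}(\lambda_{1},\lambda_{2})\,\mathbf{U}^{\dagger},
\qquad
\mathbf{U} \;=\; \begin{pmatrix}\cos\theta & -e^{-i\phi}\sin\theta\\[2pt]
e^{i\phi}\sin\theta & \cos\theta\end{pmatrix} \in SU(2)
```

The four real parameters $(r_{11}, r_{22}, \operatorname{Re} r_{12}, \operatorname{Im} r_{12})$ thus map to $(\lambda_{1}, \lambda_{2}, \theta, \phi)$, with $\phi$ interpretable as the inter-beam phase shift; the Wishart density of $\mathbf{R}$ can then be transformed into a joint p.d.f. over eigenvalues and angles, as the challenge text describes.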

CHALLENGE #11: To develop techniques enabling electronically scanned forward-looking radars on fast jets to detect slow, low cross section targets in difficult clutter

A major advantage of electronically scanned active antenna arrays is that they can form the basis of multiple function radars by combining different sub-arrays in different ways. In the case of fast jets, the radars are installed in a nose cone and are forward looking. Such radars are usually used in an air-to-air mode or for ground target detection and targeting. However, in the case of a forward looking radar on a fast jet, almost all moving targets can be viewed as ‘slow’ in comparison with the speed of the jet. Hence target discrimination against a stationary clutter background using differential Doppler frequency shifts is a difficult problem, especially when the target is stealthy. The challenge is to devise ways of giving such radars the capability of detecting small, slowly moving targets, such as small vehicles, mobile missile launchers, unmanned aerial vehicles and cruise missiles at low altitudes, against ground clutter. An approach to this problem is to use Space-Time Adaptive Processing (STAP) techniques in conjunction with the multiple channels afforded by sub-arrays of the electronically scanned active array to provide the required discrimination. However, for a forward looking radar, the clutter spectrum is range dependent in such a way as to lead to range ambiguities. The performance of a STAP technique is likely to be degraded under these circumstances. On the other hand, for a forward looking radar all clutter Doppler frequencies are positive. Hence targets with negative frequencies do not compete with the clutter, and this may be an advantage for a STAP technique.

CHALLENGE #12: To devise methods for SAR processing when there are zeros in the range and azimuth antenna beam patterns and/or in the transmitted chirp spectrum and/or the chirp centre frequency is not constant

Modern Synthetic Aperture Radar (SAR) systems use active antennas comprising hundreds or thousands of active transmit/receive modules in a 2-D array. One benefit of this is that it is possible in principle to phase the signals transmitted and received by the modules in order to dynamically change the beam pattern as the platform carrying the antenna passes a point on the ground. Hence, in principle, notches can be placed in the beam pattern in both azimuth and range in order to attenuate the reception of signals emitted from a set of specific points on the ground. In addition, zero bands can be introduced into the transmitted range chirp spectrum in order to avoid transmitting in specific spectral bands. An extension of this is active ‘frequency hopping’, where the centre frequency of the transmitted band is varied in time. The use of conventional SAR processing on the signals resulting from the above techniques results in distortions (such as interference fringes, phase discontinuities, local loss of coherence etc.) in the resulting images. This challenge is therefore aimed at devising practical methods for processing the signals which result from the above techniques to form SAR images with low distortion. It is expected that this work would also provide insight on the limitations on antenna beam shape and discontinuous spectra which can be used for SAR systems.

CHALLENGE #13: To develop general algorithms for distributed signal fusion in a network of sensors

In a spatially distributed wireless network of sensors, where transmission or power constraints limit the range of broadcast of a signal, there may be no single node at which fusion can occur. Furthermore, no sensor can see all the transmissions from all the other nodes. In this environment, sensors are forced to perform local fusion using the signals they have direct access to, in combination with the (indirectly transmitted) outputs from other sensors’ local fusion.

The challenge is to develop generic algorithms which can perform fusion under these constraints.
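A standard baseline for this setting (a sketch, not the sought generic algorithm) is average consensus: each node iterates using only values heard from its direct neighbours, yet every node converges to the global mean. The network, measurements and step size below are hypothetical:

```python
import numpy as np

def consensus_fusion(measurements, neighbours, n_rounds=100, step=0.3):
    """Average-consensus sketch: each node repeatedly nudges its estimate
    towards its neighbours' values. For a connected graph and a small
    enough step, all nodes converge to the global mean with no central
    fusion node."""
    x = np.array(measurements, dtype=float)
    for _ in range(n_rounds):
        x_new = x.copy()
        for i, nbrs in neighbours.items():
            # Each node only sees values broadcast by its direct neighbours.
            x_new[i] += step * sum(x[j] - x[i] for j in nbrs)
        x = x_new
    return x

# Hypothetical 5-node line network: node i hears only nodes i-1 and i+1.
neighbours = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
z = [1.0, 2.0, 3.0, 4.0, 10.0]
est = consensus_fusion(z, neighbours)
print(est)  # every node close to 4.0, the global mean
```

The symmetric updates preserve the network-wide sum, so the fixed point is exactly the average; richer distributed fusion schemes (e.g. exchanging likelihoods rather than point estimates) build on the same message-passing pattern.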

CHALLENGE #14: To automate the assimilation and analysis of the audio and visual representation of acoustic data

Submarine passive sonar operators use a combination of visual and audio data to detect, track and identify noise sources. The visual display shows energy as a function of bearing and time, such that noise sources appear as tracks whose appearance on the screen depends on their motion relative to the sonar. The audio information is the acoustic data in the direction selected by the operator. The operational process is for the operator to listen to tracks that he can see on the display to identify the noise source. Operators also use spectral information to assess parameters when the noise source has been identified as a merchant vessel. The challenge is to produce an algorithm capable of identification and classification of merchant vessels and biological sources which exploits the information used by the human analyst.

CHALLENGE #15: To derive a beamformer with a greatly reduced computational load (factor of 10)

Submarine sonar arrays comprise several thousand hydrophone channels being sampled at over 10 kHz using a 24-bit analogue to digital converter. The beamformer is required to manipulate this data to provide wide azimuthal coverage using multiple beams known as fans. This is a major computational task requiring substantial processing capability. Limitations in processing power preclude simultaneous formation of beams in all the directions that are potentially available. A case for consideration is an array 64 elements square with a 10 cm separation, a maximum acoustic frequency of 7.5 kHz and a sampling rate of 18 kHz. The challenge is to derive an alternative approach to beamforming that is more efficient than the existing time-domain beamformers.
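A rough load estimate for the quoted array illustrates why a factor-of-10 reduction matters. The beam count and per-beam operation count below are assumptions, not figures from the call:

```python
# Back-of-envelope load for time-domain delay-and-sum beamforming
# on the stated case (assumptions noted inline).
n_elems = 64 * 64            # 4096 hydrophones, 10 cm pitch (from the call)
fs = 18_000                  # samples/s per channel (from the call)
n_beams = 1000               # assumed number of simultaneous beams
ops_per_sample = 2           # assumed multiply-accumulate per element per beam

samples_per_s = n_elems * fs
tds_ops = samples_per_s * n_beams * ops_per_sample
print(f"{tds_ops:.3e} ops/s")  # ~1.5e11 ops/s
```

At roughly 1.5 × 10¹¹ operations per second under these assumptions, frequency-domain, FFT-based or sparse beam-space formulations are natural places to look for the required order-of-magnitude reduction.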

CHALLENGE #16: To distinguish between man-made echo sounding pulses and those made by marine mammals

Military sonar systems, fish-finding sonar, echo-sounders and marine mammals all use pulses for detection of objects underwater. Military intercept sonar systems need to distinguish between these different sources of noise. This is difficult because they often have similar characteristics in terms of frequency and pulse length. Furthermore, distortion can be introduced during propagation through the ocean. The challenge is to identify properties of the man-made and biological signals that enable them to be separated.

CHALLENGE #17: Techniques for identification of spatial clustering in data

In a linear search, where a searcher is travelling back and forth along the same track, or parallel tracks in a ladder pattern, the challenge is to develop a method of signal processing such that spatial clustering of signals within the data streams may be identified. The signal tends to be weak and/or ill-defined against a background of clutter and therefore may be hard to distinguish.

CHALLENGE #18: Processing techniques for automatically identifying within a data stream spurious steps or transients of varying characteristics

It is not uncommon for time series data to contain spurious “steps” or transients due to sensor/hardware problems or other external factors. The initial challenge is to identify these features by some form of automated algorithm.

A “step” in this case is described as a rise then fall (or vice versa) in the DC level of the time series from the background level of the data. The gradients of the steps vary from case to case, but generally settle at the new level in less than 0.25 seconds for data sampled at 100 Hz. The duration that the data is at the increased or decreased level also varies for different features, but generally occurs for less than 60 seconds. The levels of the DC shifts also vary for different features and are typically small, so they cannot be seen without zooming in on the specific regions of data. The aim is to automatically identify the whole time period of a step. A typical transient feature can be described as a spike, or a step feature with a DC level shift that does not return back to its original level, or only gradually returns back to its original level (not via a second edge in the opposite direction).
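A simple robust baseline for the edge-finding part of this challenge (window and threshold chosen here to match the quoted 0.25 s settling time at 100 Hz; the test trace is synthetic):

```python
import numpy as np

def detect_steps(x, window=25, threshold=4.0):
    """Flag candidate step edges as points where the medians of the
    trailing and leading windows differ by more than `threshold` robust
    noise units. 25 samples is ~0.25 s at 100 Hz, matching the settling
    time quoted in the challenge."""
    x = np.asarray(x, dtype=float)
    noise = np.median(np.abs(np.diff(x))) + 1e-12   # robust noise scale
    edges = []
    for i in range(window, len(x) - window):
        before = np.median(x[i - window:i])
        after = np.median(x[i:i + window])
        if abs(after - before) > threshold * noise:
            edges.append(i)
    return edges

# Hypothetical 100 Hz trace: a small DC step from sample 300 to 500.
rng = np.random.default_rng(2)
x = rng.standard_normal(1000) * 0.05
x[300:500] += 0.5
edges = detect_steps(x)
print(edges[0], edges[-1])  # edge clusters near samples 300 and 500
```

Medians make the statistic insensitive to isolated spikes, so this sketch separates true level shifts from the transient features the challenge also describes; grouping the flagged indices into entry/exit edge pairs then recovers the whole time period of each step.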

CHALLENGE #19: Methods for correction or handling of data dropouts, spikes or DC shifts

For time series data which contain data dropouts, spikes/glitches or DC shifts, identify solutions for correcting the data without compromising the statistical characteristics and spectral content of the data.

It is desirable to remove or amend these sections for statistical analysis or for improving the signal to noise ratio; however, the solution should have minimal impact on the rest of the data.

CHALLENGE #20: To develop signal processing that can identify and track the acoustic micro-Doppler signature from a moving target against the interference from other moving non-targets and the natural environment

How can moving targets be tracked using acoustic micro-Doppler? The difficulty in utilising the micro-Doppler signature is the resolution that is available and the duration of individual components. This in turn makes it difficult to separate the micro-Doppler from similar characteristics that come from other moving non-targets and the natural environment, and to characterise the target. The micro-Doppler may come from the vibrations or movement of a structure, such as a helicopter, the swinging arms and legs of a walking person, or a slow moving swimmer. Characterisation of the micro-Doppler, or differences between signatures (if available), could be used to discriminate between individuals, identify faults or even to predict failure or intent.


CHALLENGE #21: To develop signal processing that can identify and decode periodic information in incomplete and noisy signals

How can weak and multiple periodicity be identified in noisy signals? The difficulty is in identifying weak and/or multiple periodic information that is degraded or distorted by the background. Acoustic signals contain more information than their frequency content: temporal characteristics and periodicity. Many animals depend on reliable identification of this periodicity in the acoustic signals they perceive in their natural environment in order to navigate and communicate. Periodicity can be utilised to discriminate between man-made and natural objects or events (targets), but the background or natural environment can contain many persistent non-targets that appear target-like and confuse existing solutions.

CHALLENGE #22: To exploit environmental and contextual information in image processing to detect and identify objects observed in a variety of conditions

How can context information be utilised to improve image classification? Clues that can help recognise objects are often found in the background or clutter. The difficulty is in incorporating this information into an image processing scheme when the correlation between object and clutter is unknown and the statistics of the environment are poorly characterised. High frequency sonars are now a prime choice for high resolution imaging of the seabed. Detectors conventionally look at threshold detection or a highlight/shadow dichotomy. This approach works extremely well for high resolution images in very benign and easy environments. Automatic classification rates in these cases are comparable with operator classification rates. However, in moderate to difficult environments, detection and classification rates fall considerably and human performance far exceeds computer aided classification. The hypothesis is that most detectors and classification schemes fail to tackle issues such as high clutter density, camouflage and sand ripples. In many cases, the human eye can easily detect and classify these targets; however, automatic target recognition (ATR) fails.

CHALLENGE #23: To identify an optimal solution to the problem of failed array modules

How can synthetic array processing infill failed modules? The difficulty is in the different scales of correlation that exist between the signal and noise across the array. Submarine sonar systems, comprising several thousand hydrophones, have a major problem coping with failed hydrophone channels, modules and multiplexers. In one current system, nearly half of the hydrophones in the array are unavailable due to failure of power distribution pods. Repair work to recover the availability of these modules would require the submarine to return to dry dock, taking the submarine out of service for several months at a cost of several million pounds. In another case, the system documentation states that processing will be stopped if two out of four modules fail. The loss of capability could cause the mission to fail or even to be abandoned. The idea of synthetic infill is to develop a method of synthesising data from the failed module: this would predict the signal of failed modules from the phase corrected signals of neighbouring modules, at a time when the existing modules were in the physical position of the failed modules. While the concept is intuitively simple, the implementation of the idea has some important issues, such as nearfield/farfield corrections and undefined absolute array positioning, which would need to be addressed.

CHALLENGE #24: To develop new signal processing that can offer robust detection and identification of objects or obstacles where the specific object to be detected is not pre-defined.

How can undefined objects be reliably identified and recognised? There is a difficulty in recognising an object that is not pre-determined, or whose characteristics are changing, but is known to cause a fault or some other impact, such as an improvised explosive device. Detecting novel events not in the modelling or training data is important for any effective processor. Existing methods (e.g. pattern recognition, Markov models, neural networks) require either prior knowledge about various novelty conditions or models of the monitored system. New ideas borrowed from immunology could offer a more robust method that detects any unacceptable (unseen) change rather than looking for specific (known) abnormal or novel activity. This has already found application in fault detection and structural and biological monitoring. The same immune-based features, e.g. positive/negative selection, partial matching, learning and memory, would apply to any other area of signal processing using system monitoring, or any target detection problem where the specific object to be detected is not pre-defined.
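The immune-inspired approach can be illustrated with the classical negative-selection scheme: candidate detectors are generated and only those that match no "self" (normal) data survive; anything a surviving detector matches is flagged as novel. The 1-D distance matching rule and the parameters below are illustrative assumptions, and a deterministic candidate sweep is used here in place of the usual random generation, purely for clarity:

```python
def train_detectors(self_samples, radius, low, high, step):
    """Negative-selection censoring: sweep candidate detectors over
    [low, high] and keep only those matching no 'self' (normal) sample.
    (Classical immune algorithms generate candidates randomly; a
    deterministic sweep is used here so the sketch is reproducible.)"""
    detectors = []
    d = low
    while d <= high:
        if all(abs(d - s) > radius for s in self_samples):
            detectors.append(d)
        d += step
    return detectors


def is_novel(x, detectors, radius):
    """A sample is flagged as novel if any surviving detector covers it."""
    return any(abs(x - d) <= radius for d in detectors)
```

Note that nothing here models the novelty itself: the detectors cover everything that is *not* normal, which is exactly the property the challenge asks for when the target is not pre-defined.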
CHALLENGE #25: To develop signal processing that is tolerant to global changes in the environment to identify and locate low observable objects whose characteristics depend on the surrounding environment.

How can low observable objects be identified in a non-stationary environment? The difficulty is that the background against which the object is to be detected is changing, and characteristics of the object can depend on these changes. There may also be intermittent information available as the background obscures or occludes the object. Such a problem is faced by mine clearance operations in the surf and beach zone. This is a hostile environment that moves between aquatic and terrestrial settings; the environment includes the beach, which can change between dry and wet conditions and levels of saturated sand due to tides and water surge, and the surf zone, which extends from the beach out to very shallow water. Future operations should transition seamlessly from deep water through very shallow water (VSW) and the surf zone (SZ) to the beach zone (BZ). This supports the operational requirement to conduct rapid landing on defended beaches. Accurate identification and location would enable quicker and safer mine clearance. There is also a requirement for safe stand-off detection of (metallic and non-metallic) weapons carried on persons under clothing in a range of changing environments: school and government buildings, transportation terminals, and other public places. The difficulty remains that limited discriminatory information is available when the object is concealed.
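One minimal way to be tolerant to global environmental change is to let the background model itself drift: the sketch below tracks an exponentially weighted mean and variance of a 1-D signal and flags only samples that deviate sharply from the *current* background estimate. The parameters, the 1-D setting and the k-sigma rule are illustrative assumptions, not the solution the call seeks:

```python
def detect_against_drifting_background(samples, alpha=0.05, k=4.0):
    """Flag samples that deviate from a slowly adapting background.

    An exponentially weighted running mean/variance absorbs slow global
    drift (tide, lighting, saturation); a sample more than k sigma from
    the current background estimate is flagged as a candidate object.
    alpha and k are illustrative choices.
    """
    mean, var = samples[0], 1.0
    flags = []
    for x in samples:
        sigma = var ** 0.5
        flags.append(abs(x - mean) > k * sigma)
        # adapt the background with non-anomalous samples only, so a
        # detected object does not contaminate the background model
        if not flags[-1]:
            mean = (1 - alpha) * mean + alpha * x
            var = (1 - alpha) * var + alpha * (x - mean) ** 2
    return flags
```

The point of the sketch is the separation of time scales: slow global change is tracked and ignored, while a fast local deviation is reported.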

CHALLENGE #26: To develop signal processing to optimise acoustic communications for mobile platforms in urban and littoral environments, where the environment acts as a limiter on high capacity communications.


How can high capacity acoustic communications be optimised in a limiting environment? The constraints lie with the limited and intermittent information that is made available in the environment and the connectivity between 'disadvantaged' users. Future joint capability depends crucially on reliable, secure and timely communications and the availability of high capacity communication networks. The requirement is for optimised communications between 'disadvantaged' users in challenging environments without compromising stealth. This includes underwater communications for autonomous vehicles and submarines, and communications between underwater and air platforms.

CHALLENGE #27: To develop data processing to enhance target features in image
recognition tasks.

How can object or event identification be optimised using feature enhancement? There is a known difficulty, beyond current feature enhancement capability, in selecting features that are not pre-defined in order to enhance only the recognition of the object. There is a requirement in sidescan sonar images and/or synthetic aperture sonar (SAS) images for increased resolution, region segmentation, sidelobe reduction and speckle suppression. Sensors are increasingly improving in resolution, but the underwater environment remains hostile and the information content of the images remains poor. Ideally one would expect to employ autonomous vehicles armed with intelligent automatic target recognition algorithms to distinguish between potentially dangerous and non-threatening objects. Their success depends on how well sonar images exhibit certain features of the underlying scene and how those features are combined and presented to the classifier.
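As a concrete baseline for the speckle-suppression requirement, the sketch below applies a plain 3x3 median filter, a standard first step against impulsive sonar speckle; the challenge asks for feature enhancement that goes beyond this kind of fixed, pre-defined filtering:

```python
def median_filter(image):
    """Apply a 3x3 median filter to a 2-D list-of-lists image.

    Median filtering suppresses isolated speckle samples while keeping
    step edges sharper than linear smoothing would. Border pixels are
    passed through unchanged in this minimal sketch.
    """
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = sorted(image[y + dy][x + dx]
                            for dy in (-1, 0, 1) for dx in (-1, 0, 1))
            out[y][x] = window[4]          # median of the 9 window values
    return out
```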


CHALLENGE #28: To develop locally invariant signal processing to discriminate between key man-made and natural features.

The difficulty in reliable identification of objects, and discrimination between targets and non-targets and across classes of targets under different conditions, is selecting features that are invariant (or tolerant) to identity-preserving changes such as position, size, background or viewing conditions. Most features fail to be both discriminatory and robust, yet the problem is faced routinely in minehunting and weapons sonar, and indeed in everyday life. Texture classification based on fractals could offer a means to discriminate between features such as edges and uniform regions, which can be used at different resolutions to separate man-made objects from the natural environment. When there are limited features available or the background has time-varying characteristics, it can be difficult to extract suitable features, e.g. in noisy sidescan sonar images. Fractional calculus can be utilised to detect weak signals, and it becomes a requirement to understand how different fractional orders could offer an optimal solution to extracting key features.
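The fractal-texture idea rests on estimating a fractal dimension from image features at several resolutions. A minimal box-counting estimator over a 2-D point set (e.g. extracted edge pixels) is sketched below; the grid sizes and the least-squares fit are standard choices, but all details are illustrative:

```python
import math

def box_counting_dimension(points, sizes):
    """Estimate the box-counting (fractal) dimension of a 2-D point set.

    For each box size s, count the occupied grid boxes N(s), then fit the
    slope of log N(s) against log(1/s) by least squares. Man-made edges
    tend toward dimension 1, rough natural texture toward higher values.
    """
    logs = []
    for s in sizes:
        boxes = {(int(x / s), int(y / s)) for x, y in points}
        logs.append((math.log(1.0 / s), math.log(len(boxes))))
    # least-squares slope of log N against log(1/s)
    n = len(logs)
    mx = sum(a for a, _ in logs) / n
    my = sum(b for _, b in logs) / n
    return (sum((a - mx) * (b - my) for a, b in logs)
            / sum((a - mx) ** 2 for a, _ in logs))
```

Comparing the estimated dimension across image regions, and across resolutions, is one way such a feature could remain tolerant to the identity-preserving changes discussed above.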

CHALLENGE #29: To analyse an element-specific spectral profile and identify the most likely material present.

Neutron activation spectroscopy is used to provide non-invasive classification of materials through a range of barriers. A wide angle neutron beam is used to induce nuclear transitions in the materials present, which give rise to element-specific gamma-ray photons. Using timing information, elemental spectra are collected for each volume element (voxel) covering the target volume. Currently, these spectra are first matched to library profiles for each element (e.g. carbon, nitrogen, oxygen) to produce an elemental signature. This elemental signature is then classified against a second library to identify the material. The challenge is to derive a more efficient and robust technique for material classification that will also provide confidence levels on the material predicted.
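For the second-stage classification against the material library, a simple illustrative scheme is nearest-library matching with a similarity score that doubles as a crude confidence level. The cosine measure and the hypothetical two-entry library in the usage below are assumptions for illustration only, not real material profiles:

```python
import math

def classify_material(signature, library):
    """Match an elemental signature (e.g. C/N/O fractions) to the closest
    library entry and report the cosine similarity as a confidence score.

    `library` maps material names to reference signatures of the same
    length as `signature`.
    """
    def cosine(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u))
        nv = math.sqrt(sum(b * b for b in v))
        return dot / (nu * nv)

    best = max(library, key=lambda name: cosine(signature, library[name]))
    return best, cosine(signature, library[best])
```

The challenge asks for something more robust than this kind of fixed two-stage matching, but any replacement would still need to expose a calibrated confidence like the score returned here.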

CHALLENGE #30: To automatically segment complex X-ray images into objects.

X-ray transmission imaging is used in aviation security to screen passenger baggage for threat objects. Many algorithms exist (Sobel, Canny, Phase Congruency) that will extract edges and other features from an image. Currently, segmented multi-view X-ray images are being investigated for 3D image reconstruction, where the accuracy of the reconstruction depends on the effectiveness of the edge detection algorithm. The challenge is to derive an alternative edge detection approach (not necessarily a single algorithm) that can improve the accuracy of 3D image reconstruction.
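The classical detectors named above would serve as the baseline against which any alternative is judged; a minimal Sobel gradient-magnitude sketch is given below. Any proposed approach would ultimately be measured not by the edge map itself but by the accuracy of the 3D reconstruction it supports:

```python
def sobel_edges(image):
    """Compute the Sobel gradient magnitude of a 2-D list-of-lists image.

    Convolves the standard 3x3 horizontal and vertical Sobel kernels at
    each interior pixel and returns sqrt(gx^2 + gy^2); border pixels are
    left at zero in this minimal sketch.
    """
    gx_k = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
    gy_k = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]
    h, w = len(image), len(image[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(gx_k[dy + 1][dx + 1] * image[y + dy][x + dx]
                     for dy in (-1, 0, 1) for dx in (-1, 0, 1))
            gy = sum(gy_k[dy + 1][dx + 1] * image[y + dy][x + dx]
                     for dy in (-1, 0, 1) for dx in (-1, 0, 1))
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out
```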