
UNIT VII


INTRODUCTION TO DYNAMIC PROGRAMMING:

In mathematics, computer science, and economics, dynamic programming is a method for solving complex problems by breaking them down into simpler subproblems. It is applicable to problems exhibiting the properties of overlapping subproblems [1] and optimal substructure. When applicable, the method takes far less time than naive methods that don't take advantage of the subproblem overlap (like depth-first search).

The idea behind dynamic programming is quite simple. In general, to solve a given problem, we need to solve different parts of the problem (subproblems), then combine the solutions of the subproblems to reach an overall solution. Often, when using a more naive method, many of the subproblems are generated and solved many times. The dynamic programming approach seeks to solve each subproblem only once, thus reducing the number of computations: once the solution to a given subproblem has been computed, it is stored or "memoized"; the next time the same solution is needed, it is simply looked up. This approach is especially useful when the number of repeating subproblems grows exponentially as a function of the size of the input.
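
To make this concrete, here is a minimal sketch of memoization in Python, using the Fibonacci numbers purely as an illustrative example (it is not part of the text above): the naive recursion solves the same subproblems exponentially many times, while the cached version solves each only once.

    from functools import lru_cache

    @lru_cache(maxsize=None)
    def fib(n):
        # Naive recurrence, but lru_cache stores ("memoizes") each result,
        # so every subproblem fib(k) is computed once and then looked up.
        if n < 2:
            return n
        return fib(n - 1) + fib(n - 2)

    print(fib(40))  # 102334155, in linear rather than exponential time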

Dynamic programming algorithms are used for optimization (for example, finding the shortest path between two points, or the fastest way to multiply many matrices). A dynamic programming algorithm will examine all possible ways to solve the problem and will pick the best solution. Therefore, we can roughly think of dynamic programming as an intelligent, brute-force method that enables us to go through all possible solutions to pick the best one. If the scope of the problem is such that going through all possible solutions is possible and fast enough, dynamic programming guarantees finding the optimal solution. The alternatives are many, such as using a greedy algorithm, which picks the best possible choice "at any possible branch in the road". While a greedy algorithm does not guarantee the optimal solution, it is faster. Fortunately, some greedy algorithms (such as those for minimum spanning trees) are proven to lead to the optimal solution.

For example, let's say that you have to get from point A to point B as fast as possible, in a given city, during rush hour. A dynamic programming algorithm will look into the entire traffic report, examining all possible combinations of roads you might take, and will only then tell you which way is the fastest. Of course, you might have to wait for a while until the algorithm finishes, and only then can you start driving. The path you will take will be the fastest one (assuming that nothing changed in the external environment). On the other hand, a greedy algorithm will start you driving immediately and will pick the road that looks the fastest at every intersection. As you can imagine, this strategy might not lead to the fastest arrival time, since you might take some "easy" streets and then find yourself hopelessly stuck in a traffic jam.

Sometimes, applying memoization to a naive basic recursive solution already results in an optimal dynamic programming solution; however, many problems require more sophisticated dynamic programming algorithms. Some of these may be recursive as well but parametrized differently from the naive solution. Others can be more complicated and cannot be implemented as a recursive function with memoization. Examples of these are the two solutions to the Egg Dropping puzzle below.
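
As a sketch of one such differently organized formulation (again using Fibonacci purely for illustration), the same computation can be done bottom-up, filling a table of subproblem results in order instead of recursing:

    def fib_bottom_up(n):
        # Tabulation: solve the subproblems in increasing order and store
        # them in a table, so no recursion and no cache are needed.
        if n < 2:
            return n
        table = [0] * (n + 1)
        table[1] = 1
        for i in range(2, n + 1):
            table[i] = table[i - 1] + table[i - 2]
        return table[n]

Bottom-up formulations like this avoid deep recursion and often use less memory, which is one reason many dynamic programming algorithms are written this way.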

SHORTEST PATH:

The shortest path problem is the problem of finding a path between two vertices (or nodes) in a graph such that the sum of the weights of its constituent edges is minimized.

This is analogous to the problem of finding the shortest path between two intersections on a road map: the graph's vertices correspond to intersections and the edges correspond to road segments, each weighted by the length of its road segment.

Algorithms:

The most important algorithms for solving this problem are:

- Dijkstra's algorithm solves the single-source shortest path problem (a sketch appears after this list).
- Bellman-Ford algorithm solves the single-source problem if edge weights may be negative.
- A* search algorithm solves the single-pair shortest path problem, using heuristics to try to speed up the search.
- Floyd-Warshall algorithm solves all-pairs shortest paths.
- Johnson's algorithm solves all-pairs shortest paths, and may be faster than Floyd-Warshall on sparse graphs.
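
The following is a sketch of Dijkstra's algorithm, assuming a hypothetical adjacency-list representation (a dict mapping each vertex to a list of (neighbor, weight) pairs) and non-negative edge weights:

    import heapq

    def dijkstra(graph, source):
        # graph: dict mapping each vertex to a list of (neighbor, weight)
        # pairs; weights must be non-negative for Dijkstra to be correct.
        dist = {source: 0}
        heap = [(0, source)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist.get(u, float('inf')):
                continue  # stale queue entry; a shorter path to u was found
            for v, w in graph[u]:
                nd = d + w
                if nd < dist.get(v, float('inf')):
                    dist[v] = nd  # shorter path to v; record it and requeue
                    heapq.heappush(heap, (nd, v))
        return dist  # shortest distance from source to each reachable vertex

    # Toy road network: the distance from 'A' to 'B' is 3 via 'C', not 4 directly.
    roads = {'A': [('B', 4), ('C', 1)], 'C': [('B', 2)], 'B': []}
    print(dijkstra(roads, 'A'))  # {'A': 0, 'B': 3, 'C': 1}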

Applications:

Shortest path algorithms are applied to automatically find directions between physical locations, such as driving directions on web mapping websites like Mapquest or Google Maps. For this application, fast specialized algorithms are available.[2]

If one represents a nondeterministic abstract machine as a graph where vertices describe states and edges describe possible transitions, shortest path algorithms can be used to find an optimal sequence of choices to reach a certain goal state, or to establish lower bounds on the time needed to reach a given state. For example, if vertices represent the states of a puzzle like a Rubik's Cube and each directed edge corresponds to a single move or turn, shortest path algorithms can be used to find a solution that uses the minimum possible number of moves.
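
As a hedged sketch of this idea: when every move costs the same, the shortest-path search reduces to breadth-first search over the state graph. The state encoding and the neighbors callback below are hypothetical placeholders for a concrete puzzle:

    from collections import deque

    def min_moves(start, goal, neighbors):
        # Breadth-first search over a state graph: each edge is one move,
        # so the first time we reach `goal` we have used the fewest moves.
        # `neighbors(state)` is an assumed callback yielding reachable states.
        seen = {start}
        queue = deque([(start, 0)])
        while queue:
            state, moves = queue.popleft()
            if state == goal:
                return moves
            for nxt in neighbors(state):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, moves + 1))
        return None  # goal unreachable from start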

In a networking or telecommunications mindset, this shortest path problem is sometimes called the min-delay path problem and is usually tied with a widest path problem. For example, the algorithm may seek the shortest (min-delay) widest path, or the widest shortest (min-delay) path.

A more lighthearted application is the games of "six degrees of separation" that try to find the shortest path in graphs such as the graph of movie stars who have appeared in the same film.

Other applications, often studied in operations research, include plant and facility layout, robotics, transportation, and VLSI design.

Knapsack problem:

The knapsack problem or rucksack problem is a problem in combinatorial optimization: given a set of items, each with a weight and a value, determine the number of each item to include in a collection so that the total weight is less than or equal to a given limit and the total value is as large as possible. It derives its name from the problem faced by someone who is constrained by a fixed-size knapsack and must fill it with the most valuable items.
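
A minimal sketch of the standard dynamic programming solution for the 0/1 variant (each item is either taken once or left out; the function name and example numbers are ours, not from the text):

    def knapsack(weights, values, capacity):
        # best[w] = highest total value achievable with total weight <= w
        # using the items considered so far.
        best = [0] * (capacity + 1)
        for wt, val in zip(weights, values):
            # iterate capacities downward so each item is used at most once
            for w in range(capacity, wt - 1, -1):
                best[w] = max(best[w], best[w - wt] + val)
        return best[capacity]

    print(knapsack([3, 4, 2], [30, 50, 15], capacity=5))  # 50: take the weight-4 item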

The problem often arises in resource allocation where there are financial constraints and is studied in fields such as combinatorics, computer science, complexity theory, cryptography, and applied mathematics.

The knapsack problem has been studied for more than a century, with early works dating as far back as 1897.[1] It is not known how the name "knapsack problem" originated, though the problem was referred to as such in the early works of mathematician Tobias Dantzig (1884-1956),[2] suggesting that the name could have existed in folklore before a mathematical problem had been fully defined.

Applications:

A 1998 study of the Stony Brook University Algorithm Repository showed that, out of 75 algorithmic problems, the knapsack problem was the 18th most popular and the 4th most needed after kd-trees, suffix trees, and the bin packing problem.[4]

Knapsack problems appear in real-world decision-making processes in a wide variety of fields, such as finding the least wasteful way to cut raw materials,[5] selection of capital investments and financial portfolios,[6] selection of assets for asset-backed securitization,[7] and generating keys for the Merkle-Hellman knapsack cryptosystem.[8]

One early application of knapsack algorithms was in the construction and scoring of tests in which the test-takers have a choice as to which questions they answer. For small examples, it is a fairly simple process to provide the test-takers with such a choice. For example, if an exam contains 12 questions each worth 10 points, the test-taker need only answer 10 questions to achieve a maximum possible score of 100 points. However, on tests with a heterogeneous distribution of point values (i.e., different questions are worth different point values), it is more difficult to provide choices. Feuerman and Weiss proposed a system in which students are given a heterogeneous test with a total of 125 possible points. The students are asked to answer all of the questions to the best of their abilities. Of the possible subsets of problems whose total point values add up to 100, a knapsack algorithm would determine which subset gives each student the highest possible score.
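
A hedged illustration of how that setting maps onto the knapsack formulation (the function name and numbers below are invented for illustration): each question's point value plays the role of the weight, the points the student actually earned on it play the role of the value, and the 100 points to be counted are the capacity, required here to be hit exactly:

    def best_counted_score(point_values, earned, target=100):
        # Of all subsets of questions whose point values sum to exactly
        # `target`, return the highest total of points the student earned.
        NEG = float('-inf')
        best = [NEG] * (target + 1)
        best[0] = 0  # the empty subset sums to 0 points
        for pts, got in zip(point_values, earned):
            for w in range(target, pts - 1, -1):
                if best[w - pts] != NEG:  # only extend reachable sums
                    best[w] = max(best[w], best[w - pts] + got)
        return best[target]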