
Discussion: Pros and Cons of the Approach

The main advantages of teaching duality based on multipliers are in my opinion

• the independence of the problem modeled by the primal model and the introduction of the dual problem, i.e. that no story has to go with the dual problem,

• the possibility to avoid problem transformation and “duality by definition” in the introduction of general duality in linear programming,

• the clarification of the interplay between the sign of variables and the type of the corresponding constraints in the dual pair of problems,

• the early introduction of the idea of getting information about the optimum of an optimization problem through bounding using the solution of an easier problem,

• the possibility of introducing partial dualization by including only some constraint violations in the objective function, and

• the resemblance with duality in non-linear programming, cf. [3].

The only disadvantage in my view is one listed also as an advantage:

• the independence of the introduction of the dual problem and the problem modelled by the primal model

since this may make the initial motivation weaker. I do not advocate that duality should be taught based solely on the multiplier approach, but rather that it is used as a supplement to the traditional presentation (or vice versa). In my experience, it offers a valuable supplement, which can be used to avoid the situation of frustrated students searching for an intuitive interpretation of the dual problem in cases where such an interpretation is not natural. The decision on whether to give the traditional presentation of duality or the multiplier approach first of course depends on the particular audience.

[1] R. E. Bellman and S. E. Dreyfus, Applied Dynamic Programming, Princeton University Press, 1962.

[2] G. B. Dantzig, Linear Programming and Extensions, Princeton University Press, 1963.

[3] A. M. Geoffrion, Duality in Non-Linear Programming: A Simplified Applications-Oriented Approach, SIAM Review 13(1), 1-37, 1971.

[4] A. M. Geoffrion, Lagrangean Relaxation for Integer Programming, Math. Prog. Study 2, 82-114, 1974.

[5] M. X. Goemans, Linear Programming, Course Notes, 1994, available from http://theory.lcs.mit.edu/~goemans.

[6] J. Krarup and C. Vanderhoeft, Belgium’s Best Beer and Other Stories, VUB Press (to appear).

[7] K. G. Murty, Linear and Combinatorial Programming, John Wiley, 1976.

[8] H. P. Williams, Model Solving in Mathematical Programming, John Wiley, 1993.

[9] W. L. Winston, Operations Research - Applications and Algorithms, Int. Thomson Publishing, 1994.

The Minimum Spanning Tree Problem

The minimum spanning tree problem is one of the oldest and most basic graph problems in computer science and Operations Research. Its history dates back to Boruvka’s algorithm, developed in 1926, and it is still an actively researched problem today. Boruvka, a Czech mathematician, was investigating techniques to secure an efficient electrical coverage of Bohemia.

Let an undirected graph G = (V, E) be given. Each edge is assigned a cost (or weight or length), that is, we have a cost function w : E → R. The network will be denoted G = (V, E, w). We will generally assume the costs to be non-negative, but as we will see this is more a matter of convenience.

Let us recall that a tree is a connected graph with no cycles, and that a spanning tree is a subgraph of G that is a tree and includes all vertices of G, i.e. V. The minimum spanning tree (MST) problem calls for finding a spanning tree whose total cost is minimum, where the total cost is measured as the sum of the costs of all the edges in the tree. Figure 3.1 shows an MST in a graph G. Note that a network can have many MSTs.
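These definitions are easy to make operational. The sketch below is our own illustration (the names `is_spanning_tree` and `tree_cost` are hypothetical, not from the text): it checks the defining properties of a spanning tree with a small union-find structure and computes the total cost of an edge set.

```python
# Our own sketch: check whether an edge set forms a spanning tree of the
# vertex set V (connected, acyclic, touching every vertex) and compute
# the total cost of a tree as the sum of its edge costs.
def is_spanning_tree(V, edges):
    # A spanning tree on n vertices has exactly n - 1 edges.
    if len(edges) != len(V) - 1:
        return False
    parent = {v: v for v in V}

    def find(v):
        # Union-find root lookup with path halving.
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru == rv:
            return False   # the edge would close a cycle
        parent[ru] = rv    # merge the two components
    return True  # n - 1 edges and no cycle implies connected and spanning

def tree_cost(tree_edges, w):
    # Total cost: the sum of the costs of all edges in the tree.
    return sum(w[e] for e in tree_edges)
```
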

Figure 3.1: Examples illustrating the concepts of spanning tree and minimum spanning tree: (b) not a spanning tree, as ⟨1,4,6⟩ and ⟨2,3,5⟩ form separate components; (c) not a spanning tree, as the subgraph contains a cycle; (d) a spanning tree with a value of 18; a minimum spanning tree with a value of 12.

We will denote edges in the spanning tree as tree edges and those edges not part of the spanning tree as nontree edges.

The minimum spanning tree problem arises in a wide number of applications, both as a “stand-alone” problem and as a subproblem in more complex problems.

As an example, consider a telecommunication company planning to lay cables to a new neighborhood. In its operations it is constrained to lay cables only along certain connections, represented by the edges. At the same time a network that reaches all houses must be built (i.e. a spanning tree). For each edge, the cost of digging down the cables has been computed. The MST gives us the cheapest solution that connects all houses.

On a few occasions you might meet the maximum spanning tree problem (or, as Wolsey calls it, the “maximum weight problem”), where we wish to maximize the total cost of the constituent edges. It can be solved by multiplying all costs by −1 and then solving the minimum spanning tree problem.
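The reduction can be sketched as follows. The compact Kruskal-style solver is written purely for this illustration (the function names and the tiny example graph are our own assumptions, not the text's):

```python
# Sketch of the reduction described above: a maximum spanning tree of
# (V, E, w) is a minimum spanning tree of (V, E, -w).
def minimum_spanning_tree(V, edges, w):
    parent = {v: v for v in V}

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    tree = []
    # Consider the cheapest edges first; keep an edge only if it joins
    # two different components, i.e. it does not close a cycle.
    for u, v in sorted(edges, key=lambda e: w[e]):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            tree.append((u, v))
    return tree

def maximum_spanning_tree(V, edges, w):
    # Multiply all costs by -1 and solve the minimum problem.
    return minimum_spanning_tree(V, edges, {e: -w[e] for e in edges})
```
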

3.1 Optimality conditions

A central concept in understanding and proving the validity of the algorithms for the MST is the concept of a cut in an undirected graph. A cut C = {X, X̄} is a partition of the set of vertices into two subsets. For X ⊂ V, denote the set of edges crossing the cut δ(X) = {(u, v) ∈ E : u ∈ X, v ∈ V \ X}. The notion of a cut is shown in Figure 3.2. Edges that have one endpoint in X and the other in X̄ are said to cross the cut.

We say that a cut respects a set A of edges if no edge in A crosses the cut.
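As a minimal sketch of these two definitions (the helper names `delta` and `respects` are our own, chosen to match the notation above):

```python
# Minimal helpers mirroring the definitions of a cut-set and of a cut
# respecting an edge set A.
def delta(X, E):
    # Edges with exactly one endpoint in X cross the cut {X, V \ X}.
    return {(u, v) for (u, v) in E if (u in X) != (v in X)}

def respects(A, X, E):
    # The cut respects A if no edge in A crosses it.
    return not (set(A) & delta(X, E))
```
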

Let us use the definition of a cut to establish a mathematical model for the problem.

We define a binary variable x_ij for each edge (i, j):

\[
x_{ij} =
\begin{cases}
1 & \text{if } (i,j) \text{ is in the subgraph} \\
0 & \text{otherwise}
\end{cases}
\]

For a spanning tree of n vertices we need n − 1 edges. Furthermore, for each possible cut C at least one of the edges crossing the cut must be in the solution

Figure 3.2: The cut C defined by the partition X ⊂ V and X̄ = V \ X is shown for a graph G. The edges crossing the cut {X, X̄} are shown in bold.

– otherwise the subgraph defined by the solution is not connected. Therefore we get the following models:

The Cut-set formulation of the MST

\[
\begin{aligned}
\min\ & \sum_{e \in E} w_e x_e \\
\text{s.t.}\ & \sum_{e \in E} x_e = n - 1 \\
& \sum_{e \in \delta(S)} x_e \ge 1, \quad \emptyset \subset S \subset V \\
& x_e \in \{0, 1\}, \quad e \in E
\end{aligned}
\]

The Subtour elimination formulation of the MST

\[
\begin{aligned}
\min\ & \sum_{e \in E} w_e x_e \\
\text{s.t.}\ & \sum_{e \in E} x_e = n - 1 \\
& \sum_{e \in \gamma(S)} x_e \le |S| - 1, \quad \emptyset \subset S \subset V \\
& x_e \in \{0, 1\}, \quad e \in E
\end{aligned}
\]

where γ(S) = {(i, j) ∈ E : i ∈ S, j ∈ S}.
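To make the two formulations concrete, the following sketch brute-force checks both constraint sets for a given edge selection x on a tiny graph. The function names and the approach are our own illustration; since both models have exponentially many constraints, enumerating all subsets is only feasible for very small instances and is in no way a practical solver.

```python
from itertools import combinations

# Our own brute-force check (tiny graphs only) of an edge selection x
# against the cut-set and subtour elimination formulations above.
def proper_subsets(V):
    vs = sorted(V)
    for k in range(1, len(vs)):
        for c in combinations(vs, k):
            yield set(c)

def feasible_cutset(V, E, x):
    if sum(x[e] for e in E) != len(V) - 1:
        return False
    # For every cut delta(S), at least one chosen edge must cross it.
    return all(
        sum(x[(u, v)] for (u, v) in E if (u in S) != (v in S)) >= 1
        for S in proper_subsets(V)
    )

def feasible_subtour(V, E, x):
    if sum(x[e] for e in E) != len(V) - 1:
        return False
    # gamma(S): edges with both endpoints in S; at most |S| - 1 chosen.
    return all(
        sum(x[(u, v)] for (u, v) in E if u in S and v in S) <= len(S) - 1
        for S in proper_subsets(V)
    )
```

On a small graph, a spanning tree satisfies both models, while a selection containing a cycle (and hence missing some vertex) violates both.
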

Note that, by the definition of a cut, if we delete any tree edge (i, j) from a spanning tree it will partition the vertex set V into two subsets. Furthermore, if we insert a nontree edge (i, j) into a spanning tree we will create exactly one cycle.

Finally, for any pair of vertices i and j, the path from i to j in the spanning tree is uniquely defined (see Figure 3.3).

Figure 3.3: Adding the edge (1, 4) to the spanning tree forms a unique cycle ⟨1, 4, 5, 2, 1⟩.

In this chapter we will look at two algorithms for solving the minimum spanning tree problem. Although different, they both use the greedy principle in building an optimal solution. The greedy principle advocates making the choice that is best at the moment; although this in general is not guaranteed to provide optimal solutions, it does in this case. In many textbooks on algorithms you will therefore see the minimum spanning tree problem presented as a classical application of the greedy principle.

The greedy strategy shared by both our algorithms can be described by the following “generic” algorithm, which grows the minimum spanning tree one edge at a time. The algorithm maintains a set A with the following condition:

Prior to each iteration, A is a subset of edges of some minimum spanning tree.

At each iteration we determine an edge (i, j) that can be added to A without breaking the condition stated above, so that if A is a subset of edges of some minimum spanning tree before the iteration, then A ∪ {(i, j)} is a subset of edges of some minimum spanning tree before the next iteration.

An edge with this property will be denoted a safe edge. So we get the following selection rule for a “generic” algorithm.

Algorithm 1: The Selection rule

1. Select a cut in G that does not contain any selected edges.
2. Determine a safe edge of the cut-set and select it.
3. If there is more than one safe edge, select a random one of these.

So how do we use the selection rule?

• The selection rule is used n − 1 times, as there are n − 1 edges in a tree with n vertices. Each iteration will select exactly one edge.

• Initially none of the edges in the graph are selected. During the process one edge at a time will be selected, and when the method stops the selected edges form an MST for the given graph G.

Now the interesting question is, of course, how do we find these safe edges? First, a safe edge must exist. The condition states that there exists a minimum spanning tree T such that A ⊆ T. As A must be a proper subset of T, there must be an edge (i, j) ∈ T such that (i, j) ∉ A. The edge (i, j) is therefore a safe edge.

In order to come up with a rule for recognizing safe edges we need to define a light edge. A light edge crossing a cut is an edge with the minimum weight of any edge crossing the cut. Note that there can be more than one light edge crossing a cut. Now we can state our rule for recognizing safe edges.
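The definition of a light edge is straightforward to compute; the sketch below (the helper name `light_edges` is our own) returns all light edges crossing a given cut, since there may be several.

```python
# Our own illustrative helper: the light edges crossing the cut
# {X, V \ X} are the crossing edges of minimum weight.
def light_edges(X, E, w):
    crossing = [(u, v) for (u, v) in E if (u in X) != (v in X)]
    if not crossing:
        return []
    wmin = min(w[e] for e in crossing)
    return [e for e in crossing if w[e] == wmin]
```
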

Theorem 3.1 Let G = (V, E, w) be a connected, undirected graph with a weight function w : E → R. Let A be a subset of E that is part of some minimum spanning tree for G, let {S, S̄} be any cut of G that respects A, and let (i, j) be a light edge crossing the cut {S, S̄}. Then edge (i, j) is safe for A.

Proof: Let T be a minimum spanning tree that includes A. If T contains the light edge (i, j) we are done, so let us assume that this is not the case.

Now the general idea of the proof is to construct another minimum spanning tree T′ that includes A ∪ {(i, j)} by using a cut-and-paste technique. This will then prove that (i, j) is a safe edge.

If we look at the graph T ∪ {(i, j)}, it contains a cycle (see Figure 3.4). Let us call this cycle p. Since i and j are on opposite sides of the cut {S, S̄}, there is at least one edge in T on the path p that also crosses the cut. Let (k, l) be such

Figure 3.4: The edges of the minimum spanning tree T are shown, but not the edges of G. The edges already in A are drawn extra fat, and (i, j) is a light edge crossing the cut {S, S̄}. The edge (k, l) is an edge on the unique path p from i to j in T. A minimum spanning tree T′ that contains (i, j) is formed by removing the edge (k, l) from T and adding (i, j).

an edge. As the cut respects A, (k, l) cannot be in A. Furthermore, as (k, l) is on the unique path in T from i to j, T would decompose into two components if we removed (k, l).

Adding (i, j) reconnects the two components, thus forming a new spanning tree T′ = (T \ {(k, l)}) ∪ {(i, j)}. The question is whether T′ is also a minimum spanning tree.

Since (i, j) is a light edge crossing the cut {S, S̄} (that was one of the initial reasons for picking it) and (k, l) also crosses the cut, we have w_ij ≤ w_kl, and therefore

\[
w_{T'} = w_T - w_{kl} + w_{ij} \le w_T.
\]

As T is a minimum spanning tree we also have w_T ≤ w_{T′}, so T′ must be a minimum spanning tree as well.

It only remains to prove that (i, j) is actually a safe edge for A. As A ⊆ T and (k, l) ∉ A, we have A ⊆ T′, so A ∪ {(i, j)} ⊆ T′. Consequently, since T′ is a minimum spanning tree, (i, j) is safe for A. △

Now we can look at what an execution of the generic algorithm would look like.

The selection rule tells us to pick a cut respecting A and then choose an edge of minimum weight crossing this cut.

Figure 3.5 illustrates the execution of the generic algorithm. Initially no edges are selected. We can consider any cut we would like. Let us use the cut X = {6} and X̄ = V \ X (a). The cheapest edge crossing the cut is (4, 6); it is added to the solution. We can now pick any cut not containing the edge (4, 6). We consider the cut X = {1, 4, 6} (b). The cheapest edge crossing the cut is (4, 5), which is added to the solution. Next we consider the cut X = {2} (c). Here the cheapest edge crossing the cut is (2, 3), which is added to the solution. Now let us consider the cut X = {1, 2, 3} (d). The edge (3, 5) is now identified as the cheapest and is added to the solution. The next cut we consider is X = {1} (e). This will add (1, 4) to the solution, which has now become a spanning tree. We cannot find any cut that does not have a tree edge crossing it (f).

More conceptually, A defines a graph G_A = (V, A). G_A is a forest, and each of the connected components of G_A is a tree. When the algorithm begins, A is empty and the forest contains n trees, one for each vertex. Moreover, any safe edge (i, j) for A connects distinct components of G_A, since A ∪ {(i, j)} must be acyclic.

The two algorithms in Sections 3.2 and 3.3 use the following corollary to our theorem.

Corollary 3.2 Let G = (V, E, w) be a connected, undirected graph with weight function w : E → R. Let A be a subset of E that is included in some minimum spanning tree for G, and let C = (V_C, E_C) be a component in the forest G_A = (V, A). If (i, j) is a light edge connecting C to some other component in G_A, then (i, j) is safe for A.

Proof: The cut {V_C, V̄_C} respects A, and (i, j) is a light edge for this cut. Therefore (i, j) is safe for A. △
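One way the corollary can be turned into a concrete algorithm is the following Prim-style sketch: grow a single component C from a start vertex and at each step add a light edge connecting C to some other component. The code and names are our own illustration under these assumptions, not an implementation taken from the text.

```python
import heapq

# A Prim-style instantiation of the generic algorithm, justified by
# Corollary 3.2: grow one component C and repeatedly add the cheapest
# edge leaving C, which is a light edge crossing the cut {C, V \ C}.
def prim_mst(V, E, w):
    adj = {v: [] for v in V}
    for (u, v) in E:
        adj[u].append((w[(u, v)], u, v))
        adj[v].append((w[(u, v)], v, u))
    start = next(iter(V))
    in_tree = {start}
    heap = list(adj[start])          # candidate edges leaving the component
    heapq.heapify(heap)
    tree = []
    while heap and len(tree) < len(V) - 1:
        cost, u, v = heapq.heappop(heap)
        if v in in_tree:
            continue                 # both endpoints inside C: not crossing
        tree.append((u, v))          # light edge crossing the cut: safe
        in_tree.add(v)
        for e in adj[v]:
            heapq.heappush(heap, e)
    return tree
```

The heap always holds the edges incident to the current component, so the cheapest crossing edge is found in logarithmic time per operation.
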

Figure 3.5: The execution of the generic algorithm on our sample graph.