User:Zelhar/Algorithms1 maman12



P1
Let T(t) be a spanning tree. Then we have: $$ w(T(t)) = \sum_{e \in T} f_{e}(t) = \sum_{e \in T} (a_{e}t + b_{e}) = \left(\sum_{e \in T}a_{e}\right)t + \sum_{e \in T} b_{e} $$ So w(T(t)) is linear in t, and therefore attains its minimum at t=0 or t=1. Since this holds for every spanning tree, the minimal MST weight over all 0<=t<=1 is also attained at one of the endpoints, so it suffices to find an MST for t=0 and for t=1 and take the smaller of the two results. The algorithm is therefore:

MST-KRUSKAL-t(G,w)
»Find an MST of G(0) using MST-KRUSKAL(G,w(0)) and set x = that MST's weight
»Find an MST of G(1) using MST-KRUSKAL(G,w(1)) and set y = that MST's weight
»return min(x,y)
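This can be sketched in Python (my own illustration, not part of the assignment: the graph is given as edge tuples (u, v, a, b) with weight a*t + b at time t, and Kruskal's algorithm uses a simple union-find):

```python
def mst_weight(n, edges, t):
    """Weight of an MST of a graph on vertices 0..n-1 at time t, by
    Kruskal's algorithm; each edge is (u, v, a, b) with weight a*t + b."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    total = 0.0
    for u, v, a, b in sorted(edges, key=lambda e: e[2] * t + e[3]):
        ru, rv = find(u), find(v)
        if ru != rv:           # edge joins two different trees: take it
            parent[ru] = rv
            total += a * t + b
    return total


def min_mst_over_t(n, edges):
    # w(T(t)) is linear in t for every tree T, so the minimum over
    # 0 <= t <= 1 is attained at an endpoint: check t = 0 and t = 1.
    return min(mst_weight(n, edges, 0.0), mst_weight(n, edges, 1.0))
```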

P2
The claim is true: Arrange E(G) in nondecreasing order of weight, such that for two edges e, g of equal weight, if e &isin; T and g &notin; T, then e precedes g. Enumerate E(T) according to this order: E(T) = {e1,...,en-1}.

Claim: if we run MST-Kruskal(G,w) with the edges sorted according to the order defined above, then it returns T.

Proof (by induction): The first edge picked by the algorithm is e1, since it is the smallest by the order defined above, and Find-Set(u1) != Find-Set(v1) because at the first step the forest consists entirely of singletons. Assume that the first k edges picked by the algorithm were e1,...,ek; we show that the next edge to be picked is ek+1 (writing ei = (ui,vi) for all i). Let e = (u,v) be an edge with w(ek) <= w(e) < w(ek+1), so e is examined by the algorithm after ek but before ek+1. If Find-Set(u) != Find-Set(v), then T is not an MST: we could join e = (u,v) to T, and the cycle that e creates must contain a T-edge other than e1,...,ek (otherwise u and v would already lie in the same component); that edge is some ej with j >= k+1, so w(ej) >= w(ek+1) > w(e), and removing it yields a spanning tree of smaller weight. Therefore, since T is an MST, it must be that Find-Set(u) = Find-Set(v), so the algorithm rejects e = (u,v); hence the algorithm rejects every edge examined between ek and ek+1.

Now ek+1 is the next edge to be examined, and Find-Set(uk+1) != Find-Set(vk+1): removing ek+1 from T leaves two disjoint connected components R, Q with uk+1 &isin; R and vk+1 &isin; Q; the edges e1,...,ek picked so far all belong to T \ {ek+1}, so each lies entirely within R or within Q, and uk+1, vk+1 remain in different sets. Therefore ek+1 is picked by Kruskal's algorithm, which completes the induction. It follows that the algorithm picks exactly the edges e1,...,en-1 (afterwards it can pick no more edges, since T is a spanning tree), and so the claim is proven.
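The tie-breaking rule in the proof can be illustrated with a short Python sketch (my own example, not part of the assignment): sorting by the pair (weight, edge not in T) makes Kruskal's algorithm reproduce a chosen MST T.

```python
def kruskal(n, edges, key):
    """Kruskal's algorithm with a caller-supplied sort key; edges are
    (u, v, w) tuples on vertices 0..n-1; returns the chosen edge set."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            x = parent[x]
        return x

    chosen = set()
    for u, v, w in sorted(edges, key=key):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            chosen.add((u, v, w))
    return chosen


# A square with equal-weight sides: several distinct MSTs exist.
edges = [(0, 1, 1), (1, 2, 1), (2, 3, 1), (3, 0, 1)]
T = {(0, 1, 1), (1, 2, 1), (2, 3, 1)}
# Ties broken in favour of T's edges: Kruskal returns exactly T.
assert kruskal(4, edges, key=lambda e: (e[2], e not in T)) == T
```

Choosing a different MST as the tie-break target makes the same run return that tree instead, which is the content of the claim.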

P3
Claim 3.1: If T is a spanning tree that is not an MST, then T can be improved by replacing one edge, so that the resulting tree T' is a spanning tree of strictly smaller weight.

Proof: Sort the edges of T by weight: e1,...,en-1. For an edge ei = (ui,vi) &isin; T, let Ui, Vi be the connected components of ui and vi, respectively, in the forest {e1,...,ei-1} (the graph on V with those edges). Let ek be the first edge in T (by this order) such that w(ek) is not minimal among all w(e) for edges e of G that join two separate components of the forest {e1,...,ek-1}, and let e be such a minimal edge. Such an ek has to exist: otherwise each edge of T would be a minimal edge connecting different trees of the forest, and so would be picked by Kruskal's algorithm (when edges of equal weight are sorted so that those in T precede those not in T; see P2), and T would be an MST, which we are assured it is not. Now add e to T. In the cycle that e closes there is an edge of weight greater than w(e): e does not close a cycle with e1,...,ek-1 alone (it joins two separate components of that forest), so the cycle contains some edge ej with j >= k, and w(ej) >= w(ek) > w(e). Removing such an edge gives an improved spanning tree T' of strictly smaller weight.

Conclusion 3.2: If T is a spanning tree, then T can be successively improved a finite number of times until the resulting tree T' is an MST. Proof: Each improvement of a spanning tree that is not an MST strictly decreases the weight, the weight is bounded below by that of an MST, and there are only finitely many spanning trees; so after a finite number of successive improvements we obtain an MST.

Conclusion 3.3: If T is the unique MST of G, then a second-best spanning tree is a tree that is one improvement apart from T and has the minimal weight among all such trees. Proof: By conclusion 3.2, T can be reached by successive improvements from any starting tree. Let T' be a tree that is just one improvement away from T, with the improvement minimal (in terms of the weight difference between T' and T). Any other non-MST tree T'' reaches T through a chain of one or more improvements, whose last tree is one improvement away from T; hence w(T'') is at least the weight of that last tree, which is at least w(T') by the minimality of T'. T' and T differ by one edge, and so T' is the second-best spanning tree.

Claim 3.4: If there are at least two distinct MSTs, then a second-best spanning tree is also an MST. Proof: Let T, H be two distinct MSTs, and let t &isin; T, h &isin; H be the first pair of edges on which T and H differ (so that they agree on all edges of smaller weight than w(h) and w(t)). By P2, for a suitable ordering that preserves weights, H is the tree returned by Kruskal's algorithm (and the same is true for T). It follows that both h and t are of minimal weight among the edges that connect different trees in the forest defined by the preceding edges (on which H and T agree), so we have w(h) = w(t). If we add h to T it closes a cycle, but h does not close a cycle with the edges picked before h (and t), so by the minimality of h the cycle contains an edge g with w(g) >= w(h); in fact w(g) = w(h), because T is an MST and cannot be improved. Replacing g with h in T gives a tree T' that is also an MST and differs from T by one edge. Hence T', being minimal among all trees that differ from T by one edge, is a 'second best' by definition.
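Conclusions 3.3 and 3.4 suggest a direct way to compute the weight of a second-best spanning tree: take an MST T and, for every non-tree edge, swap it with the heaviest tree edge on the cycle it closes, keeping the cheapest such swap. A Python sketch under my own conventions (edge tuples (u, v, w), vertices 0..n-1):

```python
def mst_edges(n, edges):
    """Kruskal's algorithm; edges are (u, v, w) tuples; returns MST edges."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            x = parent[x]
        return x

    chosen = []
    for u, v, w in sorted(edges, key=lambda e: e[2]):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            chosen.append((u, v, w))
    return chosen


def second_best_weight(n, edges):
    """Weight of a second-best spanning tree: improve the MST by a single
    edge swap, taking the cheapest swap (conclusions 3.3 and 3.4)."""
    T = mst_edges(n, edges)
    base = sum(w for _, _, w in T)
    adj = {i: [] for i in range(n)}
    for u, v, w in T:
        adj[u].append((v, w))
        adj[v].append((u, w))

    def max_on_path(s, t):
        # DFS on the tree, tracking the heaviest edge weight on the path.
        stack, heaviest = [(s, 0)], {s: 0}
        while stack:
            x, m = stack.pop()
            for y, w in adj[x]:
                if y not in heaviest:
                    heaviest[y] = max(m, w)
                    stack.append((y, heaviest[y]))
        return heaviest[t]

    # each non-tree edge closes one cycle; swap out its heaviest tree edge
    return min(base + w - max_on_path(u, v)
               for u, v, w in edges if (u, v, w) not in T)
```

When several MSTs exist the cheapest swap has zero cost, so the value returned equals the MST weight, matching claim 3.4.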

P4
Assume that the di's are sorted in non-increasing order. Since n >= 3, d1 >= 2, or else &sum;di <= n < 2n - 2, which contradicts the premise. Moreover, dn <= 1, or else &sum;di >= 2n > 2n - 2. In fact di >= 1 for all i, since we need to build a tree: if di = 0 then we have an isolated vertex, and the graph is not a tree. Hence dn = 1.

A proof by induction on n that such a tree exists: For n = 3 the claim is easily verified: the only possible tree is a chain, and the only d's that match the case are 2+1+1 = 4 = 2*3-2. For n and d1,...,dn: by the inductive assumption there is a tree on n-1 vertices with degrees d1-1, d2,...,dn-1, since (d1-1) + d2 + ... + dn-1 = &sum;di - 1 - dn = 2n-2 - 2 = 2(n-1) - 2 (recall dn = 1, as explained above, and d1 - 1 >= 1 since d1 >= 2). Now if we join a new vertex to v1 by a new edge, we get a tree on n vertices with degrees d1,...,dn, which completes the proof.
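The inductive step also yields a recursive construction. A Python sketch under my own representation assumptions (an edge list over vertices 0..n-1; a vertex of degree 1 and a vertex of degree at least 2 play the roles of vn and v1 in the proof):

```python
def tree_from_degrees(d):
    """Build a tree realizing degree sequence d (d[i] >= 1 for all i,
    sum(d) == 2*len(d) - 2); returns an edge list on vertices 0..n-1."""
    n = len(d)
    assert all(x >= 1 for x in d) and sum(d) == 2 * n - 2
    if n == 2:
        return [(0, 1)]                            # base: one edge, degrees 1,1
    leaf = max(i for i in range(n) if d[i] == 1)   # the proof's vn
    hub = next(i for i in range(n) if d[i] >= 2)   # the proof's v1
    # inductive step: drop the leaf and lower the hub's degree by one
    d2 = d[:]
    d2[hub] -= 1
    rest = [i for i in range(n) if i != leaf]
    sub = tree_from_degrees([d2[i] for i in rest])
    edges = [(rest[u], rest[v]) for u, v in sub]   # relabel back to 0..n-1
    edges.append((hub, leaf))                      # re-attach the leaf to v1
    return edges
```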

An algorithm that builds such a tree:

Build-Tree(n,d) //d is an n-array of ints that satisfies the premise
»set a new tree T with a root r and d[1] leaves //T is a 'star' shaped tree
»i = 2
»while (d[i] > 1):
»»r = left-most child of r
»»add d[i] - 1 new leaves to node r
»»i = i + 1
»return T
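A Python sketch of Build-Tree, assuming the tree is represented as a parent array (my own representation choice; vertex 0 is the root, and indexing into d is 0-based):

```python
def build_tree(d):
    """Build a tree realizing the degree sequence d (sorted in
    non-increasing order, sum(d) == 2n - 2); returns a parent array
    where parent[0] == -1 marks the root."""
    n = len(d)
    assert sum(d) == 2 * n - 2 and d == sorted(d, reverse=True)
    parent = [-1] + [0] * d[0]   # a 'star': root 0 with d[0] leaves
    r, leftmost = 0, 1           # current node r and its left-most child
    i = 1                        # 0-based counterpart of the pseudocode's i = 2
    while i < n and d[i] > 1:
        r = leftmost             # descend to the left-most child of r
        leftmost = len(parent)   # its first new leaf will be its left-most child
        parent.extend([r] * (d[i] - 1))
        i += 1
    return parent
```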

The algorithm is correct: Let T be a tree that satisfies the property of the d's. We may consider T as rooted at v1. Then v1 has d1 edges, from each of which a subtree is rooted. Every vertex vi other than the root v1 has a predecessor, so only di - 1 subtrees spawn from it, and it contributes di - 1 new vertices to the tree (in the case di = 1 the vertex adds no new, unaccounted-for nodes). We can therefore count the vertices of T in the following way: the root v1 contributes d1 vertices plus itself; every other vertex vi contributes di - 1 new vertices (excluding itself, which is accounted for as its predecessor's child). It follows that: $$ n = 1 + d_1 + \sum_{i=2}^{n}(d_i - 1) = 2 + \sum_{i=1}^{n}d_i - n = 2 + (2n-2) - n = n $$ so the count is consistent precisely because &sum;di = 2n-2. The tree built by the algorithm above satisfies exactly the same equations, for it has exactly the same structure: the root contributes d[1] nodes plus one for itself, and each other node contributes d[i]-1 nodes accounting for its children. Since &sum;d[i] = 2n-2 is given, the built tree has exactly n vertices with degrees d[1],...,d[n], so the algorithm is correct.

Analysis of the running time: Initializing the star takes &theta;(d[1]) time. Adding d[i]-1 new nodes requires &theta;(d[i]) time, and since &sum;d[i] = 2n-2, the total running time is &theta;(n).

P5
I assume that the tasks ti are sorted in increasing order of fi. I will show that there is an optimal solution that uses the greedy choice, that is: at each step it assigns the first unassigned task to the first available processor, so a new processor is called for only when a task is incompatible with the set of tasks in every previously used processor.

Proof: Assume that there is no optimal solution using the greedy choice. Then for any optimal solution F there is an index j such that tasks t1,...,tj-1 were assigned according to the greedy principle and tj is the first to deviate from the greedy choice, that is: tj was assigned to a new processor even though it is compatible with the set of earlier-finishing tasks assigned to one of the existing processors. Let F = {A1,...,Ak} be an optimal solution (each Ai is a set of compatible tasks) that has the longest possible sequence of initial greedy choices among all optimal solutions; that is, if j is the first index such that task tj is assigned to a new processor in F even though it is compatible with the tasks of smaller index (those that finish earlier) in one of the previously used processors, then j is maximal over all optimal solutions. By assumption there is i <= k such that tj is compatible with the tasks in Ai that were assigned prior to tj; these are tasks that finish before tj starts, because the greedy process always assigns the first remaining task to the first compatible set, opening a new set only when no compatible set exists. Let Al be the new set to which tj is assigned in F.
Now assign tj to Ai, move all the tasks that follow tj in Al to Ai as well, and move to Al all tasks from Ai that were assigned after tj (those with index greater than j). We get a new optimal solution F': it has the same number of sets (processors), and the new Ai and Al are compatible. The original Al is compatible and its tasks are moved to Ai; since its first task tj is compatible with the earlier tasks of Ai, the new Ai is compatible (appending a compatible set, all of whose tasks start after the last task of another compatible set ends, preserves compatibility). The new Al is a subset of the original compatible Ai, and so it too is compatible. Thus F' is an optimal solution with a longer sequence of greedy choices, in contradiction to the maximality of F; therefore the assumption is false, and there is an optimal solution based on the greedy choice.

The algorithm is therefore:

Greedy-Allocator(s,f)
»n = length[s]
»k = 1
»push(A[1],1) //A[j] is a stack, initialized as empty
»//if not empty, then top[A[j]] is the last task assigned to A[j]
»for (i = 2 to n):
»»j = 1
»»while ( j <= k AND NOT (s[i] >= f[top[A[j]]]) ):
»»»j = j + 1 //this will find the first compatible set,
»»»//or a new set if there are none
»»push(A[j],i)
»»if (j > k):
»»»k = j

The correctness of the algorithm follows from the proof above: the algorithm goes through the tasks in order of their finish times f[i], which are assumed to be sorted (if not, the array f should be sorted first), and assigns task i to the first set A[j] compatible with it (by pushing i onto the stack A[j]).
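Under the same assumptions (tasks pre-sorted by finish time), the allocator can be sketched in Python, with each processor represented as a list whose last element is the last task assigned to it (my own representation, standing in for the pseudocode's stacks):

```python
def greedy_allocator(s, f):
    """Partition tasks (start times s, finish times f, sorted by f) among
    processors greedily; returns a list of task-index lists, one per
    processor."""
    assert f == sorted(f)
    processors = [[0]]                 # task 0 goes to the first processor
    for i in range(1, len(s)):
        for proc in processors:
            if s[i] >= f[proc[-1]]:    # compatible with this processor's last task
                proc.append(i)
                break
        else:                          # no compatible processor: open a new one
            processors.append([i])
    return processors
```

Usage: `greedy_allocator([0, 1, 3], [2, 4, 5])` assigns tasks 0 and 2 to the first processor and task 1 to a second one.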

Analysis of the complexity: There will never be more than n processors in use, so the nested while loop runs in O(n) time per iteration of the for loop, and we get a total running time of O(n^2). In the worst case the while loop requires i iterations for each i: when a new set must be opened for every task, the loop checks all the previous i-1 sets and finds each of them incompatible, so the worst-case running time is &theta;(1+2+...+n) = &theta;(n^2). This is the case when, for example, all tasks share the same start and finish times, so every pair of tasks is incompatible and requires a different processor. Sorting the array f can be done in O(n lg n) time, so it would not change the asymptotic running time.

