User:Zelhar/Algorithms0 maman11

=Algorithms, Maman 11, Yiftah Kolb=

Exercise 1
 // INSERTION-SORT(A), original form
 for j=2 to length[A] do
     key=A[j]
     // Insert A[j] into the sorted sequence A[1..j-1].
     i=j-1
     while i>0 and A[i]>key do
         A[i+1]=A[i]
         i=i-1
     A[i+1]=key
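The pseudocode above can be sketched in Python (a minimal illustrative translation; the list is 0-indexed here, unlike the 1-indexed pseudocode, and the name `insertion_sort` is mine, not from the exercise):

```python
def insertion_sort(a):
    """In-place insertion sort, mirroring the pseudocode (0-indexed)."""
    for j in range(1, len(a)):          # pseudocode: j = 2 .. length[A]
        key = a[j]
        i = j - 1
        # insert a[j] into the sorted prefix a[0..j-1]
        while i >= 0 and a[i] > key:
            a[i + 1] = a[i]
            i -= 1
        a[i + 1] = key
    return a
```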

1.1
 // RTL: start from the right and move left
 for j=length(A)-1 downto 1 do
     key=A[j]
     i=j+1
     // place A[j] in the sorted (ascending) sub-array A[j+1..n]
     while i < length(A)+1 and A[i] < key do
         A[i-1]=A[i]
         i=i+1
     A[i-1]=key
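A Python sketch of the right-to-left variant (0-indexed; same assumptions as above — the sorted region is the suffix and elements smaller than the key are shifted left):

```python
def insertion_sort_rtl(a):
    """Right-to-left insertion sort: grow the sorted suffix a[j+1..n-1]."""
    n = len(a)
    for j in range(n - 2, -1, -1):      # pseudocode: j = length(A)-1 downto 1
        key = a[j]
        i = j + 1
        # shift elements smaller than key one slot to the left
        while i < n and a[i] < key:
            a[i - 1] = a[i]
            i += 1
        a[i - 1] = key
    return a
```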

1.4
The count is the same for all the versions: the range of the 'for' loop has the same length in all of them, as does the maximum number of iterations of the 'while' loop, and they all perform the same number of copy actions and (in)equality checks.

 // INSERTION-SORT(A), original form, with counts (n=len(A))
 for j=2 to n do                  // 1 copy per cycle, totaling n-1 copies
     key=A[j]                     // ditto
     // Insert A[j] into the sorted sequence A[1..j-1].
     i=j-1                        // ditto
     // t(j) := 1 + number of iterations of the 'while' for a given input and j
     while i>0 and A[i]>key do    // 2t(j) checks, j=2..n
         // exception: if i=0, A[i] is not checked, and we then have 2t(j)-1
         A[i+1]=A[i]              // t(j)-1 copies, summed over j=2..n
         i=i-1                    // t(j)-1 copies, summed over j
     A[i+1]=key                   // n-1 copies

We have seen in the textbook that in the worst case t(j)=j and in the best case t(j)=1 for all j.
 * The total number of (in)equality checks, E: in the best case i never reaches 0, so E = Σ[2t(j) : j=2..n] = 2n-2; in the worst case i does reach 0 for every j, so E = Σ[2t(j)-1 : j=2..n] = n(n+1)-2-(n-1) = n^2 - 1. Hence:
 * 2n - 2 <= E <= n^2 - 1
 * The total number of copy actions, C:
 * 4(n-1) <= C = 4(n-1) + Σ[2t(j)-2 : j=2..n] <= 4(n-1) + n(n-1)

1.5
 // with A[0] = -\infty as a sentinel
 A[0]=-\infty
 for j=2 to length[A] do
     key=A[j]
     // Insert A[j] into the sorted sequence A[1..j-1].
     i=j-1
     while A[i]>key do
         A[i+1]=A[i]
         i=i-1
     A[i+1]=key

In this case C is the same as in the original version, but the new E = Σ[t(j) : j=2..n], and so: n-1 <= E <= n(n+1)/2 - 1
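The sentinel version can be sketched in Python as follows (a hypothetical rendering: the sentinel is prepended to a copy of the list so that index 0 plays the role of A[0] = -∞, which removes the i > 0 bounds check from the inner loop):

```python
def insertion_sort_sentinel(a):
    """Insertion sort with a -infinity sentinel at index 0.

    The sentinel guarantees the inner while always stops, so only one
    comparison (b[i] > key) is needed per step instead of two.
    """
    b = [float('-inf')] + list(a)       # b[0] plays the role of A[0] = -inf
    for j in range(2, len(b)):
        key = b[j]
        i = j - 1
        while b[i] > key:               # no i > 0 test needed
            b[i + 1] = b[i]
            i -= 1
        b[i + 1] = key
    return b[1:]                        # drop the sentinel
```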

2.1
 function E(A,p,r):    // assumes 1 <= p < r <= length[A]
     if A[r]-A[p] > r-p and 1 <= p < r <= length[A]:
         return TRUE
     else:
         return FALSE

2.2
 function find(A):
     i=1
     j=len(A)
     if E(A,i,j):
         // find k
         k=(i+j)/2    // assuming a/b means floor(a/b) for integers
         // E(A,i,j) implies E(A,i,k) or E(A,k,j), for i<k<j
         while i<k:
             if E(A,i,k): j=k
             else: i=k
             k=(i+j)/2
         // at the end of the loop we have i=k, j=i+1 and E(A,i,j), so
         // A[i] < A[i]+1 < A[j], i.e. A[i]+1 is not an element of A
         return A[i]+1

3.1
 // step 1: scan a single row right-to-left
 define function split_row(M,i,j,key):
     while M[i,j] > key and j > 0:
         j=j-1
     // if j==0 then the whole row is strictly greater than key
     return j
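A Python sketch of E and find together (my own 0-indexed rendering, assuming A is a sorted array of distinct integers; E(a,p,r) tests whether some integer is missing between positions p and r, i.e. a[r]-a[p] > r-p, and find halves the range, always keeping a half where E holds):

```python
def E(a, p, r):
    """TRUE iff an integer strictly between a[p] and a[r] is missing,
    i.e. a[r] - a[p] > r - p (a: sorted distinct integers, 0-indexed)."""
    return 0 <= p < r < len(a) and a[r] - a[p] > r - p

def find(a):
    """Binary search for a missing integer: since E(a,i,j) implies
    E(a,i,k) or E(a,k,j) for the midpoint k, we can keep a half where
    E holds.  Returns a[i] + 1, a value absent from a, or None."""
    i, j = 0, len(a) - 1
    if not E(a, i, j):
        return None                     # no gap anywhere in a
    k = (i + j) // 2
    while i < k:
        if E(a, i, k):
            j = k
        else:
            i = k
        k = (i + j) // 2
    # now j == i + 1 and E(a, i, j), so a[i] < a[i] + 1 < a[j]
    return a[i] + 1
```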

// step 2: scan for key row by row, in RTL in-row order; when the scan continues
// in the next row (say i+1), it starts from the column j=split_row(M,i,j,key),
// because M[i+1,j+1] >= M[i,j+1] > key

 define function scan(M,key):
     i=1
     j=split_row(M,i,m,key)
     // M[i,1] is the minimum of the sub-matrix composed of the last n-i+1 rows,
     // so if j=0 then M[i,1] and all the elements of rows i..n
     // are strictly greater than key and so we need not look for key there
     while j>0 and n>i and key > M[i,j]:
         i=i+1    // search the next row, starting from column j
         j=split_row(M,i,j,key)
         // j can only stay the same or get smaller in every iteration
     if j>0 and n>=i and key==M[i,j]:
         return (i,j)
     else:
         return NULL
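The two steps above can be sketched in Python (0-indexed, so the pseudocode's j==0 sentinel becomes -1; M is assumed sorted ascending along every row and every column, with n rows and m columns):

```python
def split_row(M, i, j, key):
    """Scan row i right-to-left from column j; return the largest column
    j' <= j with M[i][j'] <= key, or -1 if all scanned entries are > key."""
    while j >= 0 and M[i][j] > key:
        j -= 1
    return j

def scan(M, key):
    """Staircase search in a row- and column-sorted matrix.
    Returns (i, j) with M[i][j] == key, or None.  Time O(n + m)."""
    n, m = len(M), len(M[0])
    i = 0
    j = split_row(M, i, m - 1, key)
    while j >= 0 and i < n - 1 and key > M[i][j]:
        i += 1                          # next row, resume from column j
        j = split_row(M, i, j, key)     # j never increases
    if j >= 0 and M[i][j] == key:
        return (i, j)
    return None
```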

Calculating the order of the computation time:

split_row(M,i,j,key) is $$\Theta \left ( j \right )$$:
 * Since the 'while' starts at the input value of j, which is decreased by 1 in every iteration, and since j is not allowed to descend below 0, we get a worst-case upper bound of $$\mathrm{O} \left (j \right )$$.
 * If M[i,k]>key for all k=1..j, the loop iterates j times and terminates when j reaches 0; hence we get a worst-case lower bound of $$\Omega \left (j \right )$$.

For a given input, the time cost of split_row is c*(1+number of iterations performed), where c is a constant that does not depend on the input.

As for the complete algorithm (the 'scan' function): the call to split_row at the beginning contributes Θ(m) to the time cost. As for the while loop with its nested split_row (in effect another while loop): for every i in [1..n], let k(i) be the number of iterations performed by the call j=split_row(M,i,j,key), which requires time c*(1+k(i)) to complete. Let j(0):=m, and for i>=0 let j(i) be the value of j at the beginning of iteration i; j(i) is a non-increasing function of i. Since every iteration of split_row decreases j by 1, we have j(i+1)=j(i)-k(i).

The time cost of every iteration of the main loop (in 'scan') is c*(1+k(i)), determined by the cost of the nested loop, split_row. Summing over all iterations we get:
 * $$\sum_{i=1}^n[1+k(i)]\ = \sum_{i=1}^n[1+j(i)-j(i+1)]\ \le n+ m$$

So in the worst case the main loop repeats n times at a total cost of $$\le$$ c*(n+m). This bound is tight: if all the values in the input matrix are strictly smaller than key, each call to split_row completes 0 iterations and the main loop repeats n times, giving a running time of Ω(n); if all the values are strictly greater than key, the first call to split_row performs m iterations before returning 0, giving Ω(m). Since the worst case dominates both examples, it is Ω(n+m). Therefore the algorithm has a worst-case running time of $$\Theta(n+m)$$.

3.2
For convenience, assume: $$\forall 0 \le j \le m\ M[n+1,j] = \infty \land\ \forall 1 \le i \le n\ M[i,0]= -\infty$$ and use the same notation j(i) as above: j(0):=m, j(i+1):=split_row(M,i,j(i),key)=max{j : j(i)>=j and key>=M[i,j]}.

Lemma 3.1: j(i+1)=max{j : m>=j and key>=M[i,j]}. Proof:
 * Assume it is not, and let i be the smallest line for which the claim fails. Since j(0):=m the claim is true for
 * j(1), so i>0. So we may assume there is x>j(i+1) such that key>=M[i,x]. If j(i)>=x then j(i+1)=split_row(M,i,j(i),key)>=x,
 * because the algorithm would check cell [i,x] before [i,j(i+1)]; so it must be that x>j(i)>=j(i+1). By the minimality
 * of i, j(i)=max{j : m>=j and key>=M[i-1,j]}, hence M[i-1,j(i)+1]>key. But since M is sorted we also have:
 * M[i,x]>=M[i-1,j(i)+1]>key, contradicting the assumption and thus proving the claim.

Claim 3.2: The algorithm is correct. Proof:


 * If key is an element of the matrix: let x:=min{i : M[i,j]=key for some j=1..m}
 * and y:=max{j : M[x,j]=key}. The algorithm will keep running until it reaches line x, because for i=1..x-1
 * j(i+1)>0 (or else key=M[x,y]>=M[i,1]>key, which is absurd) and key>M[i,j(i+1)] (equality is impossible
 * by the minimality of x). When it reaches line x, by lemma 3.1, j(x+1)=max{j : m>=j and key>=M[x,j]}=y. Since M[x,y]=key
 * the algorithm will stop at the start of the next iteration and return (x,y), which is correct.


 * If key is not an element of the matrix: since for i=1..n j(i+1)=max{j : m>=j and key>=M[i,j]}, and since
 * M[i,j]!=key for all i,j, we have key>M[i,j(i+1)] for every line reached with j(i+1)>0.
 * Thus the algorithm will stop either at the first i with j(i+1)=0
 * or after the last row (i is increased by 1 in every iteration, so the algorithm must terminate).
 * In either case it returns NULL, which is correct.

3.3
 // Build M of size n x n with elements M[i,j]=A[i]+A[j]
 // (A is sorted, so M is sorted along rows and columns)
 define function find_sum(A,key):
     for i=1..n
         for j=1..n
             M[i,j]=A[i]+A[j]
     // both loops complete exactly n cycles, one nested inside the other,
     // so building M takes Θ(n^2)
     z=scan(M,key)
     // by the analysis of 'scan' above, scan runs in Θ(n+n)=Θ(n), so the
     // running time of the whole algorithm is Θ(n^2)
     if z==NULL: return NULL
     else: return z    // z is (i,j) such that A[i]+A[j]=M[i,j]=key
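A Python sketch of the same idea (my own compact rendering, assuming a sorted input list: instead of materialising M, the entry M[i][j]=a[i]+a[j] is computed on demand, and the staircase walk is the same as scan's):

```python
def find_sum(a, key):
    """Given a sorted list a, return (i, j) with a[i] + a[j] == key, or
    None.  Views M[i][j] = a[i] + a[j] as a row- and column-sorted
    matrix and staircase-searches it from the top-right corner."""
    n = len(a)
    i, j = 0, n - 1                     # top-right corner of the implicit M
    while i < n and j >= 0:
        s = a[i] + a[j]                 # s == M[i][j]
        if s == key:
            return (i, j)
        if s > key:
            j -= 1                      # all remaining entries in column j are > key
        else:
            i += 1                      # all remaining entries in row i are < key
    return None
```

Note the walk itself costs only O(n); the Θ(n^2) in the pseudocode comes from building M explicitly.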

4.1
Clearly the constant functions f=g=1 satisfy the definitions for all n with c=1.

4.2
Impossible: by f=o(g) we have: $$\exists N \forall n>N\ 0 \le f(n) < g(n) $$ and by g=o(f): $$\exists M \forall n>M\ 0 \le g(n) < f(n) $$ and so for every $$n>max\{M,N\}$$ we get a contradiction: $$0 \le f(n) < g(n) < f(n)$$.

4.3
Impossible: f=ω(f) would require f(n)>c*f(n) for every constant c>0 and all sufficiently large n, which fails already for c=1.

4.4
Impossible: f+g>=f, and so f=ω(f+g) implies f=ω(f) (because f+g>=f), which is impossible by 4.3.

4.5
f=Θ(1) iff f is bounded by a constant c. Let g:=0 and f:=1=f+g; then f=Θ(1) and g=0=o(1).

5.1
T(n)=5T(n/5)+n. $$log_5(5) = 1$$, so we have $$f(n) = n = \Theta \left ( n^{log_5 5} \right ) $$ and by case 2 of the master theorem T(n)=Θ(n*lg(n)).
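As a quick numerical sanity check (not part of the exercise), the recurrence can be unrolled for powers of 5, assuming a base case T(1)=1:

```python
def T(n):
    """Unroll T(n) = 5*T(n/5) + n with T(1) = 1, for n a power of 5,
    to sanity-check the Theta(n*lg n) answer from case 2."""
    return 1 if n <= 1 else 5 * T(n // 5) + n

# For n = 5^k this gives T(n) = n*(k+1) = n*(log_5(n) + 1),
# i.e. Theta(n log n), matching the master-theorem result.
```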

5.2
T(n)=17T(n/4)+n^3. $$16=4^2<17<64=4^3 \to\ 2 < log_4{17} < 3$$, so for $$\epsilon = 3 - log_4{17} > 0$$ we have $$f(n)=n^3 = n^{log_4{17}+\epsilon}$$, in particular $$f(n)=\Omega \left ( n^{log_4{17}+\epsilon} \right )$$. f(n)=n^3 also satisfies the regularity condition $$\forall n \ge 1\ 17(n/4)^3 = (17/64)n^3 \le 1/2(n^3)$$, so by case 3 of the master theorem T(n)=Θ(f(n))=Θ(n^3).

5.3
T(n)=32T(n/2)+(n^2)lg^5 n. lg32=5, and for ε=2 we have $$(n^2)lg^5 n = O(n^3) = O(n^{5-\epsilon})$$, so by case 1 of the theorem T(n)=Θ(n^lg32)=Θ(n^5).

5.4
T(n)=3T(n/3)+lg(n!). Using Stirling's estimation: $$lg (n/e)^n \le lg n! \le lg n^n = nlg n\ \to\ lg n! = \Theta(nlg n)$$, so lg(n!) <= dnlg n for some constant d>0. I will show that $$T(n)=\mathrm{O} (nlg^2 n)$$ by the substitution method: assume $$T(k) \le cklg^2 k$$ for all k<n; then $$T(n) \le 3c(n/3)lg^{2}(n/3) + dnlg n\ =\ cnlg^{2}n + nlg n(d-2clg3) + cnlg^{2}3\ \le\ cnlg^{2}n$$ for $$c \ge d/lg3$$ and n large enough. The only problem is the boundary at n=1 (since lg^2 1 = 0): assuming T(1)=T(2)=T(3)=T(4)=const, for 2<=k<=4 take c large enough that clg^2 k >= T(k); for k>=5 the recurrence doesn't depend directly on T(1).

Again by the substitution method, a proof of T(n)>=cnlg^2 n, where d>0 is a constant with lg(n!) >= dnlg n for large n (by Stirling's estimation): assume $$T(k) \ge cklg^2 k$$ for all k<n; then $$T(n) \ge 3c(n/3)lg^{2}(n/3) + dnlg n\ =\ cnlg^{2}n +nlg n (d-2clg3)\ +\ cnlg^{2}3\ \ge\ cnlg^2 n$$ whenever $$c \le d/(2lg3)$$. And for small enough c the boundary holds as well: T(1)=T(2)=...=T(4)=const>=clg^2 4=4c.

Combining both proofs we get T(n)=Θ(nlg^2 n).

5.5
T(n)=4T(n/8)+20407n^(1/2). $$log_8 4\ =\ 2/3 = 1/2 + 1/6$$ and since $$n^{1/2}\ = O(n^{2/3-1/6})$$ we have by case 1 of the master theorem $$T(n) = \Theta(n^{2/3})$$.

5.6
T(n)=5T(n^(1/5))+lglgn. Change of variables: set $$m = log_{5}n,\ S(m):= T(5^m)$$ and we have: $$S(m) = T(5^m) = 5T(5^{m/5})\ + lg lg 5^m\ = 5S(m/5)\ + lg m\ + lg lg 5$$. Now $$lg m\ + lg lg 5\ =\ O(m^{1-\epsilon})$$ for ε=1/2, so by case 1 of the master theorem $$S(m)=\Theta(m^{log_5 5})=\Theta(m)$$. Hence T(n)=Θ(log n).