Category: Algorithm
Other Algorithm
Water Jug Problem
The only thing we need to prove is this:

if x and y are coprime, then we can reach every integer z in [0, x + y], and only those. (1)
Then for jugs of sizes gx and gy with GCD g (x and y coprime), we can reach exactly every z in { i * g | i in [0, x + y] }.
Now, let's see how to prove (1).
Let x be the smaller one, and y the greater one.
First fill both jugs to full, so we have x and y water respectively, and x + y water in total.
Then we pour out x water each time (emptying the x jug and refilling it from the y jug) until we can't.
Now we have reached these different z:

y + x, y, y - x, y - 2x, ..., y % x
Finally we have y % x water left; we pour it into the x jug, then fill the y jug to full.
Now the two jugs hold y % x and y water respectively, and y + y % x water in total.
Then we pour from the y jug into the x jug until the x jug is full, and afterwards do the same thing as before: pour out x water each time until we can't.
Finally we get (y + y % x) % x = (y % x + y % x) % x = (2y) % x water left.
Now we have these different z:

y + y % x, y + y % x - x, y + y % x - 2x, ..., (2y) % x
Do this x times in total, and we get these z:

y + (2y) % x, y + (2y) % x - x, y + (2y) % x - 2x, ..., (3y) % x
...
y + ((x-1)y) % x, y + ((x-1)y) % x - x, y + ((x-1)y) % x - 2x, ..., (xy) % x
Then you see (xy) % x = 0, and the set { y % x, (2y) % x, (3y) % x, ..., ((x-1)y) % x } equals { 1, 2, 3, ..., x - 1 }. (2)
Proof of (2):
Taking a multiple of y modulo x gives at most x - 1 distinct nonzero results: 1, 2, ..., x - 1. (None of them is 0: since x and y are coprime, (ay) % x = 0 would force x to divide a, impossible for a in [1, x - 1].)
We have x - 1 expressions, so suppose two of them were the same:
let a != b, with a, b in [1, x - 1], and (ay) % x = (by) % x,
then we get ((a - b)y) % x = 0.
Since x and y are coprime, x must divide a - b, i.e. (a - b) % x = 0.
But 1 <= a, b <= x - 1 implies |a - b| < x, so a = b. That's impossible.
So the x - 1 residues are distinct and nonzero, and the set in (2) is exactly { 1, 2, ..., x - 1 }.
so we can get any z in [0, x + y] water for coprime x and y.
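As a sanity check (my addition, not part of the original proof), claim (1) can be verified by brute force: a small BFS over jug states (a, b), where each move fills a jug, empties a jug, or pours one into the other, and we collect every total a + b that some state can hold.

```python
from collections import deque

def reachable_totals(x, y):
    """BFS over jug states (a, b) for jugs of capacity x and y.
    Each move fills one jug, empties one jug, or pours one into the other.
    Returns the set of total amounts a + b reachable in some state."""
    start = (0, 0)
    seen = {start}
    queue = deque([start])
    while queue:
        a, b = queue.popleft()
        moves = [
            (x, b), (a, y),          # fill one jug
            (0, b), (a, 0),          # empty one jug
            # pour a -> b, then b -> a (limited by space in the target jug)
            (a - min(a, y - b), b + min(a, y - b)),
            (a + min(b, x - a), b - min(b, x - a)),
        ]
        for s in moves:
            if s not in seen:
                seen.add(s)
                queue.append(s)
    return {a + b for a, b in seen}

# Coprime 3 and 5: every total in [0, 8] is reachable.
print(reachable_totals(3, 5) == set(range(9)))   # True
```

For non-coprime sizes, e.g. 2 and 4, only the multiples of the GCD appear, matching the generalization above.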
Algorithm list
Binary Search Tree:
Insert O(log(n)) on average (O(n) worst case if the tree is unbalanced)
Delete O(log(n)) on average (O(n) worst case if the tree is unbalanced)
Array Shuffle:
To prove this algorithm correct, you simply need to show that, for each element at position i in the original array, the chance of ending up at any position j in the shuffled array is 1/len(array).
Graph:
Detect cycle in a directed graph O(V+E)
Detect cycle in an undirected graph O(V+E)
Dijkstra’s algorithm
A* algorithm
Water Jug Problem:
(quoted from ShuangquanLi on leetcode)
Convex Hull:
Jarvis’s Algorithm O(n^2)
Monotone chain O(n log(n)) (the complexity comes primarily from sorting the points)
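The uniform-position property claimed for Array Shuffle above is the defining property of the Fisher-Yates shuffle; a minimal sketch (the function name is mine):

```python
import random

def shuffle(array):
    """In-place Fisher-Yates shuffle. Each of the n! permutations is
    equally likely, so every element lands at every position with
    probability 1/len(array)."""
    for i in range(len(array) - 1, 0, -1):
        j = random.randint(0, i)              # random index in [0, i]
        array[i], array[j] = array[j], array[i]
    return array
```

The key point for the proof is that position i is fixed by exactly one swap, chosen uniformly from the i + 1 candidates still in play.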
Time Complexity
Algorithm | Best | Average | Worst | Space (Worst)
--- | --- | --- | --- | ---
Quicksort | Ω(n log(n)) | Θ(n log(n)) | O(n^2) | O(log(n))
Mergesort | Ω(n log(n)) | Θ(n log(n)) | O(n log(n)) | O(n)
Timsort | Ω(n) | Θ(n log(n)) | O(n log(n)) | O(n)
Heapsort | Ω(n log(n)) | Θ(n log(n)) | O(n log(n)) | O(1)
Bubble Sort | Ω(n) | Θ(n^2) | O(n^2) | O(1)
Insertion Sort | Ω(n) | Θ(n^2) | O(n^2) | O(1)
Selection Sort | Ω(n^2) | Θ(n^2) | O(n^2) | O(1)
Tree Sort | Ω(n log(n)) | Θ(n log(n)) | O(n^2) | O(n)
Shell Sort | Ω(n log(n)) | Θ(n(log(n))^2) | O(n(log(n))^2) | O(1)
Bucket Sort | Ω(n+k) | Θ(n+k) | O(n^2) | O(n)
Radix Sort | Ω(nk) | Θ(nk) | O(nk) | O(n+k)
Counting Sort | Ω(n+k) | Θ(n+k) | O(n+k) | O(k)
Cubesort | Ω(n) | Θ(n log(n)) | O(n log(n)) | O(n)
BFS | Ω(V+E) | Θ(V+E) | O(V+E) | O(V)
Path Finding Algorithm
Path finding algorithms appear in advanced interview questions. It is necessary to know these algorithms when you are interviewing with companies like Google.
Dijkstra’s algorithm
It starts with the origin node and searches all of its neighbors, assigning each a distance from the origin; then it iterates through the new nodes, searching all of their neighbors (even if a neighbor has been visited before) and keeping the shortest distance to the origin for each node. (See the wiki gif image for details.)
Here two collections are kept: one for the nodes still to be checked, and one for the nodes that should never be added back to the "to be checked" collection. The second one is not strictly necessary: if you re-check a previously finalized node, you will not change its distance value, so it will not be added back to the "to be checked" collection. To recover the path, you need to record, for each node, which parent node yielded its lowest distance.
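A minimal sketch of the idea above, with a priority queue playing the role of the "to be checked" collection (the adjacency-list representation {node: [(neighbor, weight), ...]} and the names are my assumptions):

```python
import heapq

def dijkstra(graph, origin):
    """graph: {node: [(neighbor, weight), ...]}.
    Returns (dist, parent): the shortest distance from origin to each
    reachable node, and the parent that yielded that lowest distance."""
    dist = {origin: 0}
    parent = {origin: None}
    heap = [(0, origin)]                      # nodes still to be checked
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue                          # stale entry, already improved
        for neighbor, weight in graph.get(node, []):
            nd = d + weight
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                parent[neighbor] = node       # remember best parent for the path
                heapq.heappush(heap, (nd, neighbor))
    return dist, parent
```

The "stale entry" check implements exactly the observation in the text: a node that was already finalized cannot get a better distance, so its leftover queue entries are simply skipped.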
A* algorithm
This one is a modified version of Dijkstra's algorithm. Here you have an estimated distance from each node to the destination, H(n), and the value V(n) stored in each node is the sum of F(n), the distance to the origin, and H(n). The next node to be expanded is the node with the lowest V(n). Thus, A* does not search all the nodes, and unless H(n) never overestimates the true remaining distance, it does not guarantee the best solution. See the youtube video for a straightforward example.
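A sketch of the same loop ordered by V(n) = F(n) + H(n); h is the caller-supplied heuristic H(n), and the graph representation and names are again my assumptions:

```python
import heapq

def a_star(graph, origin, goal, h):
    """graph: {node: [(neighbor, weight), ...]}; h(node) estimates the
    remaining distance from node to goal. Expands the node with the
    lowest V(n) = F(n) + H(n) and returns the distance found to goal."""
    dist = {origin: 0}                        # F(n): best known distance to origin
    heap = [(h(origin), origin)]              # ordered by V(n) = F(n) + H(n)
    while heap:
        v, node = heapq.heappop(heap)
        if node == goal:
            return dist[node]
        if v - h(node) > dist.get(node, float("inf")):
            continue                          # stale entry
        for neighbor, weight in graph.get(node, []):
            nd = dist[node] + weight
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                heapq.heappush(heap, (nd + h(neighbor), neighbor))
    return None                               # goal unreachable
```

With h(n) = 0 this degenerates to Dijkstra's algorithm; a better heuristic prunes more of the search at the cost of the optimality guarantee if it overestimates.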
find the least common multiple
The idea of finding the least common multiple is based on finding the greatest common divisor.
LCM(A,B) = A*B/GCD(A,B)
def LCM(A, B):
    # integer division keeps the result exact: GCD(A, B) always divides A * B
    return A * B // GCD(A, B)
find the greatest common factor algorithm
Suppose we are trying to find the greatest common factor between two numbers A and B, with A>B and A%B=R.
If we have a common factor C, such that A%C=0 and B%C=0, then
$$\textrm{A=N*B+R} \Rightarrow \textrm{A%C=(N*B+R)%C}$$
As B%C=0, $$\textrm{(N*B+R)%C=(N*(B%C)+R)%C=R%C}$$.
So we know: if there exists C with A%C=0 and B%C=0, and A%B=R, then R%C=0. The reverse is also true. Thus GCD(A,B)$$\equiv$$GCD(B,R).
With this idea, what we do is: given A and B with A>B, if A%B!=0, we assign A%B$$\rightarrow$$A. The new A is smaller than B. Then if B%A!=0, we do the same: B%A$$\rightarrow$$B. Repeat the same procedure until A%B or B%A == 0. In either case, the divisor is the greatest common factor.
def GCD(n, m):
    A, B = max(n, m), min(n, m)
    while A % B != 0:
        R = A % B
        A = B
        B = R
    return B
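A quick self-contained check of GCD and LCM together (the example values are mine; the definitions restate the two functions above so the snippet runs on its own):

```python
def GCD(n, m):
    # Euclid's algorithm, as derived above
    A, B = max(n, m), min(n, m)
    while A % B != 0:
        A, B = B, A % B
    return B

def LCM(A, B):
    return A * B // GCD(A, B)

print(GCD(12, 18))   # 6
print(LCM(4, 6))     # 12
```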