## Water Jug Problem

The only thing we need to prove is this:

(1) for jugs of sizes gx and gy, where g is their GCD (so x and y are coprime), we can reach exactly the amounts z in { i * g | i in [0, x + y] }.

Now, let's see how to prove (1).
Since every amount scales by g, it suffices to consider coprime jug sizes x and y.
Let x be the smaller one, and y the greater one.
Then fill both jugs to full: we have x and y water respectively, and x + y water in total.
Then we pour away x water at a time (empty the x jug, refill it from the y jug, empty it again) until we can't.

Now we have passed through these different z:

x + y, y, y - x, y - 2x, ... , y % x
Finally we have y % x water left. We pour it into the x jug,
then fill the y jug to full.
Now the two jugs hold y % x and y water respectively,
and y + y % x water in total.
Then we pour from the y jug into the x jug until the x jug is full,
and afterwards do the same thing as before:
pour away x water at a time until we can't.

Finally we are left with (y + y % x) % x = (y % x + y % x) % x = (2y) % x water.

Now we have passed through these different z:

y + y % x, y + y % x - x, ... , (2y) % x

Do this x times; after the k-th round the leftover is (ky) % x, so we get these z:

y % x, (2y) % x, (3y) % x, ... , (xy) % x

Then you see (xy) % x = 0, and the set

{ y % x, (2y) % x, (3y) % x, ... , ((x-1)y) % x } equals { 1, 2, 3, ... , x - 1 }.  (2)

Proof of (2):
modulo x there are at most x - 1 different nonzero results: 1, 2, 3, ..., x - 1.
We have x - 1 expressions; suppose two of them were equal,
i.e. a != b in [1, x - 1] with (ay) % x = (by) % x.
Then ((a - b)y) % x = 0, i.e. x divides (a - b)y.
Since x and y are coprime, x must divide a - b, so (a - b) % x = 0.
But 1 <= a, b <= x - 1 gives |a - b| < x, so a = b, contradicting a != b. So all x - 1 results are distinct and nonzero.
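Claim (2) is easy to sanity-check numerically. A small sketch (the sample pairs are arbitrary coprime choices of mine):

```python
from math import gcd

def residues(x, y):
    # the set { y % x, (2y) % x, ..., ((x-1)y) % x } from claim (2)
    return {(k * y) % x for k in range(1, x)}

# for coprime x and y, the residues should be exactly {1, 2, ..., x - 1}
for x, y in [(3, 5), (4, 9), (7, 10)]:
    assert gcd(x, y) == 1
    assert residues(x, y) == set(range(1, x))
```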

So we can get any integer z in [0, x + y] for coprime x and y, which proves (1).
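The conclusion gives a direct solvability test, as in the LeetCode water jug question. A minimal sketch (the function name is my own):

```python
from math import gcd

def can_measure(x, y, z):
    """z is measurable with jugs of sizes x and y iff z <= x + y
    and z is a multiple of gcd(x, y)."""
    if z == 0:
        return True          # also covers x == y == 0
    if z > x + y:
        return False
    return z % gcd(x, y) == 0
```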

## Algorithm list

Insert: O(log(n))
Delete: O(log(n))

###### Array Shuffle:

Fisher-Yates algorithm

To prove this algorithm is correct, you simply need to show that each element at position i in the original array has probability 1/len(array) of ending up at any position j in the shuffled array.
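A minimal sketch of Fisher-Yates (standard algorithm; note `random.randint` bounds are inclusive):

```python
import random

def fisher_yates_shuffle(arr):
    """In-place Fisher-Yates shuffle: every permutation is equally likely."""
    for i in range(len(arr) - 1, 0, -1):
        j = random.randint(0, i)       # pick from the not-yet-fixed prefix, inclusive
        arr[i], arr[j] = arr[j], arr[i]
    return arr
```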

###### Water Jug Problem:

(quoted from ShuangquanLi on leetcode)

###### Convex Hull:

Jarvis’s Algorithm (gift wrapping): O(nh) for h hull points, O(n^2) worst case
Monotone chain: O(nlog(n)) (primarily the complexity is from sorting the points)
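A sketch of the monotone chain hull (standard algorithm; helper names are my own):

```python
def convex_hull(points):
    """Monotone chain; O(n log n) from the sort. Returns hull vertices in CCW order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); <= 0 means no left turn
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    def half_hull(seq):
        hull = []
        for p in seq:
            while len(hull) >= 2 and cross(hull[-2], hull[-1], p) <= 0:
                hull.pop()               # drop points that would make a right turn
            hull.append(p)
        return hull

    lower = half_hull(pts)
    upper = half_hull(reversed(pts))
    return lower[:-1] + upper[:-1]       # endpoints are shared, drop duplicates
```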

## Time Complexity

| Algorithm      | Best        | Average          | Worst            | Space (Worst) |
| -------------- | ----------- | ---------------- | ---------------- | ------------- |
| Quicksort      | Ω(n log(n)) | Θ(n log(n))      | O(n^2)           | O(log(n))     |
| Mergesort      | Ω(n log(n)) | Θ(n log(n))      | O(n log(n))      | O(n)          |
| Timsort        | Ω(n)        | Θ(n log(n))      | O(n log(n))      | O(n)          |
| Heapsort       | Ω(n log(n)) | Θ(n log(n))      | O(n log(n))      | O(1)          |
| Bubble Sort    | Ω(n)        | Θ(n^2)           | O(n^2)           | O(1)          |
| Insertion Sort | Ω(n)        | Θ(n^2)           | O(n^2)           | O(1)          |
| Selection Sort | Ω(n^2)      | Θ(n^2)           | O(n^2)           | O(1)          |
| Tree Sort      | Ω(n log(n)) | Θ(n log(n))      | O(n^2)           | O(n)          |
| Shell Sort     | Ω(n log(n)) | Θ(n(log(n))^2)   | O(n(log(n))^2)   | O(1)          |
| Bucket Sort    | Ω(n+k)      | Θ(n+k)           | O(n^2)           | O(n)          |
| Radix Sort     | Ω(nk)       | Θ(nk)            | O(nk)            | O(n+k)        |
| Counting Sort  | Ω(n+k)      | Θ(n+k)           | O(n+k)           | O(k)          |
| Cubesort       | Ω(n)        | Θ(n log(n))      | O(n log(n))      | O(n)          |

BFS: Θ(|V| + |E|) time, O(|V|) space.

## Path Finding Algorithm

Path-finding algorithms come up in advanced interview questions. It is necessary to know them when you are interviewing at companies like Google.

###### Dijkstra's algorithm

It starts with the origin node, searches all its neighbors, and assigns each of them a distance from the origin; then it iterates through the new nodes, searching all of their neighbors (even if a neighbor has been visited before) and keeping the shortest distance to the origin. (See the wiki gif image for details.)

Here two collections are kept: one for the nodes to be checked, and one for the nodes that should never be added back to the "to be checked" collection. However, the second is not strictly necessary: if you re-check a previously visited node, you will not improve its distance value, so it will not be added back. To recover the path, you need to record, for each node, which parent node yielded its lowest distance.
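The idea above can be sketched with a priority queue instead of a separate visited collection (a minimal sketch; the adjacency-list graph format is my own assumption):

```python
import heapq

def dijkstra(graph, origin):
    """graph: {node: [(neighbor, weight), ...]}. Returns (dist, parent) maps.
    Stale heap entries are simply skipped when popped, so no visited set is needed."""
    dist = {origin: 0}
    parent = {origin: None}
    heap = [(0, origin)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float('inf')):
            continue                      # stale entry: a shorter path was already found
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                parent[v] = u             # parent yielding the lowest distance
                heapq.heappush(heap, (nd, v))
    return dist, parent
```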

###### A* algorithm

This one is a modified version of Dijkstra's algorithm. Here you have an estimated distance from each node to the destination, H(n), and the value V(n) stored in each node is the sum of F(n), the distance to the origin, and H(n). The next node to be expanded is the node with the lowest V(n). Thus A* does not search all the nodes, and it guarantees the best solution only if H(n) never overestimates the true remaining distance (an admissible heuristic). See the youtube video for a straightforward example.
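A minimal A* sketch on a 4-connected grid, using Manhattan distance as H(n), which never overestimates there (the grid format and names are my own assumptions):

```python
import heapq

def a_star(grid, start, goal):
    """A* shortest path length on a grid of 0 (free) / 1 (wall) cells.
    Returns the number of steps, or None if the goal is unreachable."""
    rows, cols = len(grid), len(grid[0])

    def h(cell):
        # Manhattan distance: an admissible heuristic on a 4-connected grid
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    dist = {start: 0}                     # F(n): best known distance to origin
    heap = [(h(start), start)]            # ordered by V(n) = F(n) + H(n)
    while heap:
        _, (r, c) = heapq.heappop(heap)
        if (r, c) == goal:
            return dist[(r, c)]
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                nd = dist[(r, c)] + 1
                if nd < dist.get((nr, nc), float('inf')):
                    dist[(nr, nc)] = nd
                    heapq.heappush(heap, (nd + h((nr, nc)), (nr, nc)))
    return None
```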

## Greatest Common Factor Algorithm

Suppose we are trying to find the greatest common factor of two numbers A and B, with A > B and A % B = R.

Suppose we have a common factor C, so that A % C = 0 and B % C = 0.

Write A = kB + R. As B % C = 0, C divides kB; and since C also divides A, C must divide A - kB = R, i.e. R % C = 0.

So if there exists C with A % C = 0 and B % C = 0, and A % B = R, then R % C = 0. The reverse is also true. Thus GCD(A, B) = GCD(B, R).

With this idea, what we do is: start with A and B, A > B. If A % B != 0, assign A % B to A; the new A is smaller than B. Then if B % A != 0, do the same and assign B % A to B. Repeat the procedure until A % B or B % A == 0. In either case, the divisor at that step is the greatest common factor.
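The procedure above is the Euclidean algorithm; a minimal sketch:

```python
def gcd(a, b):
    """Euclidean algorithm: gcd(a, b) == gcd(b, a % b).
    Stops when the remainder is 0; the last divisor is the answer."""
    while b != 0:
        a, b = b, a % b
    return a
```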