## Gaussian Process Kernels

As I point out in http://www.jianping-lai.com/2017/03/10/guassian-process/, a kernel (covariance) matrix $K$ can be decomposed as $K = LL^\top$, where a sample is then drawn as $y = Lu$ and $u \sim \mathcal{N}(0, I)$.

For the linear kernel $k(x, x') = x x'$ evaluated on the grid $x \in \{0, 0.2, 0.4, 0.6, 0.8\}$, the two factors $L$ are:

%%%%%%%%%%%% SVD %%%%%%%%%%%%%
+0.00 +0.00 +0.00 +0.00 +0.00
-0.20 +0.00 -0.00 -0.00 +0.00
-0.40 +0.00 +0.00 -0.00 +0.00
-0.60 -0.00 -0.00 -0.00 +0.00
-0.80 +0.00 +0.00 +0.00 +0.00
%%%%%%%%% Cholesky %%%%%%%%%%%
+0.00 +0.00 +0.00 +0.00 +0.00
+0.00 +0.20 +0.00 +0.00 +0.00
+0.00 +0.40 +0.00 +0.00 +0.00
+0.00 +0.60 +0.00 +0.00 +0.00
+0.00 +0.80 +0.00 +0.00 +0.00

Both the SVD and the Cholesky decomposition lead to a factor $L$ with a single nonzero column proportional to $x$, so that

$$y = Lu \;\Rightarrow\; y_i = \pm\, x_i\, u, \qquad u \sim \mathcal{N}(0, 1),$$

as we have $K_{ij} = x_i x_j$, i.e. $K = xx^\top$ is rank one. Each sample is therefore a straight line through the origin.
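A minimal NumPy check of this claim, on the same grid as the factors printed above (note that `np.linalg.cholesky` requires strict positive definiteness and would reject this rank-one matrix, so the SVD route is used here):

```python
import numpy as np

x = np.array([0.0, 0.2, 0.4, 0.6, 0.8])
K = np.outer(x, x)  # linear kernel: K_ij = x_i * x_j, rank one

# SVD factor: K = U S U^T  =>  L = U sqrt(S), so L L^T = K
U, S, _ = np.linalg.svd(K)
L = U * np.sqrt(S)

# Only the first singular value is nonzero, so L has a single
# nonzero column, equal to +/- x.
assert np.allclose(S[1:], 0.0)
assert np.allclose(np.abs(L[:, 0]), x)

# Hence every sample y = L u is a scalar multiple of x: a straight line.
u = np.random.default_rng(0).standard_normal(len(x))
y = L @ u
```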

%%%%%%%%%%%% SVD %%%%%%%%%%%%%
0.1
0.1
0.1
0.1
0.1
0.1
0.1
0.1
0.1
%%%%%%%%% Cholesky %%%%%%%%%%%
0.1
0.1
0.1
0.1
0.1
0.1
0.1
0.1
0.1

This result is essentially saying that the difference $y_{i+1} - y_i$ between two neighboring data points has a constant standard deviation of $0.1$ (the subscript indexes the data points sequentially). This gives you a randomized but continuous structure.
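The extracted text does not show which kernel produced the listing above; as a sketch, a Brownian-motion kernel $k(x, x') = \min(x, x')$ on a grid with spacing $0.01$ (an assumption, chosen because it reproduces exactly these statistics) gives increments with standard deviation $0.1$:

```python
import numpy as np

# Assumed kernel: min(x, x') on a grid of spacing 0.01, used here
# for illustration only -- the original post's kernel is not shown.
x = np.arange(1, 11) * 0.01             # 0.01, 0.02, ..., 0.10
K = np.minimum(x[:, None], x[None, :])  # K_ij = min(x_i, x_j)

# This K is positive definite, so Cholesky applies directly; every
# nonzero entry of the lower-triangular factor comes out as 0.1.
L = np.linalg.cholesky(K)

# Standard deviation of each increment y_{i+1} - y_i is
# || L[i+1] - L[i] || = 0.1 for every i.
inc_std = np.linalg.norm(L[1:] - L[:-1], axis=1)
assert np.allclose(inc_std, 0.1)
```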

## Gaussian Process

For a Gaussian process, sampling via the SVD is equivalent to sampling via the Cholesky decomposition: when the covariance matrix is positive definite, both produce a factor $L$ with $LL^\top = K$, so the resulting sample distributions are identical even though the factors themselves differ.

A Gaussian-process sample can be written as the random variable

$$y = Lu,$$

where $K = LL^\top$ is the covariance matrix and $u \sim \mathcal{N}(0, I)$.
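A sketch of this equivalence (the RBF kernel, grid, and length scale below are illustrative choices, not from the original post):

```python
import numpy as np

# Illustrative positive-definite covariance from an RBF kernel.
x = np.linspace(0.0, 1.0, 6)
K = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / 0.2**2)
K += 1e-9 * np.eye(len(x))  # jitter to keep the factorizations stable

# SVD factor: K = U S U^T  =>  L_svd = U sqrt(S)
U, S, _ = np.linalg.svd(K)
L_svd = U * np.sqrt(S)

# Cholesky factor: lower triangular L with L L^T = K
L_chol = np.linalg.cholesky(K)

# The factors differ, but both satisfy L L^T = K, so y = L u has
# covariance E[y y^T] = L E[u u^T] L^T = K in either case.
assert np.allclose(L_svd @ L_svd.T, K)
assert np.allclose(L_chol @ L_chol.T, K)
assert not np.allclose(L_svd, L_chol)  # the factors themselves differ
```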