Anaconda virtual environment setup python 2, 3 and R

After installing Anaconda 3, separate conda environments can be created for Python 2, Python 3, and R.
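A minimal sketch of the commands; the environment names py27, py36, and r_env are only placeholders:

# Python 2.7 environment
conda create -n py27 python=2.7 anaconda

# Python 3 environment
conda create -n py36 python=3.6 anaconda

# R environment with the essential R packages
conda create -n r_env r-base r-essentials

# switch between environments (older conda syntax; newer versions use "conda activate")
source activate py27
source deactivate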

 

Tree Based Models

Regression tree

XGBoost handles only numeric vectors, and so do the decision trees in scikit-learn.

What to do when you have categorical data?

Conversion from categorical to numeric variables
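A minimal sketch of the two usual conversions, one-hot encoding and integer (label) encoding; the toy column names here are made up for illustration:

import pandas as pd

# toy data with one categorical column
df = pd.DataFrame({
    "color": ["red", "green", "blue", "green"],
    "size": [1.0, 2.5, 3.2, 0.7],
})

# one-hot encoding: every category becomes its own 0/1 column,
# which XGBoost and sklearn trees can consume directly
df_onehot = pd.get_dummies(df, columns=["color"])

# integer (label) encoding: map each category to an integer code
df["color_code"] = df["color"].astype("category").cat.codes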

AdaBoost (Adaptive Boosting): http://machinelearningmastery.com/boosting-and-adaboost-for-machine-learning/

Parameters for XGBoost

Complete Guide to Parameter Tuning in XGBoost (with codes in Python)

reg:linear is simply the squared-error loss function.
reg:logistic uses the logistic-regression loss function; see
https://stats.stackexchange.com/questions/229645/why-there-are-two-different-logistic-loss-formulation-notations/231994#231994
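Written out, with $\hat y$ denoting the raw model score:

reg:linear: $\ell(y, \hat y) = \tfrac{1}{2}(y - \hat y)^2$

reg:logistic: $\ell(y, \hat y) = -\left[\, y \log \sigma(\hat y) + (1 - y) \log(1 - \sigma(\hat y)) \,\right]$ with $\sigma(\hat y) = 1/(1 + e^{-\hat y})$ and $y \in \{0, 1\}$, or equivalently $\log(1 + e^{-t \hat y})$ when the labels are coded as $t \in \{-1, +1\}$; these are the two notations discussed in the link above.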

objective function vs eval_metric

https://stackoverflow.com/questions/34178287/difference-between-objective-and-feval-in-xgboost
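Roughly: the objective is the loss that is actually differentiated to grow the trees, while eval_metric (or a custom feval function) is only used to report performance on the watchlist each round. A minimal sketch with xgboost's native Python API; the data here is random just to make the snippet runnable, and newer xgboost versions name the feval argument custom_metric:

import numpy as np
import xgboost as xgb

# made-up data, only so the snippet runs
X_train, y_train = np.random.rand(200, 5), np.random.randint(0, 2, 200)
X_val, y_val = np.random.rand(80, 5), np.random.randint(0, 2, 80)
dtrain = xgb.DMatrix(X_train, label=y_train)
dval = xgb.DMatrix(X_val, label=y_val)

params = {
    "objective": "reg:logistic",  # the loss that is optimized
    "eval_metric": "logloss",     # only reported on the watchlist
    "max_depth": 3,
    "eta": 0.1,
}

# a custom evaluation metric: (predictions, DMatrix) -> (name, value)
def mean_abs_error(preds, dmat):
    labels = dmat.get_label()
    return "mae", float(np.mean(np.abs(preds - labels)))

bst = xgb.train(params, dtrain, num_boost_round=50,
                evals=[(dval, "val")], feval=mean_abs_error)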

Use a neural network (TensorFlow) for regression 2

Alright, I also tried to see what happens if only the features (x) are normalized. The y range produced by the previous formula is too close to 1, so I changed the formula.

Then the calculation is repeated with and without normalizing the y's. The results and the conclusion are obvious: yes, you definitely want to normalize y as well.
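A minimal sketch of the preprocessing being compared; the toy data and formula below are made up, since the original formula is not reproduced in this logbook:

import numpy as np

# stand-in for the toy data set
x = np.random.uniform(0.0, 10.0, size=(1000, 3))
y = (2.0 * x[:, 0] + 0.5 * x[:, 1] ** 2 - x[:, 2]).reshape(-1, 1)

def standardize(a):
    # scale each column to zero mean and unit standard deviation
    mean, std = a.mean(axis=0), a.std(axis=0)
    return (a - mean) / std, mean, std

# case 1: only the features normalized
x_norm, x_mean, x_std = standardize(x)

# case 2: features and target normalized
y_norm, y_mean, y_std = standardize(y)

# predictions made in normalized units are mapped back before scoring:
# y_pred = y_pred_norm * y_std + y_mean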


Use a neural network (TensorFlow) for regression 1

After a discussion with Mengxi Wu, the poor prediction rate may come from the inputs not being normalized. Mengxi also points out that the initialization of the weights may need to be related to the number of inputs feeding each hidden neuron. http://stats.stackexchange.com/questions/47590/what-are-good-initial-weights-in-a-neural-network points out that the weights need to be initialized uniformly in $(-1/\sqrt{d}, 1/\sqrt{d})$, where $d$ is the number of inputs to a given neuron.

The first step of the modification is normalizing all inputs so that each has a standard deviation of 1.
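A minimal sketch of these two modifications in TensorFlow 1.x style; the data and layer sizes are placeholders:

import numpy as np
import tensorflow as tf

# stand-in for the inputs
x = np.random.uniform(0.0, 10.0, size=(1000, 3)).astype(np.float32)
x_norm = (x - x.mean(axis=0)) / x.std(axis=0)   # each feature now has std 1

n_in, n_hidden = x_norm.shape[1], 60

# weights drawn uniformly from (-1/sqrt(d), 1/sqrt(d)), d = inputs per neuron
limit1 = 1.0 / np.sqrt(n_in)
W1 = tf.Variable(tf.random_uniform([n_in, n_hidden], -limit1, limit1))
b1 = tf.Variable(tf.zeros([n_hidden]))

limit2 = 1.0 / np.sqrt(n_hidden)
W2 = tf.Variable(tf.random_uniform([n_hidden, 1], -limit2, limit2))
b2 = tf.Variable(tf.zeros([1]))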

The training result is consistently above 96% and is obviously better than the previous ones without normalization.


This comparison is repeated for hidden = [20, 30, 40, 50, 60, 80, 100]; the prediction rate converges better with larger numbers of hidden neurons. The same plot is also made without normalization.

Later on, we added weight initialization using the Xavier method. The results are also attached.
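For reference, in TensorFlow 1.x the Xavier initializer can be used as below; the layer sizes are the same placeholders as above:

import tensorflow as tf

n_in, n_hidden = 3, 60   # placeholder layer sizes
xavier = tf.contrib.layers.xavier_initializer()
W1 = tf.get_variable("W1", shape=[n_in, n_hidden], initializer=xavier)
W2 = tf.get_variable("W2", shape=[n_hidden, 1], initializer=xavier)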

Use a neural network (TensorFlow) for regression 0

This post is not a tutorial, but rather a logbook of what we attempted.

The learning logbook starts with using a neural network to do regression.

The data is manually generated using a very simple formula; initially, we do not add any noise term.

The accuracy function is defined below.
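The exact definition used in the original runs is not reproduced here; a plausible placeholder, counting a prediction as correct when it lies within a small relative tolerance of the true value, is:

import numpy as np

def accuracy(y_pred, y_true, tol=0.05):
    # fraction of predictions within a relative tolerance of the truth
    # (placeholder definition; the logbook's original metric may differ)
    rel_err = np.abs(y_pred - y_true) / (np.abs(y_true) + 1e-12)
    return float(np.mean(rel_err < tol))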

In the first attempt, a neural network with a single hidden layer is applied.

It is observed that, with 60 hidden neurons and 25,000 training steps, the prediction accuracy fluctuates widely depending on the initialization values.


Fig 1. Prediction results with 60 hidden neurons, repeated 50 times.
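A minimal sketch of such a one-hidden-layer regression network in TensorFlow 1.x; the 60 hidden neurons and 25,000 training steps follow the text, while the data formula, activation, and optimizer are illustrative assumptions:

import numpy as np
import tensorflow as tf

# stand-in for the toy data (the original formula is not shown here)
x_data = np.random.uniform(0.0, 10.0, size=(1000, 3)).astype(np.float32)
y_data = (2.0 * x_data[:, 0] + 0.5 * x_data[:, 1] ** 2).reshape(-1, 1).astype(np.float32)

n_in, n_hidden = x_data.shape[1], 60

x = tf.placeholder(tf.float32, [None, n_in])
y = tf.placeholder(tf.float32, [None, 1])

# single hidden layer
W1 = tf.Variable(tf.random_normal([n_in, n_hidden], stddev=0.1))
b1 = tf.Variable(tf.zeros([n_hidden]))
hidden = tf.nn.relu(tf.matmul(x, W1) + b1)

# linear output layer for regression
W2 = tf.Variable(tf.random_normal([n_hidden, 1], stddev=0.1))
b2 = tf.Variable(tf.zeros([1]))
y_pred = tf.matmul(hidden, W2) + b2

loss = tf.reduce_mean(tf.square(y_pred - y))
train_op = tf.train.AdamOptimizer(0.001).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for step in range(25000):
        sess.run(train_op, feed_dict={x: x_data, y: y_data})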

Gaussian Process Kernels

As I point out in http://www.jianping-lai.com/2017/03/10/guassian-process/, the kernel matrix can be decomposed as $K = LL^\top$, where a sample from the process is $y = Lu$ and $u \sim \mathcal{N}(0, I)$.

For the linear kernel $k(x, x') = x \, x'$,
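the factors printed below can be reproduced with something like the following; the grid x = 0, 0.2, ..., 0.8 is inferred from the printout, and the tiny jitter on the diagonal is only there so the Cholesky of this rank-1 matrix does not fail:

import numpy as np

x = np.array([0.0, 0.2, 0.4, 0.6, 0.8])
K = np.outer(x, x) + 1e-12 * np.eye(len(x))   # linear kernel k(x, x') = x * x'

# SVD factor: L = U sqrt(S), so that L @ L.T = K
U, S, _ = np.linalg.svd(K)
L_svd = U @ np.diag(np.sqrt(S))

# Cholesky factor: lower-triangular L with L @ L.T = K
L_chol = np.linalg.cholesky(K)

# one sample path from the GP prior: y = L u with u ~ N(0, I)
u = np.random.randn(len(x))
y_sample = L_chol @ u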

%%%%%%%%%%%% SVD %%%%%%%%%%%%%
+0.00 +0.00 +0.00 +0.00 +0.00
-0.20 +0.00 -0.00 -0.00 +0.00
-0.40 +0.00 +0.00 -0.00 +0.00
-0.60 -0.00 -0.00 -0.00 +0.00
-0.80 +0.00 +0.00 +0.00 +0.00
%%%%%%%%% Cholesky %%%%%%%%%%%
+0.00 +0.00 +0.00 +0.00 +0.00
+0.00 +0.20 +0.00 +0.00 +0.00
+0.00 +0.40 +0.00 +0.00 +0.00
+0.00 +0.60 +0.00 +0.00 +0.00
+0.00 +0.80 +0.00 +0.00 +0.00

Both the SVD and the Cholesky decomposition lead to a factor $L$ with a single non-zero column proportional to $x$, so the sample $y = Lu$ with $u \sim \mathcal{N}(0, I)$ is just a random multiple of $x$. These results lead to a straight line.

%%%%%%%%%%%% SVD %%%%%%%%%%%%%
0.1
0.1
0.1
0.1
0.1
0.1
0.1
0.1
0.1
%%%%%%%%% Cholesky %%%%%%%%%%%
0.1
0.1
0.1
0.1
0.1
0.1
0.1
0.1
0.1

This result is essentially saying that the difference $y_{i+1} - y_i$ between two nearby data points has a constant standard deviation (0.1 in the printout above; the subscript $i$ indexes the data points sequentially). This gives you a randomized but continuous data structure.