Google Cloud Machine Learning

1. Create a Google Compute Engine instance
2. Activate the Google Cloud Machine Learning API under this project
3. Create a Google Cloud Storage bucket
4. Log in to the Google Compute Engine instance and create a folder for the ML project; in my case I called it MLtest. Inside this folder, two basic configuration files are required.

a. config.yaml

It is important to set "runtimeVersion" to the latest version; otherwise, some functions may not be available.
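A minimal config.yaml might look like this (the version shown is only an example; use the latest available):

    trainingInput:
      scaleTier: BASIC
      runtimeVersion: "1.4"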

b. setup.py
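And a minimal setup.py along these lines (the package name and dependencies are examples, not the exact ones used here):

    from setuptools import setup, find_packages

    setup(
        name='MLtest',
        version='0.1',
        packages=find_packages(),
        install_requires=['pandas'],  # add whatever the trainer needs
    )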

5. The files inside the folder are organized as shown below.
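Something like the following, assuming the training code is packaged as a trainer module (the package name is an assumption):

    MLtest/
        config.yaml
        setup.py
        trainer/
            __init__.py
            8-output.py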

To run a training module LOCALLY, such as 8-output.py, you type the command below.
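A sketch using gcloud's local-train mode (trainer.task stands in for the actual module name; adjust it to match your file):

    gcloud ml-engine local train \
        --module-name trainer.task \
        --package-path trainer/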

In order to submit the job to Google Cloud ML, the following command is required.
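Along these lines; the job name, bucket, and region are placeholders:

    gcloud ml-engine jobs submit training my_job_001 \
        --module-name trainer.task \
        --package-path trainer/ \
        --staging-bucket gs://my-ml-bucket \
        --region us-central1 \
        --config config.yaml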

However, the training code will not be able to read from or write to the Cloud Storage bucket directly. That being said, commands like the ones below will NOT work.
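For example, plain Python file handles do not understand gs:// paths (the bucket name is a placeholder):

    # This fails inside a Cloud ML job:
    with open('gs://my-ml-bucket/result.txt', 'w') as f:
        f.write('some results')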

File IO can instead be handled through TensorFlow's file_io module.
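A minimal sketch (the bucket path is a placeholder):

    from tensorflow.python.lib.io import file_io

    # Write to the Cloud Storage bucket through file_io
    with file_io.FileIO('gs://my-ml-bucket/result.txt', mode='w') as f:
        f.write('some results')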

Similarly, you can read a file the same way.
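For example:

    # Read from the bucket the same way
    with file_io.FileIO('gs://my-ml-bucket/input.csv', mode='r') as f:
        data = f.read()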

6. To check the current status of jobs:
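For instance (my_job_001 is a placeholder job name):

    gcloud ml-engine jobs list
    gcloud ml-engine jobs describe my_job_001
    gcloud ml-engine jobs stream-logs my_job_001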

 

Anaconda virtual environment setup: Python 2, 3 and R

After installing Anaconda 3:
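The extra environments can be created roughly as follows (the environment names are arbitrary):

    # Python 2.7 environment alongside the default Python 3
    conda create --name py27 python=2.7
    # R environment
    conda create --name r_env r-essentials r-base
    # Switch between environments
    source activate py27
    source deactivate
    # List all environments
    conda env list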

 

Tree-Based Models

Regression tree

XGBoost handles only numeric vectors, and so do the decision trees in scikit-learn.

What to do when you have categorical data?

Conversion from categorical to numeric variables
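One common approach is one-hot encoding, for example with pandas (the toy data is just for illustration):

    import pandas as pd

    df = pd.DataFrame({'color': ['red', 'green', 'blue', 'green'],
                       'size': [1, 2, 3, 2]})
    # Expand the categorical column into 0/1 indicator columns
    encoded = pd.get_dummies(df, columns=['color'])
    print(encoded)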

AdaBoost (Adaptive Boosting): http://machinelearningmastery.com/boosting-and-adaboost-for-machine-learning/

Parameters for XGBoost

Complete Guide to Parameter Tuning in XGBoost (with codes in Python)

reg:linear: simply the squared-error loss function
reg:logistic: the logistic regression loss function; see
https://stats.stackexchange.com/questions/229645/why-there-are-two-different-logistic-loss-formulation-notations/231994#231994
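For reference, the squared loss is L(y, yhat) = (y - yhat)^2, and the logistic loss (for labels y in {0, 1} and predicted probability p) is L(y, p) = -[y*log(p) + (1 - y)*log(1 - p)].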

objective function vs eval_metric

https://stackoverflow.com/questions/34178287/difference-between-objective-and-feval-in-xgboost
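In short, objective sets the loss that training optimizes, while eval_metric only controls what is reported on the evaluation set. A small sketch (the data and parameter values are arbitrary):

    import numpy as np
    import xgboost as xgb

    X = np.random.rand(100, 5)
    y = X.sum(axis=1)
    dtrain = xgb.DMatrix(X, label=y)

    # objective = loss being optimized; eval_metric = what gets reported
    params = {'objective': 'reg:linear', 'eval_metric': 'rmse',
              'eta': 0.1, 'max_depth': 4}
    model = xgb.train(params, dtrain, num_boost_round=50,
                      evals=[(dtrain, 'train')])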

Use neural network (TensorFlow) for regression 2

Alright, I also tried to see what happens if only the features (x) are normalized. The y range in the previous formula is too close to 1, so I changed the formula to spread the y values over a wider range.

Then the calculation was repeated with and without normalizing the y's. The results and the conclusion are obvious: yes, you definitely want to normalize y as well.
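For reference, standardizing both x and y looks roughly like this (the data here is a stand-in, not the post's actual formula):

    import numpy as np

    np.random.seed(0)
    x = np.random.rand(1000, 10)
    y = x.sum(axis=1)

    # Standardize features and target to zero mean and unit std
    x_norm = (x - x.mean(axis=0)) / x.std(axis=0)
    y_mean, y_std = y.mean(), y.std()
    y_norm = (y - y_mean) / y_std
    # Map network outputs back to the original scale afterwards:
    # y_pred = net_output * y_std + y_mean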


Use neural network (TensorFlow) for regression 1

After a discussion with Mengxi Wu, the poor prediction rate may come from the fact that the inputs are not normalized. Mengxi also points out that the initialization of the weights may need to be related to the number of inputs to each hidden neuron. http://stats.stackexchange.com/questions/47590/what-are-good-initial-weights-in-a-neural-network points out that the weights need to be uniformly initialized between -1/sqrt(d) and 1/sqrt(d), where d is the number of inputs to a given neuron.

The first step of the modification is normalizing all inputs so that each has a standard deviation of 1.

The training accuracy is consistently above 96% and is obviously better than the previous results without normalization.


This comparison is repeated for hidden = [20, 30, 40, 50, 60, 80, 100]; the prediction rate converges more tightly with larger numbers of hidden neurons. The same plot is also made without normalization.

Later on, we added weight initialization using the Xavier method. The results are also attached.
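In TensorFlow 1.x this can be written roughly as follows (the layer sizes are examples, not the exact configuration used here):

    import tensorflow as tf

    n_in, n_hidden = 10, 60
    # Xavier (Glorot) initialization scales the weights by the layer size
    W1 = tf.get_variable('W1', shape=[n_in, n_hidden],
                         initializer=tf.contrib.layers.xavier_initializer())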

Use neural network (TensorFlow) for regression 0

This post is not a tutorial, but rather a logbook of what we attempted.

The learning logbook starts with using a neural network to do regression.

The data is manually generated using a very simple formula; initially, we do not add any noise term.
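As a stand-in illustration (this is not the exact formula used in the experiments, only an example of generating y as a simple deterministic function of x):

    import numpy as np

    np.random.seed(0)
    x = np.random.rand(1000, 10)
    # y is a simple function of x; no noise term added
    y = x.sum(axis=1, keepdims=True)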

The accuracy function is below.
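One plausible form for a regression "accuracy" is the fraction of predictions that fall within a relative tolerance of the true values (the 5% threshold here is an assumption):

    import numpy as np

    def accuracy(y_pred, y_true, tol=0.05):
        # Count a prediction as correct when it is within tol of the truth
        rel_err = np.abs(y_pred - y_true) / np.abs(y_true)
        return np.mean(rel_err < tol)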

In the first attempt, a neural network with a single hidden layer is applied.
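A minimal version of such a network in TensorFlow 1.x (the sizes, activation, and learning rate are assumptions):

    import tensorflow as tf

    n_in, n_hidden = 10, 60
    x_ph = tf.placeholder(tf.float32, [None, n_in])
    y_ph = tf.placeholder(tf.float32, [None, 1])

    # Single hidden layer
    W1 = tf.Variable(tf.random_normal([n_in, n_hidden], stddev=0.1))
    b1 = tf.Variable(tf.zeros([n_hidden]))
    h = tf.nn.relu(tf.matmul(x_ph, W1) + b1)

    # Linear output layer for regression
    W2 = tf.Variable(tf.random_normal([n_hidden, 1], stddev=0.1))
    b2 = tf.Variable(tf.zeros([1]))
    y_hat = tf.matmul(h, W2) + b2

    loss = tf.reduce_mean(tf.square(y_hat - y_ph))
    train_op = tf.train.GradientDescentOptimizer(0.01).minimize(loss)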

It is observed that, with 60 hidden neurons and 25,000 training steps, the prediction accuracy fluctuates considerably depending on the initialization values.


Fig 1. Prediction results with 60 hidden neurons, repeated 50 times.