Create a "boston" folder inside it:
Download the dataset (CSV file) from here: https://github.com/shunakanishi/keras_boston_dataset
Move "housing.csv" into the "boston" folder:
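For example, from the shared folder (a sketch; it assumes the ./data layout that boston.py expects below):
$ mkdir -p data/boston
$ mv housing.csv data/boston/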
The data inside looks like this:
CRIM | ZN | INDUS | CHAS | NOX | RM | AGE | DIS | RAD | TAX | PTRATIO | B | LSTAT | MEDV |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0.00632 | 18 | 2.31 | 0 | 0.538 | 6.575 | 65.2 | 4.0900 | 1 | 296 | 15.3 | 396.90 | 4.98 | 24.0 |
0.02731 | 0 | 7.07 | 0 | 0.469 | 6.421 | 78.9 | 4.9671 | 2 | 242 | 17.8 | 396.90 | 9.14 | 21.6 |
0.02729 | 0 | 7.07 | 0 | 0.469 | 7.185 | 61.1 | 4.9671 | 2 | 242 | 17.8 | 392.83 | 4.03 | 34.7 |
0.03237 | 0 | 2.18 | 0 | 0.458 | 6.998 | 45.8 | 6.0622 | 3 | 222 | 18.7 | 394.63 | 2.94 | 33.4 |
0.06905 | 0 | 2.18 | 0 | 0.458 | 7.147 | 54.2 | 6.0622 | 3 | 222 | 18.7 | 396.90 | 5.33 | 36.2 |
Data cited from: http://neupy.com/2015/07/04/boston_house_prices_dataset.html
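Before training, you can sanity-check the file with pandas. This is a minimal sketch; the column names come from the table above, and it assumes the same whitespace-delimited layout that boston.py uses below:
import pandas as pd
cols = ['CRIM', 'ZN', 'INDUS', 'CHAS', 'NOX', 'RM', 'AGE',
        'DIS', 'RAD', 'TAX', 'PTRATIO', 'B', 'LSTAT', 'MEDV']
df = pd.read_csv('./data/boston/housing.csv', delim_whitespace=True,
                 header=None, names=cols)
print(df.shape)   # the full Boston housing dataset has 506 rows and 14 columns
print(df.head())  # should match the five rows shown above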
Now create "boston.py" in the shared folder and write the following in it:
import pandas as pd
import numpy as np

# Read the dataset into X and Y
df = pd.read_csv('./data/boston/housing.csv', delim_whitespace=True, header=None)
dataset = df.values
X = dataset[:, 0:13]   # 13 input features (CRIM ... LSTAT)
Y = dataset[:, 13]     # target: MEDV (median home value)

# Define the neural network
from keras.models import Sequential
from keras.layers import Dense

def build_nn():
    model = Sequential()
    model.add(Dense(20, input_dim=13, activation='relu', kernel_initializer='normal'))
    # No activation needed in the output layer (this is a regression task)
    model.add(Dense(1, kernel_initializer='normal'))
    # Compile the model
    model.compile(loss='mean_squared_error', optimizer='adam')
    return model

# Evaluate the model (k-fold cross-validation)
from keras.wrappers.scikit_learn import KerasRegressor
# sklearn imports (sklearn.cross_validation was removed; use sklearn.model_selection):
from sklearn.model_selection import cross_val_score, KFold
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import Pipeline

# Standardise the dataset before feeding it to the neural network,
# because the input variables all vary in scale
estimators = []
estimators.append(('standardise', StandardScaler()))
estimators.append(('multiLayerPerceptron', KerasRegressor(build_fn=build_nn, epochs=100, batch_size=32, verbose=1)))
pipeline = Pipeline(estimators)
kfold = KFold(n_splits=10)
results = cross_val_score(pipeline, X, Y, cv=kfold)  # one score per fold
print()
print("Mean: ", results.mean())
print("StdDev: ", results.std())
Run this command:
$ sudo python3.5 boston.py
Deep learning training will start on the data from housing.csv:
Mean: 478.48
StdDev: 258.5499
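Since the model is compiled with a mean-squared-error loss, these fold scores should be on the MSE scale (assuming your wrapper version reports the raw loss). To put the mean in the units of MEDV:
RMSE ≈ √478.48 ≈ 21.9
which is on the same order as the MEDV values in the table above, so there is clearly room for more epochs or tuning.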