from keras.datasets import mnist
import numpy as np
(x_train, y_train), (x_test, y_test) = mnist.load_data()
print(np.shape(x_train), np.shape(x_test))
(60000, 28, 28) (10000, 28, 28)
# Reshape the 3-D tensor into 2-D arrays (one flattened image per row)
x_train = x_train.reshape(60000, 28*28)
x_test = x_test.reshape(10000, 28*28)
print(np.shape(x_train), np.shape(x_test))
(60000, 784) (10000, 784)
# Cast each value to float32 first: in-place division of the original uint8
# array would fail, and GPUs can use float32 tensors to speed up training
x_train = x_train.astype("float32")
x_test = x_test.astype("float32")
# Scale pixel values from [0, 255] to [0, 1]
x_train /= 255
x_test /= 255
# 1-of-K (one-hot) encoding: for classes [0, 1, 2], y=1 -> (0, 1, 0), y=2 -> (0, 0, 1), y=0 -> (1, 0, 0)
from keras.utils import np_utils
num_classes = 10
y_train = np_utils.to_categorical(y_train, num_classes)
y_test = np_utils.to_categorical(y_test, num_classes)
print(y_train)
[[0. 0. 0. ... 0. 0. 0.]
 [1. 0. 0. ... 0. 0. 0.]
 [0. 0. 0. ... 0. 0. 0.]
 ...
 [0. 0. 0. ... 0. 0. 0.]
 [0. 0. 0. ... 0. 0. 0.]
 [0. 0. 0. ... 0. 1. 0.]]
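As a quick sanity check of the 1-of-K mapping described in the comment above, a toy call (the values here are illustrative and not from the original post) behaves as expected:
from keras.utils import np_utils  # already imported above
# Classes [0, 1, 2] map to identity-like one-hot rows
print(np_utils.to_categorical([0, 1, 2], 3))
# [[1. 0. 0.]
#  [0. 1. 0.]
#  [0. 0. 1.]]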
The network is defined as follows:
# Define the deep learning network with Keras
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import SGD
model = Sequential()
# First hidden layer; input_shape declares the size of the input layer
model.add(Dense(100, activation='sigmoid', input_shape=(784,))) # batch_size = 20, 30, 50, ...
# Second hidden layer
model.add(Dense(100, activation='relu'))
# Output layer: softmax over the num_classes digit classes
model.add(Dense(num_classes, activation='softmax'))
# Loss function, optimizer, and evaluation metric(s) (e.g. accuracy, AUC)
model.compile(loss='categorical_crossentropy', optimizer=SGD(), metrics=['accuracy'])
# Train the model
model.fit(x_train, y_train, batch_size=50, epochs=10, verbose=1, validation_data=(x_test, y_test))
score = model.evaluate(x_test, y_test, verbose=1)
print("test loss: ", score[0]) # 目標函式,損失函式
print("test accuracy: ", score[1]) # 正確率
Results:
Instructions for updating:
Use tf.cast instead.
Train on 60000 samples, validate on 10000 samples
Epoch 1/10
60000/60000 [==============================] - 5s 81us/step - loss: 0.7790 - acc: 0.7981 - val_loss: 0.4146 - val_acc: 0.8900
Epoch 2/10
60000/60000 [==============================] - 3s 52us/step - loss: 0.3786 - acc: 0.8926 - val_loss: 0.3383 - val_acc: 0.9064
Epoch 3/10
60000/60000 [==============================] - 3s 54us/step - loss: 0.3267 - acc: 0.9070 - val_loss: 0.3010 - val_acc: 0.9156
Epoch 4/10
60000/60000 [==============================] - 3s 52us/step - loss: 0.3021 - acc: 0.9120 - val_loss: 0.2824 - val_acc: 0.9176
Epoch 5/10
60000/60000 [==============================] - 3s 53us/step - loss: 0.2856 - acc: 0.9167 - val_loss: 0.2696 - val_acc: 0.9197
Epoch 6/10
60000/60000 [==============================] - 3s 52us/step - loss: 0.2687 - acc: 0.9206 - val_loss: 0.2563 - val_acc: 0.9250
Epoch 7/10
60000/60000 [==============================] - 3s 52us/step - loss: 0.2571 - acc: 0.9241 - val_loss: 0.2679 - val_acc: 0.9238
Epoch 8/10
60000/60000 [==============================] - 3s 52us/step - loss: 0.2458 - acc: 0.9269 - val_loss: 0.2433 - val_acc: 0.9270
Epoch 9/10
60000/60000 [==============================] - 3s 54us/step - loss: 0.2472 - acc: 0.9266 - val_loss: 0.2378 - val_acc: 0.9279
Epoch 10/10
60000/60000 [==============================] - 3s 53us/step - loss: 0.2280 - acc: 0.9322 - val_loss: 0.2294 - val_acc: 0.9320
10000/10000 [==============================] - 0s 32us/step
test loss: 0.22936600434184073
test accuracy: 0.932
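To use the trained model for inference, here is a minimal sketch (not from the original post) that predicts a few test digits and recovers the class labels with np.argmax:
# Predict class probabilities (softmax outputs) for the first 5 test images
probs = model.predict(x_test[:5])            # shape (5, 10)
pred_labels = np.argmax(probs, axis=1)       # most likely digit per image
true_labels = np.argmax(y_test[:5], axis=1)  # undo the one-hot encoding
print("predicted:", pred_labels)
print("true:     ", true_labels)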