This series of articles is by @yhl_leo.

Following the 深入MNIST tutorial and Deep MNIST for Experts (on the official English site), the test code and results are as follows:
# load mnist data
import input_data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)

# start a tensorflow InteractiveSession
import tensorflow as tf
sess = tf.InteractiveSession()
# weight initialization
def weight_variable(shape):
    # small positive noise breaks symmetry and avoids dead ReLU neurons
    initial = tf.truncated_normal(shape, stddev=0.1)
    return tf.Variable(initial)

def bias_variable(shape):
    initial = tf.constant(0.1, shape=shape)
    return tf.Variable(initial)
# convolution with stride 1 and zero padding, so the output keeps the input size
def conv2d(x, W):
    return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')

# max pooling over 2x2 blocks, halving each spatial dimension
def max_pool_2x2(x):
    return tf.nn.max_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')
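# [illustrative aside, not from the original post] a quick sanity check of how
# these helpers transform shapes: a 'SAME' convolution preserves the 28x28
# spatial size, and each 2x2 max pool halves it (28 -> 14 -> 7), which is where
# the 7*7*64 flatten size below comes from. the names `probe` and `demo_kernel`
# are mine, not the tutorial's.
probe = tf.placeholder("float", [1, 28, 28, 1])
demo_kernel = weight_variable([5, 5, 1, 32])
print(conv2d(probe, demo_kernel).get_shape())                # (1, 28, 28, 32)
print(max_pool_2x2(conv2d(probe, demo_kernel)).get_shape())  # (1, 14, 14, 32)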
# create the model
# placeholders for the input images and target labels
x = tf.placeholder("float", [None, 784])
y_ = tf.placeholder("float", [None, 10])

# variables of the plain softmax regression model (from the beginner
# tutorial; the convolutional network below replaces it)
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
y = tf.nn.softmax(tf.matmul(x, W) + b)
# first convolutional layer: 32 feature maps from 5x5 patches
W_conv1 = weight_variable([5, 5, 1, 32])
b_conv1 = bias_variable([32])
x_image = tf.reshape(x, [-1, 28, 28, 1])
h_conv1 = tf.nn.relu(conv2d(x_image, W_conv1) + b_conv1)
h_pool1 = max_pool_2x2(h_conv1)

# second convolutional layer: 64 feature maps
W_conv2 = weight_variable([5, 5, 32, 64])
b_conv2 = bias_variable([64])
h_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2)
h_pool2 = max_pool_2x2(h_conv2)
# densely connected layer: the image is down to 7x7 after two pooling steps
W_fc1 = weight_variable([7*7*64, 1024])
b_fc1 = bias_variable([1024])
h_pool2_flat = tf.reshape(h_pool2, [-1, 7*7*64])
h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1)

# dropout: keep_prob is fed as 0.5 during training and 1.0 at evaluation
keep_prob = tf.placeholder("float")
h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)

# readout layer
W_fc2 = weight_variable([1024, 10])
b_fc2 = bias_variable([10])
y_conv = tf.nn.softmax(tf.matmul(h_fc1_drop, W_fc2) + b_fc2)
# train and evaluate the model
cross_entropy = -tf.reduce_sum(y_ * tf.log(y_conv))
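# [illustrative aside, not from the original post] taking the log of an
# explicit softmax can underflow to log(0); later revisions of the official
# tutorial compute the loss from the pre-softmax logits instead, roughly:
#   cross_entropy = tf.reduce_mean(
#       tf.nn.softmax_cross_entropy_with_logits(logits, labels))
# (argument names and order vary across TensorFlow versions)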
train_step = tf.train.AdagradOptimizer(1e-4).minimize(cross_entropy)
correct_prediction = tf.equal(tf.argmax(y_conv, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
sess.run(tf.initialize_all_variables())
for i in range(20000):
    batch = mnist.train.next_batch(50)
    if i % 100 == 0:
        # evaluate on the current batch with dropout disabled
        train_accuracy = accuracy.eval(
            feed_dict={x: batch[0], y_: batch[1], keep_prob: 1.0})
        print("step %d, train accuracy %g" % (i, train_accuracy))
    train_step.run(feed_dict={x: batch[0], y_: batch[1], keep_prob: 0.5})

print("test accuracy %g" % accuracy.eval(
    feed_dict={x: mnist.test.images, y_: mnist.test.labels, keep_prob: 1.0}))
The meaning of each operation is explained fairly clearly in the documentation, so I won't repeat it here. As the result screenshot shows, the trained accuracy is 93.22%, not the 99.2% claimed in the tutorial.

(Some readers suggested making the step size even smaller; after testing, the results were still poor.)
Changing the training optimizer in the code above to the gradient descent algorithm:
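A minimal sketch of the swap (I am assuming the learning rate stays at the 1e-4 used above; the post does not show the exact value):

# swap Adagrad for plain gradient descent (learning rate 1e-4 assumed)
train_step = tf.train.GradientDescentOptimizer(1e-4).minimize(cross_entropy)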
The trained accuracy then becomes 99.25%, consistent with the tutorial's result.