TensorFlow implements back propagation by declaring an optimization function. Once the optimizer is declared, TensorFlow uses it to work out the back-propagation terms across the computational graph. As data is fed in and the loss function is minimized, TensorFlow adjusts the variables in the graph accordingly.
#-*- coding:utf-8 -*-
import tensorflow as tf
import numpy as np
sess = tf.Session()
x_vals = np.random.normal(1, 0.1, 100)
y_vals = np.repeat(10.0, 100)
x_data = tf.placeholder(shape=[1], dtype=tf.float32)
y_target = tf.placeholder(shape=[1], dtype=tf.float32)
a = tf.Variable(tf.random_normal(shape=[1]))
my_output = tf.multiply(x_data, a)
loss = tf.square(my_output - y_target)
init = tf.global_variables_initializer()
sess.run(init)
# declare the optimizer
my_opt = tf.train.GradientDescentOptimizer(learning_rate=0.02)
train_step = my_opt.minimize(loss)
# training loop: feed one random (x, y) pair per step
for i in range(100):
    rand_index = np.random.choice(100)
    rand_x = [x_vals[rand_index]]
    rand_y = [y_vals[rand_index]]
    sess.run(train_step, feed_dict={x_data: rand_x, y_target: rand_y})
    if (i + 1) % 25 == 0:
        print('step #' + str(i + 1) + ' a = ' + str(sess.run(a)))
        print('loss = ' + str(sess.run(loss, feed_dict={x_data: rand_x, y_target: rand_y})))
#step #25 a = [6.326008]
#loss = [7.8349257]
#step #50 a = [8.514299]
#loss = [2.130918]
#step #75 a = [9.472592]
#loss = [1.8418382]
#step #100 a = [9.754903]
#loss = [0.6592515]
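Under the hood, minimize() is shorthand for two optimizer calls: computing the gradients of the loss with respect to the trainable variables, then applying them as an update. A minimal sketch of the equivalent explicit form (its behavior should match the single minimize() call above):

# Equivalent to my_opt.minimize(loss): get (gradient, variable) pairs,
# then apply the gradient-descent update to each variable.
grads_and_vars = my_opt.compute_gradients(loss)
train_step = my_opt.apply_gradients(grads_and_vars)

The second example below applies the same pattern to binary classification: points drawn from two normal distributions are labeled 0 or 1, and a translation a is learned so that sigmoid(x + a) separates the two clusters.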
#-*- coding:utf-8 -*-
import tensorflow as tf
import numpy as np
sess = tf.Session()
x_vals = np.concatenate((np.random.normal(-1, 1, 50), np.random.normal(3, 1, 50)))
y_vals = np.concatenate((np.repeat(0.0, 50), np.repeat(1.0, 50)))
x_data = tf.placeholder(shape=[1], dtype=tf.float32)
y_target = tf.placeholder(shape=[1], dtype=tf.float32)
a = tf.Variable(tf.random_normal(mean=10, shape=[1]))
my_output = tf.add(x_data, a)
my_output_expanded = tf.expand_dims(my_output, 0)
y_target_expanded = tf.expand_dims(y_target, 0)
init = tf.global_variables_initializer()
sess.run(init)
xentropy = tf.nn.sigmoid_cross_entropy_with_logits(logits=my_output_expanded, labels=y_target_expanded)
my_opt = tf.train.GradientDescentOptimizer(0.05)
train_step = my_opt.minimize(xentropy)
# training loop: feed one random (x, y) pair per step
for i in range(1400):
    rand_index = np.random.choice(100)
    rand_x = [x_vals[rand_index]]
    rand_y = [y_vals[rand_index]]
    sess.run(train_step, feed_dict={x_data: rand_x, y_target: rand_y})
    if (i + 1) % 200 == 0:
        print('step #' + str(i + 1) + ' a = ' + str(sess.run(a)))
        print('loss = ' + str(sess.run(xentropy, feed_dict={x_data: rand_x, y_target: rand_y})))
#step #200 a = [3.1671317]
#loss = [[1.5945358]]
#step #400 a = [0.42701474]
#loss = [[0.25861406]]
#step #600 a = [-0.7353596]
#loss = [[0.30368462]]
#step #800 a = [-1.0829395]
#loss = [[0.1167491]]
#step #1000 a = [-1.1390519]
#loss = [[0.8808856]]
#step #1200 a = [-1.0290167]
#loss = [[0.2420652]]
#step #1400 a = [-1.1319011]
#loss = [[0.23258743]]
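After training, the learned translation can be turned into a class prediction by squashing the logit through a sigmoid and rounding. A minimal sketch, reusing the session and tensors above (the 0.5 threshold implied by tf.round is the conventional choice, not something the code above fixes):

# Sigmoid squashes the logit into (0, 1); rounding yields a 0/1 class label.
prediction = tf.round(tf.nn.sigmoid(my_output))
for test_x in [-1.0, 3.0]:
    print('x =', test_x, 'prediction =', sess.run(prediction, feed_dict={x_data: [test_x]}))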
Learning rate comparison (advantages, disadvantages, and when to use):
Small learning rate: converges more slowly, but the result is more precise; use when the algorithm is unstable.
Large learning rate: converges faster, but the result is less precise; use when the algorithm converges too slowly.
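When adjusting the learning rate alone is not enough, tf.train offers drop-in alternatives to plain gradient descent. A hedged sketch (the learning-rate and momentum values here are illustrative, not tuned):

# Only the optimizer line changes; the rest of the graph stays the same.
my_opt = tf.train.MomentumOptimizer(learning_rate=0.02, momentum=0.9)
# my_opt = tf.train.AdamOptimizer(learning_rate=0.02)  # adaptive alternative
train_step = my_opt.minimize(loss)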