import tensorflow as tf
import numpy as np
def add_layer(inputs, in_size, out_size, activation_function=None):
    # one fully connected layer: outputs = activation(inputs * W + b)
    weights = tf.Variable(tf.random_normal([in_size, out_size]))
    biases = tf.Variable(tf.zeros([1, out_size]))
    wx_plus_b = tf.matmul(inputs, weights) + biases
    # the bias row is broadcast across every row of the batch
    if activation_function is None:
        outputs = wx_plus_b
    else:
        outputs = activation_function(wx_plus_b)
    return outputs
x_data = np.linspace(-1,1,300)[:,np.newaxis]
noise = np.random.normal(0, 0.05, x_data.shape)
y_data = np.square(x_data) - 0.5 + noise
xs = tf.placeholder(tf.float32, [None, 1])
ys = tf.placeholder(tf.float32, [None, 1])
l1 = add_layer(xs, 1, 10, activation_function=tf.nn.relu)
prediction = add_layer(l1, 10, 1, activation_function=None)
# loss = tf.reduce_mean(tf.reduce_sum(tf.square(ys - prediction), reduction_indices=[1]))  # equivalent to the line below
loss = tf.reduce_mean(tf.square(ys - prediction))
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)
for i in range(1000):
    sess.run(train_step, feed_dict={xs: x_data, ys: y_data})
    if i % 20 == 0:
        print(i, sess.run(loss, feed_dict={xs: x_data, ys: y_data}))
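Once training has finished, the same session can also be used to evaluate the fitted curve on new inputs. A minimal sketch continuing from the script above (the name x_test and the sample points are just illustrative, not part of the original code):

# query the trained network inside the still-open session
x_test = np.array([[-0.5], [0.0], [0.5]])            # shape [3, 1], matches the xs placeholder
y_pred = sess.run(prediction, feed_dict={xs: x_test})
print(y_pred)                                         # values should lie close to x**2 - 0.5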
The behaviour of the reduction_indices argument in tf.reduce_mean, tf.reduce_sum and similar functions is easy to see here: when the argument is not set, its default value is None and the tensor is reduced to 0 dimensions, i.e. to a single number.
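A minimal sketch of that behaviour, assuming the same TF1 session API as above (the tensor m and the variable names are only illustrative):

# reduction_indices is the TF1 alias for axis
m = tf.constant([[1., 2.], [3., 4.]])
total   = tf.reduce_sum(m)                            # no axis given -> scalar 10.0
by_row  = tf.reduce_sum(m, reduction_indices=[1])     # per-row sums [3., 7.]
row_avg = tf.reduce_mean(by_row)                      # mean of the per-row sums -> 5.0
with tf.Session() as demo_sess:
    print(demo_sess.run([total, by_row, row_avg]))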