The previous section covered how deep autoencoders work; this time we look at a Python implementation, an autoencoder trained on the MNIST dataset.
from __future__ import division, print_function, absolute_import
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("mnist_data", one_hot=True)
# Training hyperparameters
learning_rate = 0.01
training_epochs = 20
batch_size = 256
display_step = 1
examples_to_show = 10
# Network shape: 784 -> 256 -> 128 -> 256 -> 784
n_hidden_1 = 256  # first hidden layer
n_hidden_2 = 128  # second hidden layer (the bottleneck)
n_input = 784     # MNIST images are 28*28 pixels
x = tf.placeholder("float", [None, n_input])
# Weights and biases for the two encoder and two decoder layers
# (random-normal initialization; the mirrored shapes follow the
# 784 -> 256 -> 128 -> 256 -> 784 architecture above)
weights = {
    'encoder_h1': tf.Variable(tf.random_normal([n_input, n_hidden_1])),
    'encoder_h2': tf.Variable(tf.random_normal([n_hidden_1, n_hidden_2])),
    'decoder_h1': tf.Variable(tf.random_normal([n_hidden_2, n_hidden_1])),
    'decoder_h2': tf.Variable(tf.random_normal([n_hidden_1, n_input])),
}
biases = {
    'encoder_b1': tf.Variable(tf.random_normal([n_hidden_1])),
    'encoder_b2': tf.Variable(tf.random_normal([n_hidden_2])),
    'decoder_b1': tf.Variable(tf.random_normal([n_hidden_1])),
    'decoder_b2': tf.Variable(tf.random_normal([n_input])),
}
def encoder(x):
    # Two fully connected sigmoid layers compress 784 pixels down to 128
    layer_1 = tf.nn.sigmoid(tf.add(tf.matmul(x, weights['encoder_h1']),
                                   biases['encoder_b1']))
    layer_2 = tf.nn.sigmoid(tf.add(tf.matmul(layer_1, weights['encoder_h2']),
                                   biases['encoder_b2']))
    return layer_2
def decoder(x):
    # The decoder mirrors the encoder, expanding 128 units back to 784
    layer_1 = tf.nn.sigmoid(tf.add(tf.matmul(x, weights['decoder_h1']),
                                   biases['decoder_b1']))
    layer_2 = tf.nn.sigmoid(tf.add(tf.matmul(layer_1, weights['decoder_h2']),
                                   biases['decoder_b2']))
    return layer_2
encoder_op = encoder(x)
decoder_op = decoder(encoder_op)
y_pred = decoder_op  # the reconstruction
y_true = x           # the target is the input itself (unsupervised)
# Mean squared reconstruction error
cost = tf.reduce_mean(tf.pow(y_true - y_pred, 2))
optimizer = tf.train.RMSPropOptimizer(learning_rate).minimize(cost)
init = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init)
    total_batch = int(mnist.train.num_examples / batch_size)
    for epoch in range(training_epochs):
        for i in range(total_batch):
            # Labels (batch_ys) are fetched but unused: training is unsupervised
            batch_xs, batch_ys = mnist.train.next_batch(batch_size)
            _, c = sess.run([optimizer, cost], feed_dict={x: batch_xs})
        if epoch % display_step == 0:
            print("epoch:", '%04d' % (epoch + 1),
                  "cost=", "{:.9f}".format(c))
    print("optimization finished!")
    # Reconstruct the first 10 test images and compare them with the originals
    encode_decode = sess.run(
        y_pred, feed_dict={x: mnist.test.images[:examples_to_show]})
    f, a = plt.subplots(2, 10, figsize=(10, 2))
    for i in range(examples_to_show):
        a[0][i].imshow(np.reshape(mnist.test.images[i], (28, 28)))
        a[1][i].imshow(np.reshape(encode_decode[i], (28, 28)))
    f.show()
    plt.draw()
    plt.waitforbuttonpress()
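One caveat: the tensorflow.examples.tutorials.mnist module only ships with TensorFlow 1.x. On newer installs, the same data can be loaded through tf.keras.datasets instead; a minimal sketch follows, where the variable names are illustrative and dividing by 255 reproduces the [0, 1] scaling that read_data_sets applies.

import tensorflow as tf
# Labels are discarded with _ because the autoencoder never uses them
(train_images, _), (test_images, _) = tf.keras.datasets.mnist.load_data()
train_images = train_images.reshape(-1, 784).astype("float32") / 255.0
test_images = test_images.reshape(-1, 784).astype("float32") / 255.0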
After 20 epochs of training, the cost in the output has clearly decreased:
epoch: 0001 cost= 0.196800113
epoch: 0002 cost= 0.169325382
epoch: 0003 cost= 0.155912638
epoch: 0004 cost= 0.148683071
epoch: 0005 cost= 0.142708376
epoch: 0006 cost= 0.136180028
epoch: 0007 cost= 0.130748138
epoch: 0008 cost= 0.125925466
epoch: 0009 cost= 0.122442275
epoch: 0010 cost= 0.117254384
epoch: 0011 cost= 0.114797853
epoch: 0012 cost= 0.112438530
epoch: 0013 cost= 0.109801762
epoch: 0014 cost= 0.107820347
epoch: 0015 cost= 0.105974235
epoch: 0016 cost= 0.105912112
epoch: 0017 cost= 0.104165390
epoch: 0018 cost= 0.100365378
epoch: 0019 cost= 0.100399643
epoch: 0020 cost= 0.099709332
optimization finished!
Looking at the final images, the autoencoded reconstructions are very similar to the original inputs, with just a little extra noise on top. The cost has also dropped below 0.1, and it can be reduced further by tuning the hyperparameters, as sketched below.
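For example, one could train longer with a smaller step size, or change the layer widths. The values below are illustrative guesses, not tuned results:

learning_rate = 0.005   # smaller steps, usually a smoother descent
training_epochs = 50    # more passes over the training set
n_hidden_1 = 512        # wider first hidden layer
n_hidden_2 = 64         # tighter bottleneck, stronger compression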