DeepLearnToolbox: a MATLAB toolbox for deep learning.
nn/    - a library for feedforward backpropagation neural networks
cnn/   - a library for convolutional neural networks
dbn/   - a library for deep belief networks
sae/   - a library for stacked auto-encoders
cae/   - a library for convolutional auto-encoders
util/  - utility functions used by the libraries
data/  - data used by the examples
tests/ - unit tests to verify the toolbox is working
For references on each library, check refs.md.
Setup: download the toolbox, then add it to the MATLAB path:
addpath(genpath('DeepLearnToolbox'));
On Windows, just adding the folders to the path is enough; for example:
% lifeiteng
path = pwd;
files = dir(path);
for i = 1:length(files)
    % add every subdirectory of the current folder to the path (skip '.' and '..')
    if files(i).isdir && ~strcmp(files(i).name, '.') && ~strcmp(files(i).name, '..')
        file = files(i).name;
        addpath([path '/' file])
        disp(['add ' file ' to path!'])
    end
end
I am not going to dissect the source code line by line; trying to learn the algorithms from source code is a stupid way to go about it. There are the corresponding papers, reading lists, talks, etc. for that.
DeepLearnToolbox's optimization strategy for the single-hidden-layer NN: mini-batch SGD.
function [nn, L] = nntrain(nn, train_x, train_y, opts, val_x, val_y)
%NNTRAIN trains a neural net
% [nn, L] = nntrain(nn, x, y, opts) trains the neural network nn with input x and
% output y for opts.numepochs epochs, with minibatches of size
% opts.batchsize. Returns a neural network nn with updated activations,
% errors, weights and biases, (nn.a, nn.e, nn.W, nn.b) and L, the sum
% squared error for each training minibatch.
assert(isfloat(train_x), 'train_x must be a float');
assert(nargin == 4 || nargin == 6, 'number of input arguments must be 4 or 6')
loss.train.e      = [];
loss.train.e_frac = [];
loss.val.e        = [];
loss.val.e_frac   = [];
opts.validation = 0;
if nargin == 6
    opts.validation = 1;
end
fhandle = [];
if isfield(opts, 'plot') && opts.plot == 1
    fhandle = figure();
end
m = size(train_x, 1);
batchsize = opts.batchsize;
numepochs = opts.numepochs;
numbatches = m / batchsize;
assert(rem(numbatches, 1) == 0, 'numbatches must be an integer');
L = zeros(numepochs * numbatches, 1);
n = 1;
for i = 1 : numepochs
    tic;
    kk = randperm(m);
    for l = 1 : numbatches
        batch_x = train_x(kk((l - 1) * batchsize + 1 : l * batchsize), :);
        % add noise to input (for use in denoising autoencoder)
        if (nn.inputZeroMaskedFraction ~= 0)
            batch_x = batch_x .* (rand(size(batch_x)) > nn.inputZeroMaskedFraction);
        end
        batch_y = train_y(kk((l - 1) * batchsize + 1 : l * batchsize), :);
        nn = nnff(nn, batch_x, batch_y);   % forward pass
        nn = nnbp(nn);                     % backpropagation (computes gradients)
        nn = nnapplygrads(nn);             % apply the gradient update
        L(n) = nn.L;
        n = n + 1;
    end
    t = toc;
    if ishandle(fhandle)
        if opts.validation == 1
            loss = nneval(nn, loss, train_x, train_y, val_x, val_y);
        else
            loss = nneval(nn, loss, train_x, train_y);
        end
        nnupdatefigures(nn, fhandle, loss, opts, i);
    end
    disp(['epoch ' num2str(i) '/' num2str(opts.numepochs) '. took ' num2str(t) ' seconds' ...
          '. mean squared error on training set is ' num2str(mean(L((n - numbatches):(n - 1))))]);
    nn.learningRate = nn.learningRate * nn.scaling_learningRate;
end
end
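For reference, this is how nntrain is typically driven (a minimal sketch following the toolbox's own example scripts; the 784-100-10 architecture and 20 epochs are just illustrative choices for MNIST):
% minimal usage sketch for the single-hidden-layer net
% assumes train_x is N x 784 with values in [0,1] and train_y is N x 10 one-hot
nn = nnsetup([784 100 10]);    % 784 inputs, 100 hidden units, 10 outputs
opts.numepochs = 20;           % passes over the training set
opts.batchsize = 100;          % must divide N exactly (see the assert in nntrain)
[nn, L] = nntrain(nn, train_x, train_y, opts);
[er, bad] = nntest(nn, test_x, test_y);   % classification error rate on held-out data
disp(['test error: ' num2str(er)]);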
1. Whether in nntrain or elsewhere, there is no convergence check; all you can do is watch the trend of the training error. Small jitter is normal for SGD, and most of the time the error is still going down (I usually set the number of epochs to 10-40, which is probably on the small side; as I recall, the code for Hinton's 2006 Science paper ran 200 epochs, and I could not finish it after 3 days of running). I did not see any convergence check in sae/cnn etc. either.
2. cae is not finished.
3. Dropout is also available as an optimization strategy.
I tested sae, cnn, etc.; with a few more epochs (20-30), accuracy on MNIST is around 97%+.
Actually the cost function admits different choices. If you used UFLDL's optimization approach (a fixed optimization method that is handed a function handle to the cost function), you would have much more freedom to change the cost function; see the sketch below.
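To illustrate that calling pattern (only a sketch, not DeepLearnToolbox code): the cost function returns (cost, gradient) for a flattened parameter vector, and an off-the-shelf optimizer such as minFunc, the third-party L-BFGS package used in the UFLDL exercises, is given a handle to it. The names nncost_mse, nncost_xent, initializeParameters, visibleSize, hiddenSize and data below are hypothetical placeholders.
% UFLDL-style optimization: the optimizer only sees a (cost, grad) handle,
% so changing the cost function means changing the handle and nothing else
theta0 = initializeParameters(hiddenSize, visibleSize);   % hypothetical initializer
options.Method  = 'lbfgs';    % minFunc option: use L-BFGS
options.maxIter = 400;        % iteration budget
[opttheta, cost] = minFunc(@(theta) nncost_mse(theta, visibleSize, hiddenSize, data), ...
                           theta0, options);
% swapping to a cross-entropy cost would only change the handle:
% [opttheta, cost] = minFunc(@(theta) nncost_xent(theta, visibleSize, hiddenSize, data), ...
%                            theta0, options);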
Possible improvements:
1. Add a convergence check to the mini-batch SGD algorithm (a sketch follows this list)
2. Add L-BFGS / CG and other optimization algorithms
3. Finish cae, etc.
4. Add a sparse autoencoder with a minimum KL-divergence penalty, etc.
5. Make the optimization algorithms support different cost functions
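On point 1, a simple form of convergence check (my own sketch, not toolbox code; the tolerance and the epoch-mean criterion are arbitrary choices) is to stop early once the mean mini-batch error of an epoch stops improving by more than a relative tolerance. The fragment below would replace the plain epoch loop in nntrain and reuses its numepochs, numbatches, L and n variables:
% early stopping on the epoch-mean mini-batch error (hypothetical change to nntrain)
tol = 1e-4;                    % relative improvement threshold, chosen arbitrarily
prev_epoch_err = inf;
for i = 1 : numepochs
    % ... the mini-batch loop shown above runs here, filling L and advancing n ...
    epoch_err = mean(L((n - numbatches):(n - 1)));
    if (prev_epoch_err - epoch_err) / prev_epoch_err < tol
        disp(['converged after ' num2str(i) ' epochs']);
        break;                 % stop instead of always running all numepochs
    end
    prev_epoch_err = epoch_err;
end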