The dataset preprocessing steps are omitted.
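For context, here is a minimal sketch of what that preprocessing often looks like for tabular data like this: one-hot encode the categorical columns, standardize the numeric ones. The file name, column names, and the use of sklearn here are my assumptions, not the original steps:

import numpy as np
import pandas as pd
from sklearn import preprocessing

# Hypothetical preprocessing: the file and column names are assumptions
features = pd.read_csv('data.csv')
features = pd.get_dummies(features)             # one-hot encode categorical columns
labels = np.array(features['target'])           # assumed label column
features = features.drop('target', axis=1)
input_features = preprocessing.StandardScaler().fit_transform(np.array(features))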
import numpy as np
import torch

# Convert to tensors
x = torch.tensor(input_features, dtype=float)
y = torch.tensor(labels, dtype=float)

# Initialize the weight parameters and design the network structure; the input is 348*14
weights = torch.randn((14, 128), dtype=float, requires_grad=True)    # w1: maps the 14 features to 128 hidden units
biases = torch.randn(128, dtype=float, requires_grad=True)
weights2 = torch.randn((128, 1), dtype=float, requires_grad=True)    # the result is a single value
biases2 = torch.randn(1, dtype=float, requires_grad=True)

learning_rate = 0.001
losses = []
for i in range(1000):
    # Compute the hidden layer
    hidden = x.mm(weights) + biases    # mm: matrix multiplication
    # Apply the activation function
    hidden = torch.relu(hidden)
    # Compute the predictions
    predictions = hidden.mm(weights2) + biases2
    # Compute the loss (mean squared error)
    loss = torch.mean((predictions - y) ** 2)
    losses.append(loss.data.numpy())   # record the loss in the list initialized above
    # Print the loss value
    if i % 100 == 0:
        print('loss:', loss)
    # Backpropagation
    loss.backward()
    # Update the parameters
    weights.data.add_(-learning_rate * weights.grad.data)
    biases.data.add_(-learning_rate * biases.grad.data)
    weights2.data.add_(-learning_rate * weights2.grad.data)
    biases2.data.add_(-learning_rate * biases2.grad.data)
    # Remember to zero the gradients on every iteration
    weights.grad.data.zero_()
    biases.grad.data.zero_()
    weights2.grad.data.zero_()
    biases2.grad.data.zero_()
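Zeroing the gradients matters because .backward() accumulates into .grad instead of overwriting it. A tiny standalone sketch of that behavior:

import torch

w = torch.ones(1, requires_grad=True)
(2 * w).sum().backward()
print(w.grad)   # tensor([2.])
(2 * w).sum().backward()
print(w.grad)   # tensor([4.]) -- the second backward added onto the first
w.grad.data.zero_()
print(w.grad)   # tensor([0.])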
loss: tensor(8347.9924, dtype=torch.float64, grad_fn=<MeanBackward0>)
loss: tensor(152.3170, dtype=torch.float64, grad_fn=<MeanBackward0>)
loss: tensor(145.9625, dtype=torch.float64, grad_fn=<MeanBackward0>)
loss: tensor(143.9453, dtype=torch.float64, grad_fn=<MeanBackward0>)
loss: tensor(142.8161, dtype=torch.float64, grad_fn=<MeanBackward0>)
loss: tensor(142.0664, dtype=torch.float64, grad_fn=<MeanBackward0>)
loss: tensor(141.5386, dtype=torch.float64, grad_fn=<MeanBackward0>)
loss: tensor(141.1528, dtype=torch.float64, grad_fn=<MeanBackward0>)
loss: tensor(140.8618, dtype=torch.float64, grad_fn=<MeanBackward0>)
loss: tensor(140.6318, dtype=torch.float64, grad_fn=<MeanBackward0>)
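Since losses is appended to on every iteration, plotting it gives a clearer picture of convergence than the printouts. A minimal sketch, assuming matplotlib is installed:

import matplotlib.pyplot as plt

plt.plot(losses)
plt.xlabel('iteration')
plt.ylabel('MSE loss')
plt.show()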
input_size = input_features.shape[1]
hidden_size = 128
output_size = 1
batch_size = 16

my_nn = torch.nn.Sequential(
    torch.nn.Linear(input_size, hidden_size),
    torch.nn.Sigmoid(),
    torch.nn.Linear(hidden_size, output_size),
)
cost = torch.nn.MSELoss(reduction='mean')
optimizer = torch.optim.Adam(my_nn.parameters(), lr=0.001)
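To double-check the architecture before training, you can simply print the module; with the sizes above it should show something like:

print(my_nn)
# Sequential(
#   (0): Linear(in_features=14, out_features=128, bias=True)
#   (1): Sigmoid()
#   (2): Linear(in_features=128, out_features=1, bias=True)
# )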
# Train the network
losses = []
for i in range(1000):
    batch_loss = []
    # Train with the mini-batch method
    for start in range(0, len(input_features), batch_size):
        end = start + batch_size if start + batch_size < len(input_features) else len(input_features)
        xx = torch.tensor(input_features[start:end], dtype=torch.float, requires_grad=True)
        yy = torch.tensor(labels[start:end], dtype=torch.float, requires_grad=True)
        prediction = my_nn(xx)
        loss = cost(prediction, yy)
        optimizer.zero_grad()
        loss.backward(retain_graph=True)
        optimizer.step()
        batch_loss.append(loss.data.numpy())
    # Print the loss
    if i % 100 == 0:
        losses.append(np.mean(batch_loss))
        print(i, np.mean(batch_loss))
0 3950.7627
100 37.9201
200 35.654438
300 35.278366
400 35.116814
500 34.986076
600 34.868954
700 34.75414
800 34.637356
900 34.516705
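The manual start/end slicing above works, but the same mini-batching is usually written with torch.utils.data.TensorDataset and DataLoader, which also shuffle the batches. A minimal sketch of that alternative, reusing the same my_nn, cost, and optimizer:

import torch
from torch.utils.data import TensorDataset, DataLoader

dataset = TensorDataset(torch.tensor(input_features, dtype=torch.float),
                        torch.tensor(labels, dtype=torch.float))
loader = DataLoader(dataset, batch_size=16, shuffle=True)

for i in range(1000):
    for xx, yy in loader:
        prediction = my_nn(xx)
        loss = cost(prediction, yy)
        optimizer.zero_grad()
        loss.backward()     # fresh tensors each batch, so retain_graph is not needed
        optimizer.step()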
x = torch.tensor(input_features, dtype=torch.float)
predict = my_nn(x).data.numpy()
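As a quick sanity check on the trained model, you can compare predict against the original labels (assuming labels is a 1-D numpy array):

import numpy as np

final_mse = np.mean((predict.reshape(-1) - labels) ** 2)
print('final MSE on the full data:', final_mse)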