The previous article covered how to write a custom model; this one covers custom layers.
There are two kinds: layers without parameters and layers with parameters.
The core is the same either way: define a class that inherits from nn.Module and implement the layer's computation in its forward function. The difference is that a layer with parameters needs nn.Parameter.

Inheriting directly from nn.Module (a layer without parameters)
import torch
from torch import nn

class CenteredLayer(nn.Module):
    def __init__(self, **kwargs):
        super(CenteredLayer, self).__init__(**kwargs)

    def forward(self, x):
        return x - x.mean()

layer = CenteredLayer()
layer(torch.tensor([1, 2, 3, 4, 5], dtype=torch.float))

net = nn.Sequential(nn.Linear(8, 128), CenteredLayer())
y = net(torch.rand(4, 8))
y.mean().item()
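As a quick sanity check (not part of the original post), the centering layer's output should have a mean numerically close to zero no matter what layer precedes it:

```python
import torch
from torch import nn

# Re-declare the parameterless layer from above so this runs on its own.
class CenteredLayer(nn.Module):
    def forward(self, x):
        return x - x.mean()

net = nn.Sequential(nn.Linear(8, 128), CenteredLayer())
y = net(torch.rand(4, 8))
print(abs(y.mean().item()) < 1e-5)  # True: the output is centered
```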
The Parameter class is in fact a subclass of Tensor: if a Tensor is a Parameter, it is automatically added to the model's parameter list. So when defining a layer that holds model parameters, we should define those parameters as Parameter. Besides defining them directly as Parameter, we can also use ParameterList and ParameterDict to define a list or a dict of parameters, respectively.
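To make the auto-registration concrete, here is a minimal sketch (the class name TwoWeights is made up for illustration): a tensor wrapped in nn.Parameter shows up in the module's parameter list, while a plain tensor attribute does not:

```python
import torch
from torch import nn

class TwoWeights(nn.Module):
    def __init__(self):
        super(TwoWeights, self).__init__()
        self.w = nn.Parameter(torch.randn(3, 3))  # registered as a model parameter
        self.t = torch.randn(3, 3)                # plain tensor: NOT registered

net = TwoWeights()
print([name for name, _ in net.named_parameters()])  # ['w']
```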
ParameterList is used like a Python list.
class MyDense(nn.Module):
    def __init__(self):
        super(MyDense, self).__init__()
        self.params = nn.ParameterList(
            [nn.Parameter(torch.randn(4, 4)) for i in range(4)])
        # append works as on a list; this extra 4x1 parameter is the
        # fifth entry shown in the output below
        self.params.append(nn.Parameter(torch.randn(4, 1)))

    def forward(self, x):
        for i in range(len(self.params)):
            x = torch.mm(x, self.params[i])
        return x

net = MyDense()
print(net)

Output:

MyDense(
  (params): ParameterList(
      (0): Parameter containing: [torch.FloatTensor of size 4x4]
      (1): Parameter containing: [torch.FloatTensor of size 4x4]
      (2): Parameter containing: [torch.FloatTensor of size 4x4]
      (3): Parameter containing: [torch.FloatTensor of size 4x4]
      (4): Parameter containing: [torch.FloatTensor of size 4x1]
  )
)

ParameterDict is used like a Python dict; you can also use .keys() and .items().
class MyDictDense(nn.Module):
    def __init__(self):
        super(MyDictDense, self).__init__()
        self.params = nn.ParameterDict({
            'linear1': nn.Parameter(torch.randn(4, 4)),
            'linear2': nn.Parameter(torch.randn(4, 1))
        })
        self.params.update({'linear3': nn.Parameter(torch.randn(4, 2))})  # add a new entry

    def forward(self, x, choice='linear1'):
        return torch.mm(x, self.params[choice])

net = MyDictDense()
print(net)
print(net.params.keys(), net.params.items())
x = torch.ones(1, 4)
net(x, 'linear1')
Output:

MyDictDense(
  (params): ParameterDict(
      (linear1): Parameter containing: [torch.FloatTensor of size 4x4]
      (linear2): Parameter containing: [torch.FloatTensor of size 4x1]
      (linear3): Parameter containing: [torch.FloatTensor of size 4x2]
  )
)
odict_keys(['linear1', 'linear2', 'linear3']) odict_items([('linear1', Parameter containing:
tensor([[-0.2275, -1.0434, -1.6733, -1.8101],
        [ 1.7530,  0.0729, -0.2314, -1.9430],
        [-0.1399,  0.7093, -0.4628, -0.2244],
        [-1.6363,  1.2004,  1.4415, -0.1364]], requires_grad=True)), ('linear2', Parameter containing:
tensor([[ 0.5035],
        [-0.0171],
        [-0.8580],
        [-1.1064]], requires_grad=True)), ('linear3', Parameter containing:
tensor([[-1.2078,  0.4364],
        [-0.8203,  1.7443],
        [-1.7759,  2.1744],
        [-0.8799, -0.1479]], requires_grad=True))])

Building a model with custom layers
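The point of the dict-of-parameters layer is that the key passed to forward selects which weight matrix is used, so each choice yields a different output shape. A self-contained sketch (the class is repeated so this snippet runs on its own):

```python
import torch
from torch import nn

class MyDictDense(nn.Module):
    def __init__(self):
        super(MyDictDense, self).__init__()
        self.params = nn.ParameterDict({
            'linear1': nn.Parameter(torch.randn(4, 4)),
            'linear2': nn.Parameter(torch.randn(4, 1)),
            'linear3': nn.Parameter(torch.randn(4, 2)),
        })

    def forward(self, x, choice='linear1'):
        # the key picks the weight matrix for this forward pass
        return torch.mm(x, self.params[choice])

net = MyDictDense()
x = torch.ones(1, 4)
print(net(x, 'linear1').shape)  # torch.Size([1, 4])
print(net(x, 'linear2').shape)  # torch.Size([1, 1])
print(net(x, 'linear3').shape)  # torch.Size([1, 2])
```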
layer1 = MyDense()
layer2 = MyDictDense()
net = nn.Sequential(layer2, layer1)
print(net)
print(net(x))

Output:

Sequential(
  (0): MyDictDense(
    (params): ParameterDict(
        (linear1): Parameter containing: [torch.FloatTensor of size 4x4]
        (linear2): Parameter containing: [torch.FloatTensor of size 4x1]
        (linear3): Parameter containing: [torch.FloatTensor of size 4x2]
    )
  )
  (1): MyDense(
    (params): ParameterList(
        (0): Parameter containing: [torch.FloatTensor of size 4x4]
        (1): Parameter containing: [torch.FloatTensor of size 4x4]
        (2): Parameter containing: [torch.FloatTensor of size 4x4]
        (3): Parameter containing: [torch.FloatTensor of size 4x4]
        (4): Parameter containing: [torch.FloatTensor of size 4x1]
    )
  )
)
tensor([[-4.7566]], grad_fn=<MmBackward>)
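Because the parameters of these custom layers are registered automatically, net.parameters() exposes all of them, and a standard optimizer can train the composed model without any extra wiring. A minimal sketch using just MyDense (repeated here so the snippet is self-contained):

```python
import torch
from torch import nn

class MyDense(nn.Module):
    def __init__(self):
        super(MyDense, self).__init__()
        self.params = nn.ParameterList(
            [nn.Parameter(torch.randn(4, 4)) for i in range(4)])
        self.params.append(nn.Parameter(torch.randn(4, 1)))

    def forward(self, x):
        for p in self.params:
            x = torch.mm(x, p)
        return x

net = MyDense()
print(len(list(net.parameters())))  # 5: every ParameterList entry is registered

# One SGD step works out of the box because parameters() sees them all.
opt = torch.optim.SGD(net.parameters(), lr=0.1)
loss = net(torch.ones(1, 4)).sum()
loss.backward()
opt.step()
```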