New optimizer
(A new learning strategy can also be added as a learning rate scheduler (updater); see the last section below.) For a reference implementation, see CopyOfSGD.
Example: mmaction/core/optimizer/my_optimizer.py:
from .registry import OPTIMIZERS
from torch.optim import Optimizer

@OPTIMIZERS.register_module()
class MyOptimizer(Optimizer):

    def __init__(self, a, b, c):
        pass
In the module's __init__.py, add from .my_optimizer import MyOptimizer, so that the registry will find the new module and add it:
In the config file, write:
optimizer = dict(type='SGD', lr=0.02, momentum=0.9, weight_decay=0.0001)
optimizer = dict(type='Adam', lr=0.0003, weight_decay=0.0001)
optimizer = dict(type='MyOptimizer', a=a_value, b=b_value, c=c_value)
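The skeleton above only shows __init__; a working optimizer also needs a step() method. Below is a minimal sketch, assuming a toy update rule in which a acts as the learning rate (a, b, c are purely illustrative hyper-parameters, as in the skeleton):
import torch
from torch.optim import Optimizer

from .registry import OPTIMIZERS

@OPTIMIZERS.register_module()
class MyOptimizer(Optimizer):
    """Toy optimizer: a acts as the learning rate; b, c are unused placeholders."""

    def __init__(self, params, a=0.01, b=0.0, c=0.0):
        defaults = dict(a=a, b=b, c=c)
        super().__init__(params, defaults)

    @torch.no_grad()
    def step(self, closure=None):
        loss = closure() if closure is not None else None
        for group in self.param_groups:
            for p in group['params']:
                if p.grad is not None:
                    # plain gradient descent scaled by a (illustrative only)
                    p.add_(p.grad, alpha=-group['a'])
        return loss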
For fine-grained parameter settings, e.g. applying weight decay only to BN layers, you need to write an optimizer constructor that inherits from DefaultOptimizerConstructor and overrides the add_params(self, params, module) method.
TSM's optimizer constructor is an example: TSMOptimizerConstructor.
To customize your own, create mmaction/core/optimizer/my_optimizer_constructor.py:
from mmcv.runner import OPTIMIZER_BUILDERS, DefaultOptimizerConstructor

@OPTIMIZER_BUILDERS.register_module()
class MyOptimizerConstructor(DefaultOptimizerConstructor):
    pass

In mmaction/core/optimizer/__init__.py, add from .my_optimizer_constructor import MyOptimizerConstructor.
Add to the config:
# optimizer
optimizer = dict(
    type='SGD',
    constructor='MyOptimizerConstructor',
    paramwise_cfg=dict(fc_lr5=True),
    lr=0.02,
    momentum=0.9,
    weight_decay=0.0001)
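The skeleton above leaves the class body empty. As a minimal sketch of what add_params could look like, assuming (per the motivation above) that BN parameters should get their own weight decay value, set via a hypothetical bn_weight_decay key in paramwise_cfg (neither the key nor this body is part of MMAction2):
import torch.nn as nn
from mmcv.runner import OPTIMIZER_BUILDERS, DefaultOptimizerConstructor

@OPTIMIZER_BUILDERS.register_module()
class MyOptimizerConstructor(DefaultOptimizerConstructor):

    def add_params(self, params, module):
        # give BN parameters a dedicated weight decay, leave the rest on the
        # optimizer's default
        bn_decay = self.paramwise_cfg.get('bn_weight_decay', 0.)  # hypothetical key
        for param in module.parameters(recurse=False):
            if not param.requires_grad:
                continue
            group = {'params': [param]}
            if isinstance(module, nn.modules.batchnorm._BatchNorm):
                group['weight_decay'] = bn_decay
            params.append(group)
        for child in module.children():
            self.add_params(params, child)  # walk the whole model recursively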
New backbone
Create mmaction/models/backbones/resnet.py:
import torch.nn as nn

from ..registry import BACKBONES

@BACKBONES.register_module()
class ResNet(nn.Module):

    def __init__(self, arg1, arg2):
        pass

    def forward(self, x):  # should return a tuple
        pass

    def init_weights(self, pretrained=None):
        pass
In mmaction/models/backbones/__init__.py, add from .resnet import ResNet.
In the config, write:
model = dict(
    ...
    backbone=dict(
        type='ResNet',
        arg1=***,
        arg2=***),
)
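To make the skeleton concrete, here is a minimal runnable sketch (the name and layer layout are made up for illustration; a real ResNet is much more involved):
import torch.nn as nn

from ..registry import BACKBONES

@BACKBONES.register_module()
class TinyBackbone(nn.Module):  # hypothetical example, not part of MMAction2

    def __init__(self, in_channels=3, base_channels=64):
        super().__init__()
        self.conv1 = nn.Conv2d(in_channels, base_channels, 3, stride=2, padding=1)
        self.conv2 = nn.Conv2d(base_channels, base_channels * 2, 3, stride=2, padding=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x1 = self.relu(self.conv1(x))
        x2 = self.relu(self.conv2(x1))
        return (x1, x2)  # multi-level features, returned as a tuple

    def init_weights(self, pretrained=None):
        if pretrained is None:  # no checkpoint given: random init
            for m in self.modules():
                if isinstance(m, nn.Conv2d):
                    nn.init.kaiming_normal_(m.weight)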
New head
Create mmaction/models/heads/tsn_head.py, inherit from BaseHead, and override init_weights(self) and forward(self, x):
from ..registry import HEADS
from .base import BaseHead

@HEADS.register_module()
class TSNHead(BaseHead):

    def __init__(self, arg1, arg2):
        pass

    def forward(self, x):
        pass

    def init_weights(self):
        pass
In mmaction/models/heads/__init__.py, add from .tsn_head import TSNHead.
In the config, write:
model = dict(
    ...
    cls_head=dict(
        type='TSNHead',
        num_classes=400,
        in_channels=2048,
        arg1=***,
        arg2=***),
)
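As a concrete illustration of the head skeleton, a minimal classification head might pool the backbone features and apply a linear classifier. This is only a sketch, not the actual TSNHead, and it assumes BaseHead's constructor takes num_classes and in_channels:
import torch.nn as nn

from ..registry import HEADS
from .base import BaseHead

@HEADS.register_module()
class MySimpleHead(BaseHead):  # hypothetical name

    def __init__(self, num_classes, in_channels, **kwargs):
        super().__init__(num_classes, in_channels, **kwargs)
        self.avg_pool = nn.AdaptiveAvgPool2d(1)
        self.fc_cls = nn.Linear(in_channels, num_classes)

    def init_weights(self):
        nn.init.normal_(self.fc_cls.weight, 0, 0.01)
        nn.init.constant_(self.fc_cls.bias, 0)

    def forward(self, x):
        # x: [N, in_channels, H, W] -> class scores [N, num_classes]
        x = self.avg_pool(x).flatten(1)
        return self.fc_cls(x)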
New loss
Create mmaction/models/losses/my_loss.py:
import torch
import torch.nn as nn
from ..builder import LOSSES

def my_loss(pred, target):
    assert pred.size() == target.size() and target.numel() > 0
    loss = torch.abs(pred - target)
    return loss

@LOSSES.register_module()
class MyLoss(nn.Module):

    def forward(self, pred, target):
        loss = my_loss(pred, target)
        return loss
In mmaction/models/losses/__init__.py, add from .my_loss import MyLoss, my_loss.
In the config, write loss_bbox=dict(type='MyLoss').
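A quick sanity check of my_loss (it is an element-wise L1 distance):
import torch

pred = torch.tensor([0.5, 1.0])
target = torch.tensor([0.0, 2.0])
print(my_loss(pred, target))  # tensor([0.5000, 1.0000])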
New learning rate scheduler (updater)
By default, the learning rate policy is set through the config, e.g.:
lr_config = dict(policy='step', step=[20, 40])
To add a new one:
1. In $MMAction2/mmaction/core/lr, write an LrUpdaterHook, inheriting from mmcv's LrUpdaterHook:
from mmcv.runner import HOOKS, LrUpdaterHook

@HOOKS.register_module()  # register it here
class RelativeStepLrUpdaterHook(LrUpdaterHook):
    # You should inherit it from mmcv's LrUpdaterHook

    def __init__(self, runner, steps, lrs, **kwargs):
        super().__init__(**kwargs)
        assert len(steps) == len(lrs)
        self.steps = steps
        self.lrs = lrs

    def get_lr(self, runner, base_lr):
        # Only this function needs to be overridden.
        # It is called before each training epoch; return the specific
        # learning rate here.
        progress = runner.epoch if self.by_epoch else runner.iter
        for i in range(len(self.steps)):
            if progress < self.steps[i]:
                return self.lrs[i]
In the config, write:
lr_config = dict(policy='RelativeStep', steps=[20, 40, 60], lrs=[0.1, 0.01, 0.001])
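With steps=[20, 40, 60] and lrs=[0.1, 0.01, 0.001], epochs 0-19 train at 0.1, epochs 20-39 at 0.01, and epochs 40-59 at 0.001. A pure-Python rehearsal of the get_lr logic above:
steps, lrs = [20, 40, 60], [0.1, 0.01, 0.001]

def lr_at(epoch):
    # mirrors RelativeStepLrUpdaterHook.get_lr
    for step, lr in zip(steps, lrs):
        if epoch < step:
            return lr

print([lr_at(e) for e in (0, 19, 20, 39, 59)])  # [0.1, 0.1, 0.01, 0.01, 0.001]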