PyTorch Usage Notes

These notes collect practical experience with PyTorch, covering basic usage, more advanced usage, and implementation details.
Related reading:

- Apply LSTM Using PyTorch
- PyTorch LSTM: The Definitive Guide | cnvrg.io
- Convolution Implementation in PyTorch
- Fourier Convolutions in PyTorch
Setting different learning rates for different parameters in PyTorch
Method 1: pass parameter groups (`param_groups`) to the optimizer

When constructing an optimizer, you can pass a list of dicts instead of a single iterable of parameters. Each dict defines a parameter group with its own options, such as `lr`:

```python
import torch.optim as optim

# Define your model
model = ...

# Create parameter groups, each with its own learning rate.
# A parameter may belong to only one group, so the bias group
# collects all biases and the layer groups exclude them.
params = [
    {'params': [p for n, p in model.layer1.named_parameters() if not n.endswith('bias')], 'lr': 0.01},
    {'params': [p for n, p in model.layer2.named_parameters() if not n.endswith('bias')], 'lr': 0.001},
    {'params': [p for n, p in model.named_parameters() if n.endswith('bias')], 'lr': 0.005},  # all biases share one lr
]

# Create the optimizer with the parameter groups
optimizer = optim.Adam(params)

# Train the model
for epoch in range(num_epochs):
    optimizer.zero_grad()
    output = model(input)
    loss = loss_fn(output, target)
    loss.backward()
    optimizer.step()
```
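To confirm the groups were registered as intended, you can inspect `optimizer.param_groups`, which is a list of dicts, one per group. A minimal sketch, reusing the hypothetical `optimizer` built above:

```python
# Print each group's learning rate and parameter count
for i, group in enumerate(optimizer.param_groups):
    n_params = sum(p.numel() for p in group['params'])
    print(f"group {i}: lr={group['lr']}, {n_params} parameters")
```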
A related trick is to scale the gradients of selected parameters before the optimizer step, which (for plain SGD) has the same effect as raising their learning rate:

```python
for input, target in dataset:
    optimizer.zero_grad()
    output = model(input)
    loss = loss_fn(output, target)
    loss.backward()
    # Scale the gradients of the part of the model you want to
    # update faster; with SGD this is equivalent to a 100x larger lr
    for p in model.layer1.parameters():
        p.grad *= 100
    optimizer.step()
```
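Instead of scaling gradients by hand inside the loop, you can register a tensor hook that does it on every backward pass. A minimal sketch, where the 100x factor and `layer1` are just the illustrative values from above:

```python
# The hook runs each time backward() computes this parameter's
# gradient; the returned tensor replaces the gradient.
for p in model.layer1.parameters():
    p.register_hook(lambda grad: grad * 100)
```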
Method 2: use the `torch.optim.lr_scheduler` module

A scheduler adjusts the learning rate of every parameter group over the course of training; per-group base rates are still set through parameter groups when building the optimizer:

```python
import torch.optim as optim
from torch.optim.lr_scheduler import StepLR

model = TheModelClass(*args, **kwargs)

# model.base uses the default lr (1e-2); model.classifier overrides it with 1e-3
optimizer = optim.SGD([
    {'params': model.base.parameters()},
    {'params': model.classifier.parameters(), 'lr': 1e-3}
], lr=1e-2, momentum=0.9)

# Every 30 epochs, multiply each group's lr by 0.1
scheduler = StepLR(optimizer, step_size=30, gamma=0.1)

for epoch in range(100):
    # train model
    optimizer.zero_grad()
    output = model(input)
    loss = loss_fn(output, target)
    loss.backward()
    optimizer.step()
    # Since PyTorch 1.1, call scheduler.step() after optimizer.step()
    scheduler.step()
```
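`lr_scheduler` can also apply a different schedule to each group: `LambdaLR` accepts one lambda per parameter group. A minimal sketch, assuming the two-group `optimizer` above; the decay factors are arbitrary:

```python
from torch.optim.lr_scheduler import LambdaLR

# One multiplicative factor per group: keep model.base's lr constant,
# decay model.classifier's lr by 5% per epoch
scheduler = LambdaLR(optimizer, lr_lambda=[
    lambda epoch: 1.0,            # group 0: model.base
    lambda epoch: 0.95 ** epoch,  # group 1: model.classifier
])
```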