WarmUpLR
class torch.optim.lr_scheduler.WarmUpLR(optimizer, warmup_factor=0.3333333333333333, warmup_iters=5, warmup_method='linear', last_epoch=-1, verbose=False)

Decays the learning rate of each parameter group by either a small constant or a linearly increasing warmup factor until the number of epochs reaches a pre-defined milestone: warmup_iters. Notice that such decay can happen simultaneously with other changes to the learning rate from outside this scheduler. When last_epoch=-1, sets initial lr as lr.
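The factor applied to the base learning rate at each epoch can be sketched as follows. This is an illustrative helper, not the scheduler's actual code: warmup_lr_factor is a hypothetical name, and the linear rule is inferred from the Example below.

>>> # Hypothetical helper (not part of torch): reproduces the factor the
>>> # scheduler multiplies the base lr by at a given epoch, per the rules above.
>>> def warmup_lr_factor(epoch, warmup_factor=1./3, warmup_iters=5, warmup_method="linear"):
>>>     if epoch >= warmup_iters:
>>>         return 1.0  # warmup finished: base lr is used unchanged
>>>     if warmup_method == "constant":
>>>         return warmup_factor  # same small factor for every warmup epoch
>>>     alpha = epoch / warmup_iters  # linear: interpolate from warmup_factor to 1.
>>>     return warmup_factor * (1 - alpha) + alpha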
Parameters
optimizer (Optimizer) – Wrapped optimizer.
warmup_factor (float) – The number we multiply the learning rate by in the first epoch. If the warmup method is constant, the multiplication factor of the learning rate stays the same in all warmup epochs, but in the linear case it starts increasing in the following epochs. Default: 1./3.
warmup_iters (int) – The number of warming up steps. Default: 5.
warmup_method (str) – One of constant or linear. In constant mode, the learning rate is multiplied by a small constant until the milestone defined by warmup_iters. In the linear case, the multiplication factor starts at warmup_factor in the first epoch and then increases linearly to reach 1. at epoch warmup_iters. Default: linear.
last_epoch (int) – The index of the last epoch. Default: -1.
verbose (bool) – If True, prints a message to stdout for each update. Default: False.
Example

>>> # Assuming optimizer uses lr = 0.05 for all groups
>>> # lr = 0.025   if epoch == 0
>>> # lr = 0.03125 if epoch == 1
>>> # lr = 0.0375  if epoch == 2
>>> # lr = 0.04375 if epoch == 3
>>> # lr = 0.05    if epoch >= 4
>>> scheduler = WarmUpLR(self.opt, warmup_factor=0.5, warmup_iters=4, warmup_method="linear")
>>> for epoch in range(100):
>>>     train(...)
>>>     validate(...)
>>>     scheduler.step()
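For comparison, a sketch of constant mode under the same assumptions (base lr of 0.05; parameter values are illustrative):

>>> # Assuming optimizer uses lr = 0.05 for all groups
>>> # lr = 0.025 if epoch < 4   (0.05 * warmup_factor)
>>> # lr = 0.05  if epoch >= 4
>>> scheduler = WarmUpLR(optimizer, warmup_factor=0.5, warmup_iters=4, warmup_method="constant")
>>> for epoch in range(100):
>>>     train(...)
>>>     validate(...)
>>>     scheduler.step()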
get_last_lr()
Return last computed learning rate by current scheduler.
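A brief usage sketch; the return value is a list with one entry per parameter group (the 0.025 here assumes the linear example above at epoch 0):

>>> scheduler.get_last_lr()
[0.025]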
load_state_dict(state_dict)
Loads the scheduler's state.

Parameters
state_dict (dict) – scheduler state. Should be an object returned from a call to state_dict().
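A minimal save/restore sketch using standard torch serialization; the file name is illustrative, and state_dict() is the counterpart referenced above that produces the dict:

>>> # Save scheduler state alongside the model checkpoint (path is illustrative)
>>> torch.save(scheduler.state_dict(), "scheduler.pt")
>>> # Restore into a freshly constructed scheduler with the same configuration
>>> scheduler.load_state_dict(torch.load("scheduler.pt"))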
print_lr(is_verbose, group, lr, epoch=None)
Display the current learning rate.
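Note: in stock PyTorch schedulers this helper is typically invoked internally by step() when the scheduler was constructed with verbose=True; user code rarely needs to call it directly.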