Cyclic#
- class vision_architectures.schedulers.cyclic.SineScheduler(start_value, max_value, decay=0.0, wavelength=None)[source]#
Bases: object
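Only the constructor is shown in this listing, so the following is a minimal sketch of the value a sine schedule with these parameters might compute; the formula (cosine ramp from start_value, amplitude decayed exponentially by decay, one full cycle per wavelength steps) is an assumption inferred from the argument names, not the library's documented behavior.

```python
import math

# Hypothetical value computation for a sine schedule with this constructor;
# SineScheduler's accessor methods are not shown above, so the semantics of
# `decay` and `wavelength` below are assumptions.
def sine_value(step: int, start_value: float, max_value: float,
               wavelength: float, decay: float = 0.0) -> float:
    amplitude = (max_value - start_value) / 2   # half the oscillation range
    midpoint = start_value + amplitude          # value the sine oscillates around
    amplitude *= math.exp(-decay * step)        # assumed: decay shrinks the swing over time
    # One full cycle every `wavelength` steps; at step 0 this equals start_value.
    return midpoint - amplitude * math.cos(2 * math.pi * step / wavelength)

# Illustrative trace: rises to max_value at step 50, returns to start_value at step 100.
values = [sine_value(s, start_value=0.0, max_value=1.0, wavelength=100) for s in range(200)]
```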
- class vision_architectures.schedulers.cyclic.SineLR(optimizer, start_lr, max_lr, wavelength, decay, last_epoch=-1, verbose='deprecated')[source]#
Bases: LRScheduler
- __init__(optimizer, start_lr, max_lr, wavelength, decay, last_epoch=-1, verbose='deprecated')[source]#
- get_lr()[source]#
Compute the next learning rate for each of the optimizer's param_groups.
- Returns:
A list of learning rates for each of the optimizer's param_groups, with the same types as their current group["lr"]s.
- Return type:
list[float | Tensor]
Note
If you're trying to inspect the most recent learning rate, use get_last_lr() instead.
Note
The returned Tensors are copies, and never alias the optimizer's group["lr"]s.
- step(epoch=None)[source]#
Step the scheduler.
- Parameters:
epoch (int, optional) –
Deprecated since version 1.4: If provided, sets last_epoch to epoch and uses _get_closed_form_lr() if it is available. This is not universally supported. Use step() without arguments instead.
Note
Call this method after calling the optimizer's step().
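A short usage sketch tying the pieces above together: construct SineLR with the signature shown, call scheduler.step() after optimizer.step() as the note advises, and inspect the current rate with get_last_lr(). The model, loss, and argument values are illustrative only.

```python
import torch
from vision_architectures.schedulers.cyclic import SineLR

model = torch.nn.Linear(10, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
# Constructor signature from the listing above; values are illustrative.
scheduler = SineLR(optimizer, start_lr=1e-4, max_lr=1e-2, wavelength=1000, decay=0.0)

for step in range(5000):
    optimizer.zero_grad()
    loss = model(torch.randn(8, 10)).pow(2).mean()  # dummy loss
    loss.backward()
    optimizer.step()
    scheduler.step()  # per the note above: step the scheduler after optimizer.step()
    if step % 1000 == 0:
        print(step, scheduler.get_last_lr())  # most recent learning rate
```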
- class vision_architectures.schedulers.cyclic.Phase(value)[source]#
Bases: Enum
An enumeration.
- UP = 1#
- TOP = 2#
- DOWN = 3#
- BOTTOM = 4#
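One plausible reading of these four members, inferred from the phase-length parameters of CyclicAnnealingScheduler below: a cycle ramps UP, holds at the TOP, ramps DOWN, then holds at the BOTTOM. The mapping sketched here is an assumption about the library's internals, not taken from this listing.

```python
from vision_architectures.schedulers.cyclic import Phase

# Assumed step-to-phase mapping within one cycle, based on the
# CyclicAnnealingScheduler argument names; the actual implementation may differ.
def phase_at(step: int, up: int, top: int, down: int, bottom: int) -> Phase:
    cycle_len = up + top + down + bottom
    s = step % cycle_len  # position within the current cycle
    if s < up:
        return Phase.UP
    if s < up + top:
        return Phase.TOP
    if s < up + top + down:
        return Phase.DOWN
    return Phase.BOTTOM
```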
- class vision_architectures.schedulers.cyclic.CyclicAnnealingScheduler(start_value, max_value, up_annealing_steps=None, top_fixed_steps=None, down_annealing_steps=None, bottom_fixed_steps=None)[source]#
Bases: object
Cyclic Annealing Schedule, inspired by the paper Cyclical Annealing Schedule: A Simple Approach to Mitigating KL Vanishing.
- NOT_READY_ERROR_MSG = 'Number of steps for each phase must be set before using the scheduler'#
- __init__(start_value, max_value, up_annealing_steps=None, top_fixed_steps=None, down_annealing_steps=None, bottom_fixed_steps=None)[source]#
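A construction sketch under the signature shown above. The four phase lengths default to None, and NOT_READY_ERROR_MSG indicates the scheduler cannot be used until all of them are set, so this example supplies them up front; how the value is subsequently advanced or queried is not shown in this listing.

```python
from vision_architectures.schedulers.cyclic import CyclicAnnealingScheduler

# All four phase lengths provided at construction; per NOT_READY_ERROR_MSG,
# every phase's step count must be set before the scheduler is used.
scheduler = CyclicAnnealingScheduler(
    start_value=0.0,          # value during the BOTTOM phase (assumed reading)
    max_value=1.0,            # value held during the TOP phase (assumed reading)
    up_annealing_steps=500,
    top_fixed_steps=500,
    down_annealing_steps=500,
    bottom_fixed_steps=500,
)
```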
- class vision_architectures.schedulers.cyclic.CyclicAnnealingLR(optimizer, start_lr, max_lr, up_annealing_steps, top_fixed_steps, down_annealing_steps, bottom_fixed_steps, last_epoch=-1, verbose='deprecated')[source]#
Bases: LRScheduler
- __init__(optimizer, start_lr, max_lr, up_annealing_steps, top_fixed_steps, down_annealing_steps, bottom_fixed_steps, last_epoch=-1, verbose='deprecated')[source]#
- get_lr()[source]#
Compute the next learning rate for each of the optimizer's param_groups.
- Returns:
A list of learning rates for each of the optimizer's param_groups, with the same types as their current group["lr"]s.
- Return type:
list[float | Tensor]
Note
If you're trying to inspect the most recent learning rate, use get_last_lr() instead.
Note
The returned Tensors are copies, and never alias the optimizer's group["lr"]s.
- step(epoch=None)[source]#
Step the scheduler.
- Parameters:
epoch (int, optional) –
Deprecated since version 1.4: If provided, sets last_epoch to epoch and uses _get_closed_form_lr() if it is available. This is not universally supported. Use step() without arguments instead.
Note
Call this method after calling the optimizer's step().
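A usage sketch for the LR variant, using the constructor signature shown above. The reading that one cycle ramps up for up_annealing_steps, holds at max_lr for top_fixed_steps, ramps down for down_annealing_steps, and holds at start_lr for bottom_fixed_steps is an assumption from the argument names; the model and values are illustrative.

```python
import torch
from vision_architectures.schedulers.cyclic import CyclicAnnealingLR

model = torch.nn.Linear(10, 1)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
# One assumed cycle = 400 steps: ramp up, hold at max_lr, ramp down, hold at start_lr.
scheduler = CyclicAnnealingLR(
    optimizer, start_lr=1e-5, max_lr=1e-3,
    up_annealing_steps=100, top_fixed_steps=100,
    down_annealing_steps=100, bottom_fixed_steps=100,
)

for step in range(1200):  # three full cycles under the assumed cycle length
    optimizer.zero_grad()
    model(torch.randn(4, 10)).pow(2).mean().backward()  # dummy loss
    optimizer.step()
    scheduler.step()  # after optimizer.step(), as the note above advises
```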