Cyclic

class vision_architectures.schedulers.cyclic.SineScheduler(start_value, max_value, decay=0.0, wavelength=None)

Bases: object

__init__(start_value, max_value, decay=0.0, wavelength=None)
set_wavelength(wavelength)
is_ready()
get()
step()
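The entries above list the parameters but not the waveform itself, so here is a minimal sketch of what a sine schedule with these parameters might compute. Everything in it is an assumption for illustration, not taken from the library source: that get() traces a sinusoid starting at start_value, peaking at max_value half a wavelength later, with an amplitude damped by decay per completed cycle.

```python
import math

def sine_value(step, start_value, max_value, wavelength, decay=0.0):
    """Hypothetical closed form for a decaying sinusoidal schedule.

    Starts at start_value, peaks at max_value half a wavelength later,
    and shrinks the oscillation amplitude by `decay` per completed cycle.
    (Assumed behaviour -- illustrative only, not the library's code.)
    """
    amplitude = (max_value - start_value) / 2
    amplitude *= (1.0 - decay) ** (step / wavelength)  # damping per cycle
    # (1 - cos) ranges over [0, 2], so the value sweeps start..max and back
    return start_value + amplitude * (1 - math.cos(2 * math.pi * step / wavelength))

print(sine_value(0, 0.0, 1.0, wavelength=100))   # 0.0 (starts at start_value)
print(sine_value(50, 0.0, 1.0, wavelength=100))  # 1.0 (peak at half wavelength)
```

Under these assumptions, step() would advance an internal counter and get() would evaluate such a closed form at the current step; if wavelength was not passed to __init__(), set_wavelength() must presumably be called first, which is what is_ready() would check.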
class vision_architectures.schedulers.cyclic.SineLR(optimizer, start_lr, max_lr, wavelength, decay, last_epoch=-1, verbose='deprecated')

Bases: LRScheduler

__init__(optimizer, start_lr, max_lr, wavelength, decay, last_epoch=-1, verbose='deprecated')
get_lr()

Compute the next learning rate for each of the optimizer’s param_groups.

Returns:

A list of learning rates for each of the optimizer’s param_groups with the same types as their current group["lr"]s.

Return type:

list[float | Tensor]

Note

If you’re trying to inspect the most recent learning rate, use get_last_lr() instead.

Note

The returned Tensors are copies, and never alias the optimizer’s group["lr"]s.

step(epoch=None)

Step the scheduler.

Parameters:

epoch (int, optional) –

Deprecated since version 1.4: If provided, sets last_epoch to epoch and uses _get_closed_form_lr() if it is available. This is not universally supported. Use step() without arguments instead.

Note

Call this method after calling the optimizer’s step().
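The notes above describe a contract worth making concrete: get_lr() computes the *next* learning rates, get_last_lr() reports the rates most recently applied, and the scheduler's step() runs after the optimizer's. A toy sketch of that contract follows; both classes and the LR-halving rule are placeholders for illustration, not the real torch.optim machinery or SineLR's actual schedule.

```python
class ToyOptimizer:
    """Stand-in for a torch.optim optimizer: just holds param_groups."""

    def __init__(self, lr):
        self.param_groups = [{"lr": lr}]

    def step(self):
        pass  # the parameter update would happen here


class ToyScheduler:
    """Stand-in scheduler: halves the LR each step (placeholder rule)."""

    def __init__(self, optimizer):
        self.optimizer = optimizer
        self._last_lr = [g["lr"] for g in optimizer.param_groups]

    def get_lr(self):
        # Compute the NEXT learning rate for each param group.
        return [g["lr"] * 0.5 for g in self.optimizer.param_groups]

    def get_last_lr(self):
        # Inspect the most recently applied LRs (per the note above,
        # this is what you use for logging, not get_lr()).
        return self._last_lr

    def step(self):
        for group, lr in zip(self.optimizer.param_groups, self.get_lr()):
            group["lr"] = lr
        self._last_lr = [g["lr"] for g in self.optimizer.param_groups]


opt = ToyOptimizer(lr=0.1)
sched = ToyScheduler(opt)
opt.step()    # optimizer first...
sched.step()  # ...then the scheduler, as the note above instructs
print(sched.get_last_lr())  # [0.05]
```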

class vision_architectures.schedulers.cyclic.Phase(value)

Bases: Enum

An enumeration of the four phases of an annealing cycle: the value rises (UP), holds at the maximum (TOP), falls (DOWN), and holds at the minimum (BOTTOM).

UP = 1
TOP = 2
DOWN = 3
BOTTOM = 4
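Judging by the declared values, the phases rotate in order UP → TOP → DOWN → BOTTOM and then wrap around to begin a new cycle. A small sketch of that rotation; the cycling rule is an assumption mirroring the enum order, presumably what set_next_phase() on the scheduler implements.

```python
from enum import Enum

class Phase(Enum):
    UP = 1
    TOP = 2
    DOWN = 3
    BOTTOM = 4

def next_phase(phase):
    # Advance to the next phase in declaration order, wrapping at the end.
    members = list(Phase)
    return members[(members.index(phase) + 1) % len(members)]

print(next_phase(Phase.UP))      # Phase.TOP
print(next_phase(Phase.BOTTOM))  # Phase.UP (wraps to start a new cycle)
```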
class vision_architectures.schedulers.cyclic.CyclicAnnealingScheduler(start_value, max_value, up_annealing_steps=None, top_fixed_steps=None, down_annealing_steps=None, bottom_fixed_steps=None)

Bases: object

Cyclic annealing scheduler, inspired by the paper "Cyclical Annealing Schedule: A Simple Approach to Mitigating KL Vanishing" (Fu et al., 2019).

NOT_READY_ERROR_MSG = 'Number of steps for each phase must be set before using the scheduler'
__init__(start_value, max_value, up_annealing_steps=None, top_fixed_steps=None, down_annealing_steps=None, bottom_fixed_steps=None)
set_num_annealing_steps(up_annealing_steps=None, top_annealing_steps=None, down_annealing_steps=None, bottom_annealing_steps=None)
set_next_phase()
is_ready()
get()
step()
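The four step counts map naturally onto the Phase cycle: ramp from start_value to max_value over up_annealing_steps, hold for top_fixed_steps, ramp back down over down_annealing_steps, and hold at the bottom for bottom_fixed_steps before repeating. Here is a sketch of one such cycle under the assumption that both ramps are linear (the cited paper anneals linearly, but the library's actual interpolation is not documented here, so treat this as illustrative only):

```python
def cyclic_annealing_value(step, start_value, max_value,
                           up_annealing_steps, top_fixed_steps,
                           down_annealing_steps, bottom_fixed_steps):
    """Assumed piecewise-linear shape of one annealing cycle:
    ramp up, hold at max, ramp down, hold at start, then repeat."""
    cycle = (up_annealing_steps + top_fixed_steps
             + down_annealing_steps + bottom_fixed_steps)
    t = step % cycle
    if t < up_annealing_steps:                    # Phase.UP
        frac = t / up_annealing_steps
        return start_value + frac * (max_value - start_value)
    t -= up_annealing_steps
    if t < top_fixed_steps:                       # Phase.TOP
        return max_value
    t -= top_fixed_steps
    if t < down_annealing_steps:                  # Phase.DOWN
        frac = t / down_annealing_steps
        return max_value - frac * (max_value - start_value)
    return start_value                            # Phase.BOTTOM

# One 40-step cycle: 10 up, 10 at the top, 10 down, 10 at the bottom.
print(cyclic_annealing_value(5, 0.0, 1.0, 10, 10, 10, 10))   # 0.5
print(cyclic_annealing_value(15, 0.0, 1.0, 10, 10, 10, 10))  # 1.0
print(cyclic_annealing_value(45, 0.0, 1.0, 10, 10, 10, 10))  # 0.5 (second cycle)
```

Under this reading, step() advances through the cycle, set_next_phase() jumps to the start of the following phase, and is_ready() raises the NOT_READY_ERROR_MSG above until all four step counts have been provided.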
class vision_architectures.schedulers.cyclic.CyclicAnnealingLR(optimizer, start_lr, max_lr, up_annealing_steps, top_fixed_steps, down_annealing_steps, bottom_fixed_steps, last_epoch=-1, verbose='deprecated')

Bases: LRScheduler

__init__(optimizer, start_lr, max_lr, up_annealing_steps, top_fixed_steps, down_annealing_steps, bottom_fixed_steps, last_epoch=-1, verbose='deprecated')
get_lr()

Compute the next learning rate for each of the optimizer’s param_groups.

Returns:

A list of learning rates for each of the optimizer’s param_groups with the same types as their current group["lr"]s.

Return type:

list[float | Tensor]

Note

If you’re trying to inspect the most recent learning rate, use get_last_lr() instead.

Note

The returned Tensors are copies, and never alias the optimizer’s group["lr"]s.

step(epoch=None)

Step the scheduler.

Parameters:

epoch (int, optional) –

Deprecated since version 1.4: If provided, sets last_epoch to epoch and uses _get_closed_form_lr() if it is available. This is not universally supported. Use step() without arguments instead.

Note

Call this method after calling the optimizer’s step().