Pipeline Parallelism#
- vision_architectures.utils.pipeline_parallelism.get_device(device)[source]#
Convert the given device specification to a torch.device object.
- Return type:
device
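A minimal usage sketch; accepting both strings and existing torch.device objects is an assumption based on the signature and description above.

```python
import torch

from vision_architectures.utils.pipeline_parallelism import get_device

# Assumed: strings and torch.device objects both normalize to torch.device.
assert get_device("cpu") == torch.device("cpu")
assert get_device(torch.device("cpu")) == torch.device("cpu")
```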
- vision_architectures.utils.pipeline_parallelism.move_to_device(data, device)[source]#
Move data to the specified device.
- Parameters:
data – The data to move.
device – The device to move the data to.
- Returns:
The data moved to the specified device.
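A hedged sketch of moving a batch; support for nested containers (dicts or lists of tensors) is an assumption, since the documented contract only promises that the data is returned on the target device.

```python
import torch

from vision_architectures.utils.pipeline_parallelism import move_to_device

batch = {"image": torch.randn(2, 3, 32, 32), "label": torch.tensor([0, 1])}
# Assumed behavior: tensors inside the container come back on the requested device.
batch = move_to_device(batch, "cpu")
```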
- class vision_architectures.utils.pipeline_parallelism.PipelineModule(module, processing_device, output_device=None)[source]#
Bases: Module
- __init__(module, processing_device, output_device=None)[source]#
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- forward(*args, **kwargs)[source]#
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
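A sketch of wrapping a submodule. The forward-time behavior (inputs moved to the processing device, outputs moved to the output device) is inferred from the constructor arguments rather than stated explicitly above, and passing string device names is assumed to be accepted (get_device above suggests strings are normalized).

```python
import torch
from torch import nn

from vision_architectures.utils.pipeline_parallelism import PipelineModule

layer = nn.Linear(16, 16)
# CPU is used for portability; with two GPUs this would typically be "cuda:0" / "cuda:1".
wrapped = PipelineModule(layer, processing_device="cpu", output_device="cpu")

x = torch.randn(4, 16)
y = wrapped(x)  # assumed: x is moved to the processing device, y to the output device
```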
- vision_architectures.utils.pipeline_parallelism.paralellize_pipeline(model, module_to_device)[source]#
Parallelize a model across multiple devices.
- Parameters:
model (Module) – The model to parallelize.
module_to_device (dict[str, device | str | list[device | str]]) – A dictionary mapping module names to devices. Keys are module names, with nested modules separated by dots (e.g., “module.submodule”). Note that the parallelization is performed using a level-order traversal (i.e. BFS) of the model, so the device specified for the deepest module in the dictionary takes effect even if its parent is also specified (the parent’s assignment is applied first and then overridden). Each value is either a single device or a 2-tuple of devices: the first device is the processing device, and the second device is the output device.
- Return type:
Module
- Returns:
The parallelized pipeline.
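A usage sketch based on the parameter description above; the two-stage model and its dotted module names are hypothetical, and everything stays on CPU so the snippet runs without multiple GPUs.

```python
from torch import nn

from vision_architectures.utils.pipeline_parallelism import paralellize_pipeline

# Hypothetical model; nn.Sequential names its children "0", "1", "2".
model = nn.Sequential(
    nn.Linear(16, 32),
    nn.ReLU(),
    nn.Linear(32, 8),
)

# Keys are dotted module names; values are a single device or
# [processing_device, output_device]. With real GPUs these would be
# e.g. "cuda:0" and ["cuda:1", "cuda:0"].
model = paralellize_pipeline(
    model,
    module_to_device={
        "0": "cpu",
        "2": ["cpu", "cpu"],
    },
)
```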