extorch.vision.paired_transform

BasePairedTransform

Base paired transformation.

PairedCompose

Paired compose specially designed for paired transformations.

PairedRandomIdentity

Randomly replace the image with its corresponding label.

PairedRandomVerticalFlip

Vertically flip the given image and label randomly with a given probability.

PairedRandomHorizontalFlip

Horizontally flip the given image and label randomly with a given probability.

class extorch.vision.paired_transform.BasePairedTransform[source]

Bases: torch.nn.modules.module.Module

Base paired transformation.

For some visual tasks such as image restoration, random transformations must use the same parameters on the image and its corresponding label. This base class is therefore provided for constructing paired transformations that share their randomness between the two inputs.

static check_data(img: Union[torch.Tensor, numpy.ndarray], label: Union[torch.Tensor, numpy.ndarray]) → None[source]

Check that the input image and its corresponding label have the same shape.

abstract forward(img: Union[torch.Tensor, numpy.ndarray], label: Union[torch.Tensor, numpy.ndarray]) → Tuple[Union[torch.Tensor, numpy.ndarray], Union[torch.Tensor, numpy.ndarray]][source]

Transform the image and label under the same randomness.

training: bool
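
A minimal sketch of how a custom paired transform could be built on top of this base class. The class PairedRandomRotate90 below is hypothetical (it is not part of extorch); it only illustrates the contract of drawing the random parameters once and applying them to both inputs, assuming the […, H, W] layout used by the flip transforms::

    import random
    from typing import Tuple, Union

    import numpy as np
    import torch

    from extorch.vision.paired_transform import BasePairedTransform


    class PairedRandomRotate90(BasePairedTransform):
        # Hypothetical paired transform: rotate the image and label by the
        # same random multiple of 90 degrees with probability p.
        def __init__(self, p: float = 0.5) -> None:
            super().__init__()
            self.p = p

        def forward(self, img: Union[torch.Tensor, np.ndarray],
                    label: Union[torch.Tensor, np.ndarray]
                    ) -> Tuple[Union[torch.Tensor, np.ndarray],
                               Union[torch.Tensor, np.ndarray]]:
            # Shape check provided by the base class.
            self.check_data(img, label)
            if random.random() < self.p:
                # Draw the rotation once so image and label stay aligned.
                k = random.randint(1, 3)
                if isinstance(img, torch.Tensor):
                    img = torch.rot90(img, k, dims=(-2, -1))
                    label = torch.rot90(label, k, dims=(-2, -1))
                else:
                    img = np.rot90(img, k, axes=(-2, -1)).copy()
                    label = np.rot90(label, k, axes=(-2, -1)).copy()
            return img, label
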
class extorch.vision.paired_transform.PairedCompose(transforms)[source]

Bases: torchvision.transforms.transforms.Compose

Paired compose specially designed for paired transformations.

Paired transformations are applied to the image and its corresponding label at the same time, while other basic transformations are applied to the image and the label separately.

Examples::

    >>> transform = PairedCompose([transforms.ToTensor(), PairedRandomHorizontalFlip(p = 0.5)])
    >>> img = np.ones((32, 32, 3))
    >>> label = np.zeros((32, 32, 3))
    >>> img, label = transform(img, label)
class extorch.vision.paired_transform.PairedRandomHorizontalFlip(p: float = 0.5)[source]

Bases: extorch.vision.paired_transform.BasePairedTransform, torchvision.transforms.transforms.RandomHorizontalFlip

Horizontally flip the given image and label randomly with a given probability. If the image and label are torch Tensors, they are expected to have […, H, W] shape, where … means an arbitrary number of leading dimensions.

Parameters

p (float) – probability of the image and label being flipped. Default: 0.5.

forward(img: Union[torch.Tensor, numpy.ndarray], label: Union[torch.Tensor, numpy.ndarray]) → Tuple[Union[torch.Tensor, numpy.ndarray], Union[torch.Tensor, numpy.ndarray]][source]

Transform the image and label under the same randomness.

training: bool
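
A short usage sketch (not from the original docs); with p=1.0 the flip should always be applied, so the image and its label stay mirror-consistent::

    import torch

    from extorch.vision.paired_transform import PairedRandomHorizontalFlip

    flip = PairedRandomHorizontalFlip(p=1.0)  # p=1.0 forces the flip for demonstration

    img = torch.arange(16.0).reshape(1, 4, 4)   # [C, H, W]
    label = img.clone()

    out_img, out_label = flip(img, label)

    # The same flip is applied to both inputs, so the pair stays aligned.
    assert torch.equal(out_img, out_label)
    # And with p=1.0 the result should equal an explicit horizontal flip.
    assert torch.equal(out_img, torch.flip(img, dims=[-1]))
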
class extorch.vision.paired_transform.PairedRandomIdentity(p: float)[source]

Bases: extorch.vision.paired_transform.BasePairedTransform

Randomly replace the image with its corresponding label.

Parameters

p (float) – probability of the image being replaced. Default: 0.5.

forward(img: Union[torch.Tensor, numpy.ndarray], label: Union[torch.Tensor, numpy.ndarray]) → Tuple[Union[torch.Tensor, numpy.ndarray], Union[torch.Tensor, numpy.ndarray]][source]

Transform the image and label under the same randomness.

training: bool
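
An illustrative usage sketch (not from the original docs); with p=1.0 the replacement should always be triggered, so the returned image is expected to match the label::

    import torch

    from extorch.vision.paired_transform import PairedRandomIdentity

    identity = PairedRandomIdentity(p=1.0)  # always trigger the replacement

    img = torch.rand(3, 32, 32)
    label = torch.rand(3, 32, 32)

    out_img, out_label = identity(img, label)

    # With the replacement triggered, the returned image should equal the label.
    print(torch.equal(out_img, out_label))
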
class extorch.vision.paired_transform.PairedRandomVerticalFlip(p: float = 0.5)[source]

Bases: extorch.vision.paired_transform.BasePairedTransform, torchvision.transforms.transforms.RandomVerticalFlip

Vertically flip the given image and label randomly with a given probability. If the image and label are torch Tensors, they are expected to have […, H, W] shape, where … means an arbitrary number of leading dimensions.

Parameters

p (float) – probability of the image and label being flipped. Default: 0.5.

forward(img: Union[torch.Tensor, numpy.ndarray], label: Union[torch.Tensor, numpy.ndarray]) → Tuple[Union[torch.Tensor, numpy.ndarray], Union[torch.Tensor, numpy.ndarray]][source]

Transform the image and label under the same randomness.

training: bool
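
Putting the classes together, a paired augmentation pipeline for an image-restoration dataset could look like the sketch below (the array shapes and variable names are only illustrative)::

    import numpy as np
    from torchvision import transforms

    from extorch.vision.paired_transform import (
        PairedCompose,
        PairedRandomHorizontalFlip,
        PairedRandomVerticalFlip,
    )

    # ToTensor is a basic transform, so PairedCompose applies it to the image
    # and label separately; the paired flips are applied to both with shared randomness.
    paired_transform = PairedCompose([
        transforms.ToTensor(),
        PairedRandomHorizontalFlip(p=0.5),
        PairedRandomVerticalFlip(p=0.5),
    ])

    noisy = np.random.rand(64, 64, 3).astype(np.float32)   # degraded input
    clean = np.random.rand(64, 64, 3).astype(np.float32)   # restoration target

    img, label = paired_transform(noisy, clean)
    print(img.shape, label.shape)   # torch.Size([3, 64, 64]) for both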