Welcome to Extorch
Extorch is a useful extension library for PyTorch.
The source code and some minimal working examples can be found on GitHub.
extorch.vision
extorch.vision.dataset
- BaseDataset: Base dataset.
- CVDataset: Base dataset for computer vision tasks.
- CVClassificationDataset: Base dataset for computer vision classification tasks.
- SegmentationDataset: Base dataset for computer vision segmentation tasks.
- TinyImageNet: Tiny ImageNet dataset.
- class extorch.vision.dataset.BaseDataset(data_dir: str)[source]
Base dataset.
- Parameters
data_dir (str) – Base path of the data.
- class extorch.vision.dataset.CVDataset(data_dir: str, train_transform: torchvision.transforms.transforms.Compose, test_transform: torchvision.transforms.transforms.Compose)[source]
Base dataset for computer vision tasks.
- Parameters
data_dir (str) – Base path of the data.
train_transform (transforms.Compose) – Data transform of the training split.
test_transform (transforms.Compose) – Data transform of the test split.
- class extorch.vision.dataset.CVClassificationDataset(data_dir: str, train_transform: Optional[torchvision.transforms.transforms.Compose] = None, test_transform: Optional[torchvision.transforms.transforms.Compose] = None, cutout_length: Optional[int] = None, cutout_n_holes: Optional[int] = 1)[source]
Base dataset for computer vision classification tasks.
- Parameters
data_dir (str) – Base path of the data.
train_transform (Optional[transforms.Compose]) – Data transform of the training split. Default: None.
test_transform (Optional[transforms.Compose]) – Data transform of the test split. Default: None.
cutout_length (Optional[int]) – The length (in pixels) of each square patch in Cutout. If the train transform is not specified and cutout_length is not None, a Cutout transform is appended at the end. Default: None.
cutout_n_holes (Optional[int]) – Number of patches to cut out of each image. Default: 1.
- class extorch.vision.dataset.SegmentationDataset(data_dir: str, train_transform: torchvision.transforms.transforms.Compose, test_transform: torchvision.transforms.transforms.Compose)[source]
Base dataset for computer vision segmentation tasks.
- Parameters
data_dir (str) – Base path of the data.
train_transform (Optional[transforms.Compose]) – Data transform of the training split. Default: None.
test_transform (Optional[transforms.Compose]) – Data transform of the test split. Default: None.
- class extorch.vision.dataset.CIFAR10(data_dir: str, train_transform: Optional[torchvision.transforms.transforms.Compose] = None, test_transform: Optional[torchvision.transforms.transforms.Compose] = None, cutout_length: Optional[int] = None, cutout_n_holes: Optional[int] = 1)[source]
- class extorch.vision.dataset.CIFAR100(data_dir: str, train_transform: Optional[torchvision.transforms.transforms.Compose] = None, test_transform: Optional[torchvision.transforms.transforms.Compose] = None, cutout_length: Optional[int] = None, cutout_n_holes: Optional[int] = 1)[source]
- class extorch.vision.dataset.MNIST(data_dir: str, train_transform: Optional[torchvision.transforms.transforms.Compose] = None, test_transform: Optional[torchvision.transforms.transforms.Compose] = None, cutout_length: Optional[int] = None, cutout_n_holes: Optional[int] = 1)[source]
- class extorch.vision.dataset.FashionMNIST(data_dir: str, train_transform: Optional[torchvision.transforms.transforms.Compose] = None, test_transform: Optional[torchvision.transforms.transforms.Compose] = None, cutout_length: Optional[int] = None, cutout_n_holes: Optional[int] = 1)[source]
- class extorch.vision.dataset.SVHN(data_dir: str, train_transform: Optional[torchvision.transforms.transforms.Compose] = None, test_transform: Optional[torchvision.transforms.transforms.Compose] = None, cutout_length: Optional[int] = None, cutout_n_holes: Optional[int] = 1)[source]
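The concrete datasets above share the CVClassificationDataset constructor. A usage sketch (not from the source docs; assumes the CIFAR-10 data is available or downloadable under ./data):
- Examples::
>>> from extorch.vision.dataset import CIFAR10
>>> # Default transforms are built internally; since cutout_length is set,
>>> # a Cutout transform with 16-pixel patches is appended to the training transform.
>>> dataset = CIFAR10("./data", cutout_length=16, cutout_n_holes=1)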
- class extorch.vision.dataset.TinyImageNet(data_dir: str, color_jitter: bool = False, train_crop_size: int = 64, test_crop_size: int = 64, train_transform: Optional[torchvision.transforms.transforms.Compose] = None, test_transform: Optional[torchvision.transforms.transforms.Compose] = None, cutout_length: Optional[int] = None, cutout_n_holes: Optional[int] = 1)[source]
Tiny ImageNet dataset.
- Parameters
color_jitter (bool) – Whether to add color jitter to the default train transform. Default: False.
train_crop_size (int) – Default: 64.
test_crop_size (int) – Default: 64.
- class extorch.vision.dataset.VOCSegmentationDataset(data_dir: str, year: str = '2012', train_transform: Optional[extorch.vision.transforms.segmentation.SegCompose] = None, test_transform: Optional[extorch.vision.transforms.segmentation.SegCompose] = None)[source]
extorch.vision.transforms
extorch.vision.transforms.functional
- extorch.vision.transforms.functional.cutout(image: torch.Tensor, length: int, n_holes: int = 1) torch.Tensor [source]
Cutout: Randomly mask out one or more patches from an image (Link).
- Parameters
image (Tensor) – Image of size (C, H, W).
length (int) – The length (in pixels) of each square patch.
n_holes (int) – Number of patches to cut out of each image. Default: 1.
- Returns
Image with n_holes of dimension length x length cut out of it.
- Return type
image (Tensor)
- Examples::
>>> image = torch.ones((3, 32, 32))
>>> image = cutout(image, 16, 1)
extorch.vision.transforms.transforms
- AdaptiveRandomCrop: Adaptively randomly crop images with uncertain sizes to a certain size.
- AdaptiveCenterCrop: Adaptively center-crop images with uncertain sizes to a certain size.
- Cutout: Randomly mask out one or more patches from an image (Link).
- class extorch.vision.transforms.transforms.AdaptiveCenterCrop(cropped_size: Union[int, Tuple[int, int]])[source]
Bases: torch.nn.modules.module.Module
Adaptively center-crop images with uncertain sizes to a certain size.
- Parameters
cropped_size (Union[int, Tuple[int, int]]) – The image size to be cropped out.
- forward(img: torch.Tensor) torch.Tensor [source]
- Parameters
img (Tensor) – The image to be cropped.
- Returns
The cropped image. For example, if the image has size [H, W] and the cropped size is [h, w], the output will have size [H - h, W - w].
- Return type
img (Tensor)
- training: bool
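A usage sketch (not from the source docs), following the [H - h, W - w] output semantics documented above:
- Examples::
>>> import torch
>>> crop = AdaptiveCenterCrop((8, 8))
>>> image = torch.randn(3, 32, 32)
>>> output = crop(image)  # expected shape: [3, 24, 24]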
- class extorch.vision.transforms.transforms.AdaptiveRandomCrop(cropped_size: Union[int, Tuple[int, int]])[source]
Bases: torch.nn.modules.module.Module
Adaptively randomly crop images with uncertain sizes to a certain size.
- Parameters
cropped_size (Union[int, Tuple[int, int]]) – The image size to be cropped out.
- forward(img: torch.Tensor) torch.Tensor [source]
- Parameters
img (Tensor) – The image to be cropped.
- Returns
The cropped image. For example, if the image has size [H, W] and the cropped size is [h, w], the output will have size [H - h, W - w].
- Return type
img (Tensor)
- training: bool
- class extorch.vision.transforms.transforms.Cutout(length: int, n_holes: int = 1)[source]
Bases: torch.nn.modules.module.Module
Cutout: Randomly mask out one or more patches from an image (Link).
- Parameters
length (int) – The length (in pixels) of each square patch.
n_holes (int) – Number of patches to cut out of each image. Default: 1.
- Examples::
>>> image = torch.ones((3, 32, 32))
>>> Cutout_transform = Cutout(16, 1)
>>> image = Cutout_transform(image)  # Shape: [3, 32, 32]
- forward(img: torch.Tensor) torch.Tensor [source]
- Parameters
img (Tensor) – Image of size (C, H, W).
- Returns
Image with n_holes of dimension length x length cut out of it.
- Return type
img (Tensor)
- training: bool
extorch.vision.transforms.segmentation
- SegCompose: Transform compose for segmentation.
- SegRandomHorizontalFlip: Horizontally flip the given image and label randomly with a given probability (Link).
- SegNormalize: Normalization for segmentation, where the normalization is only applied to the input image.
- SegCenterCrop: Crops the given image at the center.
- SegRandomCrop: Random cropping for segmentation.
- SegResize: Resize for segmentation.
- SegRandomResize: Random resize for segmentation.
- SegPILToTensor: PIL to Tensor for segmentation.
- SegConvertImageDtype: Convert image dtype for segmentation.
- class extorch.vision.transforms.segmentation.SegCenterCrop(size: Union[int, List[int]])[source]
Bases: torch.nn.modules.module.Module
Crops the given image at the center.
- Parameters
size (Union[int, List[int]]) – Height and width of the crop box.
- forward(image, target)[source]
Defines the computation performed at every call. Should be overridden by all subclasses.
Note: although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
- class extorch.vision.transforms.segmentation.SegCompose(transforms)[source]
Bases: torchvision.transforms.transforms.Compose
Transform compose for segmentation.
- class extorch.vision.transforms.segmentation.SegConvertImageDtype(dtype)[source]
Bases: torch.nn.modules.module.Module
Convert image dtype for segmentation.
- forward(image, target)[source]
Defines the computation performed at every call. Should be overridden by all subclasses.
Note: although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
- class extorch.vision.transforms.segmentation.SegNormalize(mean: List[float], std: List[float])[source]
Bases: torch.nn.modules.module.Module
Normalization for segmentation, where the normalization is only applied to the input image.
- Parameters
mean (List[float]) – List of means for each channel.
std (List[float]) – List of standard deviations for each channel.
- forward(image, target)[source]
Defines the computation performed at every call. Should be overridden by all subclasses.
Note: although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
- class extorch.vision.transforms.segmentation.SegPILToTensor[source]
Bases: torch.nn.modules.module.Module
PIL to Tensor for segmentation.
- forward(image, target)[source]
Defines the computation performed at every call. Should be overridden by all subclasses.
Note: although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
- class extorch.vision.transforms.segmentation.SegRandomCrop(size: Union[int, Tuple[int, int]])[source]
Bases: torch.nn.modules.module.Module
Random cropping for segmentation. The cropping is applied to the image and target at the same time.
- Parameters
size (Union[int, Tuple[int, int]]) – Desired output size of the crop.
- forward(image, target)[source]
Defines the computation performed at every call. Should be overridden by all subclasses.
Note: although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
- class extorch.vision.transforms.segmentation.SegRandomHorizontalFlip(p: float = 0.5)[source]
Bases: torch.nn.modules.module.Module
Horizontally flip the given image and label randomly with a given probability (Link).
If the image and label are torch Tensor, they are expected to have […, H, W] shape, where … means an arbitrary number of leading dimensions.
- Parameters
p (float) – probability of the image and label being flipped. Default: 0.5.
- forward(image, target)[source]
Defines the computation performed at every call. Should be overridden by all subclasses.
Note: although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
- class extorch.vision.transforms.segmentation.SegRandomResize(min_size: Union[int, Tuple[int, int]], max_size: Optional[Union[T, Tuple[T, T]]] = None)[source]
Bases: torch.nn.modules.module.Module
Random resize for segmentation.
- Parameters
min_size (Union[int, Tuple[int, int]]) – Desired minimum output size.
max_size (Optional[Union[int, Tuple[int, int]]]) – Desired maximum output size. Default: None.
- forward(image, target)[source]
Defines the computation performed at every call. Should be overridden by all subclasses.
Note: although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
- class extorch.vision.transforms.segmentation.SegResize(size: Union[int, Tuple[int, int]])[source]
Bases: torch.nn.modules.module.Module
Resize for segmentation.
- Parameters
size (Union[int, Tuple[int, int]]) – Desired output size.
- forward(image, target)[source]
Defines the computation performed at every call. Should be overridden by all subclasses.
Note: although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
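A pipeline sketch (not from the source docs; assumes each transform maps an (image, target) pair to an (image, target) pair, as the forward(image, target) signatures above suggest):
- Examples::
>>> import torch
>>> from extorch.vision.transforms.segmentation import (
...     SegCompose, SegRandomHorizontalFlip, SegPILToTensor,
...     SegConvertImageDtype, SegNormalize)
>>> train_transform = SegCompose([
...     SegRandomHorizontalFlip(0.5),
...     SegPILToTensor(),
...     SegConvertImageDtype(torch.float),
...     SegNormalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])])
>>> # image, target = train_transform(image, target)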
extorch.vision.transforms.detection
Transform compose for detection.
extorch.utils
extorch.utils.common
- class extorch.utils.common.TimeEstimator(iter_num: int)[source]
Bases: object
Remaining time estimator. Estimate the remaining time to finish all iterations based on history.
- Parameters
iter_num (int) – Total iteration number.
- step() Tuple[datetime.datetime, datetime.datetime] [source]
Update the estimator and return the estimated remaining time.
- Returns
overall_time (datetime) – The remaining time estimated based on the whole history.
nearest_time (datetime) – The remaining time estimated based on the last iteration.
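A usage sketch (not from the source docs):
- Examples::
>>> estimator = TimeEstimator(iter_num=100)
>>> for _ in range(100):
...     ...  # one training iteration
...     overall_time, nearest_time = estimator.step()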
- extorch.utils.common.makedir(path: str, remove: bool = False, quiet: bool = False) None [source]
Create a leaf directory and all intermediate ones.
- Parameters
path (str) – The directory to be made.
remove (bool) – Sometimes the target directory already exists. If remove is False, the existing directory will not be removed. Default: False.
quiet (bool) – Only takes effect when the target directory exists and remove == True. If quiet == False, the user is asked whether to remove the existing directory; otherwise, the existing directory is removed directly. Default: False.
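- Examples:: (a usage sketch, not from the source docs)
>>> makedir("./results/exp0")  # create the directory and all intermediate ones
>>> makedir("./results/exp0", remove=True, quiet=True)  # silently remove and recreate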
- extorch.utils.common.remove_dirs(root: str, target: str) int [source]
Delete the target dirs under the given root recursively.
- Parameters
root (str) – Root.
target (str) – Name of the target dirs.
- Returns
Number of the deleted dirs.
- Return type
delete_num (int)
- Examples::
>>> # Remove all dirs named "__pycache__" under the current path recursively
>>> delete_num = remove_dirs("./", "__pycache__")
- extorch.utils.common.remove_files(root: str, target: str) int [source]
Delete the target files under the given root recursively.
- Parameters
root (str) – Root.
target (str) – Name of the target files.
- Returns
Number of the deleted files.
- Return type
delete_num (int)
- Examples::
>>> # Remove files named "__init__.py" under the current path recursively
>>> delete_num = remove_files("./", "__init__.py")
- extorch.utils.common.remove_targets(root: str, target: str) int [source]
Delete the target files and dirs under the given root recursively.
- Parameters
root (str) – Root.
target (str) – Name of the target files or dirs.
- Returns
Number of the deleted files or dirs.
- Return type
delete_num (int)
- Examples::
>>> # Remove dirs or files named "__pycache__" under the current path recursively
>>> delete_num = remove_targets("./", "__pycache__")
- extorch.utils.common.run_processes(commands: List[str], **kwargs) List[str] [source]
Run a list of commands one-by-one.
- Parameters
commands (List[str]) – A list of commands to be run.
kwargs – Configurations for subprocess.check_call, except shell. Note that shell must be True for this function.
- Returns
A list of commands that fail to run successfully.
- Return type
unsuccessful_commands (List[str])
- Examples::
>>> # Assume a runnable python file named `test.py` exists,
>>> # and no file named `not_exist.py` exists.
>>> # Now, we want to run `test.py` and `not_exist.py` one-by-one.
>>> commands = ["python test.py", "python not_exist.py"]
>>> unsuccessful_commands = run_processes(commands)  # ["python not_exist.py"]
>>> print("Unsuccessful commands: {}".format(unsuccessful_commands))
- extorch.utils.common.set_seed(seed: int) None [source]
Set the seed of the system.
- Parameters
seed (int) – The artificial random seed.
- extorch.utils.common.yaml_dump(data, file: str, mode: str = 'w', **kwargs) None [source]
Serialize a Python object into a YAML document.
- Parameters
data – The Python object to be serialized.
file (str) – Path of the YAML document.
mode (str) – Mode to open the document.
kwargs – Other options for opening the document.
- extorch.utils.common.yaml_load(file: str, mode: str = 'r', Loader=<class 'yaml.loader.FullLoader'>, **kwargs)[source]
Parse the YAML document and produce the corresponding Python object.
- Parameters
file (str) – Path of the YAML document.
mode (str) – Mode to open the document.
Loader – Loader for YAML.
kwargs – Other options for opening the document.
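A round-trip sketch (not from the source docs):
- Examples::
>>> config = {"optimizer": "SGD", "lr": 0.1}
>>> yaml_dump(config, "config.yaml")
>>> config = yaml_load("config.yaml")  # {"optimizer": "SGD", "lr": 0.1}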
extorch.utils.data
extorch.utils.exception
extorch.utils.logger
extorch.utils.stats
- class extorch.utils.stats.SegConfusionMatrix(num_classes: int)[source]
Bases: object
Confusion matrix as well as metric calculation for image segmentation.
- Parameters
num_classes (int) – The number of classes, including the background.
- extorch.utils.stats.accuracy(outputs: torch.Tensor, targets: torch.Tensor, topk: Tuple[int] = (1,)) List[torch.Tensor] [source]
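The docstring is missing; judging from the signature, this presumably computes standard top-k accuracies of outputs against targets, one value per entry in topk. A sketch under that assumption:
- Examples::
>>> outputs = torch.randn(8, 10)
>>> targets = torch.randint(0, 10, (8,))
>>> top1, top5 = accuracy(outputs, targets, topk=(1, 5))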
- extorch.utils.stats.cal_flops(model: torch.nn.modules.module.Module, inputs: torch.Tensor) float [source]
Calculate FLOPs of the given model.
- Parameters
model (nn.Module) – The model whose FLOPs is to be calculated.
inputs (Tensor) – Example inputs to the model.
- Returns
FLOPs of the model.
- Return type
flops (float)
- Examples::
>>> import torch
>>> from extorch.nn import AuxiliaryHead
>>> module = AuxiliaryHead(3, 10)
>>> input = torch.randn((10, 3, 32, 32))
>>> flops = cal_flops(module, input) / 1.e6  # 32.109868
extorch.utils.visual
- extorch.utils.visual.denormalize(image: torch.Tensor, mean: List[float], std: List[float], transpose: bool = False, detach_numpy: bool = False) Union[torch.Tensor, numpy.ndarray] [source]
De-normalize the tensor-like image.
- Parameters
image (Tensor) – The image to be de-normalized with shape [B, C, H, W] or [C, H, W].
mean (List[float]) – Sequence of means for each channel while normalizing the origin image.
std (List[float]) – Sequence of standard deviations for each channel while normalizing the origin image.
transpose (bool) – Whether transpose the image to [B, H, W, C] or [H, W, C]. Default: False.
detach_numpy (bool) – If true, return Tensor.detach().cpu().numpy().
- Returns
The de-normalized image.
- Return type
image (Union[Tensor, np.ndarray])
Examples
>>> image = torch.randn((5, 3, 32, 32)).cuda()  # Shape: [5, 3, 32, 32] (cuda)
>>> mean = [0.5, 0.5, 0.5]
>>> std = [1., 1., 1.]
>>> de_image = denormalize(image, mean, std, True, True)  # Shape: [5, 32, 32, 3] (cpu)
- extorch.utils.visual.tsne_fit(feature: numpy.ndarray, n_components: int = 2, init: str = 'pca', **kwargs)[source]
Fit input features into an embedded space and return that transformed output.
- Parameters
feature (np.ndarray) – The features to be embedded.
n_components (int) – Dimension of the embedded space. Default: 2.
init (str) – Initialization of embedding. Possible options are “random”, “pca”, and a numpy array of shape (n_samples, n_components). PCA initialization cannot be used with precomputed distances and is usually more globally stable than random initialization. Default: “pca”.
kwargs – Other configurations for TSNE model construction.
- Returns
The representation in the embedding space.
- Return type
node_pos (np.ndarray)
- Examples::
>>> import numpy as np
>>> import matplotlib.pyplot as plt
>>> features = np.random.randn(50, 10)
>>> labels = np.random.randint(0, 2, (50, 1))
>>> node_pos = tsne_fit(features, 2, "pca")
>>> plt.figure()
>>> plt.scatter(node_pos[:, 0], node_pos[:, 1], c=labels)
>>> plt.show()
extorch.nn
extorch.nn.functional
- extorch.nn.functional.average_logits(logits_list: List[torch.Tensor]) torch.Tensor [source]
Aggregates logits from different networks by averaging and returns the aggregated logits. Used for neural network ensembles.
- Parameters
logits_list (List[Tensor]) – A list of logits from different networks.
- Returns
The aggregated logits (Tensor).
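- Examples:: (a usage sketch, not from the source docs)
>>> logits_list = [torch.randn(4, 10) for _ in range(3)]
>>> logits = average_logits(logits_list)  # Shape: [4, 10]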
- extorch.nn.functional.dec_soft_assignment(input: torch.Tensor, centers: torch.Tensor, alpha: float) torch.Tensor [source]
Soft assignment used by Deep Embedded Clustering (DEC, Link). Measures the similarity between embedded points and centroids with the Student's t-distribution.
- Parameters
input (Tensor) – A batch of embedded points. FloatTensor of [batch size, embedding dimension].
centers (Tensor) – The cluster centroids. FloatTensor of [cluster_number, embedding dimension].
alpha (float) – The degrees of freedom of the Student's t-distribution. Default: 1.0.
- Returns
The similarity between embedded point and centroid. FloatTensor [batch size, cluster_number].
- Return type
similarity (Tensor)
- Examples::
>>> embeddings = torch.ones((2, 10))
>>> centers = torch.zeros((3, 10))
>>> similarity = dec_soft_assignment(embeddings, centers, alpha=1.)
- extorch.nn.functional.mix_data(input: torch.Tensor, target: torch.Tensor, alpha: float = 1.0)[source]
Mixup data for training neural networks on convex combinations of pairs of examples and their labels (Link).
- Parameters
input (Tensor) – Input examples.
target (Tensor) – Labels of input examples.
alpha (float) – Parameter of the beta distribution. Default: 1.0.
- Returns
mixed_input (Tensor) – Input examples after mixup.
mixed_target (Tensor) – Labels of the mixed inputs.
_lambda (float) – Parameter sampled from the beta distribution.
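A usage sketch (not from the source docs; assumes the three documented return values are returned in order):
- Examples::
>>> input = torch.randn(4, 3, 32, 32)
>>> target = torch.randint(0, 10, (4,))
>>> mixed_input, mixed_target, _lambda = mix_data(input, target, alpha=1.0)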
- extorch.nn.functional.soft_voting(logits_list: List[torch.Tensor]) torch.Tensor [source]
Aggregates logits from different sub-networks by soft voting and returns the aggregated logits. Used for neural network ensembles.
- Parameters
logits_list (List[Tensor]) – A list of logits from different networks.
- Returns
The aggregated logits (Tensor).
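- Examples:: (a usage sketch, not from the source docs)
>>> logits_list = [torch.randn(4, 10) for _ in range(3)]
>>> logits = soft_voting(logits_list)  # Shape: [4, 10]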
extorch.nn.modules
extorch.nn.modules.auxiliary
- class extorch.nn.modules.auxiliary.AuxiliaryHead(in_channels: int, num_classes: int)[source]
Bases: torch.nn.modules.module.Module
Auxiliary head for the classification task on CIFAR datasets.
- Parameters
in_channels (int) – Number of channels in the input feature.
num_classes (int) – Number of classes.
- Examples::
>>> import torch
>>> input = torch.randn((10, 3, 32, 32))
>>> module = AuxiliaryHead(3, 10)
>>> output = module(input)
- forward(input: torch.Tensor) torch.Tensor [source]
Defines the computation performed at every call. Should be overridden by all subclasses.
Note: although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
- class extorch.nn.modules.auxiliary.AuxiliaryHeadImageNet(in_channels: int, num_classes: int)[source]
Bases: torch.nn.modules.module.Module
Auxiliary head for the classification task on the ImageNet dataset.
- Parameters
in_channels (int) – Number of channels in the input feature.
num_classes (int) – Number of classes.
- Examples::
>>> import torch
>>> input = torch.randn(10, 5, 32, 32)
>>> module = AuxiliaryHeadImageNet(5, 10)
>>> output = module(input)
- forward(input: torch.Tensor) torch.Tensor [source]
Defines the computation performed at every call. Should be overridden by all subclasses.
Note: although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
extorch.nn.modules.loss
- HintonKDLoss: Knowledge distillation loss proposed by Hinton (Link).
- CrossEntropyMixupLoss: CrossEntropyLoss with the mixup technique.
- DECLoss: Loss used by Deep Embedded Clustering (DEC, Link).
- class extorch.nn.modules.loss.CrossEntropyLabelSmooth(epsilon: float)[source]
Bases: torch.nn.modules.module.Module
Cross-entropy loss with label smoothing.
- Parameters
epsilon (float) – The label smoothing factor.
- forward(input: torch.Tensor, target: torch.Tensor) torch.Tensor [source]
Defines the computation performed at every call. Should be overridden by all subclasses.
Note: although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
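A usage sketch (not from the source docs; assumes target holds class indices, as in nn.CrossEntropyLoss):
- Examples::
>>> criterion = CrossEntropyLabelSmooth(epsilon=0.1)
>>> input = torch.randn(4, 10)
>>> target = torch.randint(0, 10, (4,))
>>> loss = criterion(input, target)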
- class extorch.nn.modules.loss.CrossEntropyMixupLoss(alpha: float = 1.0, **kwargs)[source]
Bases: torch.nn.modules.module.Module
CrossEntropyLoss with mixup technique.
- Parameters
alpha (float) – Parameter of the beta distribution. Default: 1.0.
kwargs – Other arguments of torch.nn.CrossEntropyLoss (Link).
- forward(input: torch.Tensor, target: torch.Tensor, net: torch.nn.modules.module.Module) torch.Tensor [source]
- Parameters
input (Tensor) – Input examples.
target (Tensor) – Label of the input examples.
net (nn.Module) – Network to calculate the loss.
- Returns
The loss.
- Return type
loss (Tensor)
- training: bool
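A usage sketch (not from the source docs), following the forward(input, target, net) signature above:
- Examples::
>>> import torch.nn as nn
>>> net = nn.Linear(32, 10)
>>> criterion = CrossEntropyMixupLoss(alpha=1.0)
>>> input = torch.randn(4, 32)
>>> target = torch.randint(0, 10, (4,))
>>> loss = criterion(input, target, net)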
- class extorch.nn.modules.loss.DECLoss(alpha: float = 1.0, **kwargs)[source]
Bases: torch.nn.modules.module.Module
Loss used by Deep Embedded Clustering (DEC, Link).
- Parameters
alpha (float) – The degrees of freedom of the Student's t-distribution. Default: 1.0.
- Examples::
>>> criterion = DECLoss(alpha=1.)
>>> embeddings = torch.randn((2, 10))
>>> centers = torch.randn((3, 10))
>>> loss = criterion(embeddings, centers)
- forward(input: torch.Tensor, centers: torch.Tensor) torch.Tensor [source]
Defines the computation performed at every call. Should be overridden by all subclasses.
Note: although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
- class extorch.nn.modules.loss.HintonKDLoss(T: float, alpha: float, reduction: str = 'mean', **kwargs)[source]
Bases: torch.nn.modules.loss.KLDivLoss
Knowledge distillation loss proposed by Hinton (Link).
$L = (1 - \alpha) \cdot L_{CE}(P_s, Y_{gt}) + \alpha \cdot T^2 \cdot L_{CE}(P_s, P_t)$
- Parameters
T (float) – Temperature parameter (>= 1.) used to smooth the softmax output.
alpha (float) – Trade-off coefficient between distillation and origin loss.
reduction (str) – Specifies the reduction to apply to the output. Default: “mean”.
kwargs – Other configurations for nn.CrossEntropyLoss.
- Examples::
>>> criterion = HintonKDLoss(T=4., alpha=0.9)
>>> s_output = torch.randn((5, 10))
>>> t_output = torch.randn((5, 10))
>>> target = torch.ones(5, dtype=torch.long)
>>> loss = criterion(s_output, t_output, target)
- forward(s_output: torch.Tensor, t_output: torch.Tensor, target: torch.Tensor) torch.Tensor [source]
- Parameters
s_output (Tensor) – Student network output.
t_output (Tensor) – Teacher network output.
target (Tensor) – Hard label of the input.
- Returns
The calculated loss.
- Return type
Tensor
- reduction: str
extorch.nn.modules.mlp
- class extorch.nn.modules.mlp.MLP(dim_in: int, dim_out: int, hiddens: Union[Tuple[int], List[int]], dropout: float = 0.0)[source]
Bases: torch.nn.modules.module.Module
Basic multi-layer perceptron (MLP) with ReLU as the activation function.
- Parameters
dim_in (int) – Input dimension.
dim_out (int) – Output dimension.
hiddens (Union[Tuple[int], List[int]]) – Hidden dimensions.
dropout (float) – Applied dropout rate.
- Examples::
>>> m = MLP(32, 20, (10, 10, 10), 0.1)
>>> input = torch.randn(2, 32)  # shape [2, 32]
>>> output = m(input)  # shape [2, 20]
- forward(input: torch.Tensor) torch.Tensor [source]
Defines the computation performed at every call. Should be overridden by all subclasses.
Note: although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
extorch.nn.modules.operation
- BNReLU: A batch-normalization layer followed by ReLU.
- ReLUBN: A ReLU followed by a batch-normalization layer.
- ConvBNReLU: A convolution followed by batch-normalization and ReLU.
- ReLUConvBN: A ReLU followed by convolution and batch-normalization.
- class extorch.nn.modules.operation.BNReLU(in_channels: int, affine: bool = True)[source]
Bases: torch.nn.modules.module.Module
A batch-normalization layer followed by ReLU.
- Parameters
in_channels (int) – Number of channels in the input image.
affine (bool) – If True, this module has learnable affine parameters. Default: True.
- Examples::
>>> m = BNReLU(32, True)
>>> input = torch.randn(2, 32, 10, 3)
>>> output = m(input)
- forward(input: torch.Tensor) torch.Tensor [source]
Defines the computation performed at every call. Should be overridden by all subclasses.
Note: although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
- class extorch.nn.modules.operation.ConvBN(in_channels: int, out_channels: int, kernel_size: Union[int, Tuple[int, int]], stride: Union[int, Tuple[int, int]] = 1, padding: Union[str, int, Tuple[int, int]] = 0, dilation: Union[int, Tuple[int, int]] = 1, groups: int = 1, bias: bool = True, affine: bool = True, **kwargs)[source]
Bases: torch.nn.modules.module.Module
A convolution followed by batch-normalization.
- Parameters
in_channels (int) – Number of channels in the input image.
out_channels (int) – Number of channels produced by the convolution.
kernel_size (int or tuple) – Size of the convolving kernel.
stride (int or tuple, optional) – Stride of the convolution. Default: 1.
padding (int, tuple or str, optional) – Padding added to all four sides of the input. Default: 0.
dilation (int or tuple, optional) – Spacing between kernel elements. Default: 1.
groups (int, optional) – Number of blocked connections from input channels to output channels. Default: 1.
bias (bool, optional) – If True, adds a learnable bias to the output. Default: True.
affine (bool) – If True, the batch-normalization layer has learnable affine parameters. Default: True.
kwargs – Other configurations of the convolution.
- Examples::
>>> m = ConvBN(3, 10, 3, 1)
>>> input = torch.randn(2, 3, 32, 32)
>>> output = m(input)
- forward(input: torch.Tensor) torch.Tensor [source]
Defines the computation performed at every call. Should be overridden by all subclasses.
Note: although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
- class extorch.nn.modules.operation.ConvBNReLU(in_channels: int, out_channels: int, kernel_size: Union[int, Tuple[int, int]], stride: Union[int, Tuple[int, int]] = 1, padding: Union[str, int, Tuple[int, int]] = 0, dilation: Union[int, Tuple[int, int]] = 1, groups: int = 1, bias: bool = True, affine: bool = True, **kwargs)[source]
Bases: torch.nn.modules.module.Module
A convolution followed by batch-normalization and ReLU.
- Parameters
in_channels (int) – Number of channels in the input image.
out_channels (int) – Number of channels produced by the convolution.
kernel_size (int or tuple) – Size of the convolving kernel.
stride (int or tuple, optional) – Stride of the convolution. Default: 1.
padding (int, tuple or str, optional) – Padding added to all four sides of the input. Default: 0.
dilation (int or tuple, optional) – Spacing between kernel elements. Default: 1.
groups (int, optional) – Number of blocked connections from input channels to output channels. Default: 1.
bias (bool, optional) – If True, adds a learnable bias to the output. Default: True.
affine (bool) – If True, the batch-normalization layer has learnable affine parameters. Default: True.
kwargs – Other configurations of the convolution.
- Examples::
>>> m = ConvBNReLU(3, 10, 3, 1)
>>> input = torch.randn(2, 3, 32, 32)
>>> output = m(input)
- forward(input: torch.Tensor) torch.Tensor [source]
Defines the computation performed at every call. Should be overridden by all subclasses.
Note: although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
- class extorch.nn.modules.operation.ConvReLU(in_channels: int, out_channels: int, kernel_size: Union[int, Tuple[int, int]], stride: Union[int, Tuple[int, int]] = 1, padding: Union[str, int, Tuple[int, int]] = 0, dilation: Union[int, Tuple[int, int]] = 1, groups: Optional[int] = 1, bias: Optional[bool] = True, inplace: Optional[bool] = False, **kwargs)[source]
Bases: torch.nn.modules.module.Module
A convolution followed by a ReLU.
- Parameters
in_channels (int) – Number of channels in the input image.
out_channels (int) – Number of channels produced by the convolution.
kernel_size (int or tuple) – Size of the convolving kernel.
stride (int or tuple, optional) – Stride of the convolution. Default: 1.
padding (int, tuple or str, optional) – Padding added to all four sides of the input. Default: 0.
dilation (int or tuple, optional) – Spacing between kernel elements. Default: 1.
groups (Optional[int]) – Number of blocked connections from input channels to output channels. Default: 1.
bias (Optional[bool]) – If True, adds a learnable bias to the output. Default: True.
inplace (Optional[bool]) – ReLU can optionally do the operation in-place. Default: False.
kwargs – Other configurations of the convolution.
- Examples::
>>> m = ConvReLU(3, 10, 3, 1)
>>> input = torch.randn(2, 3, 32, 32)
>>> output = m(input)
- forward(input: torch.Tensor) torch.Tensor [source]
Defines the computation performed at every call. Should be overridden by all subclasses.
Note: although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
- class extorch.nn.modules.operation.Identity[source]
Bases: torch.nn.modules.module.Module
- forward(input: torch.Tensor) torch.Tensor [source]
Defines the computation performed at every call. Should be overridden by all subclasses.
Note: although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
- class extorch.nn.modules.operation.ReLUBN(in_channels: int, affine: bool = True)[source]
Bases: torch.nn.modules.module.Module
A ReLU followed by a batch-normalization layer.
- Parameters
in_channels (int) – Number of channels in the input image.
affine (bool) – If True, this module has learnable affine parameters. Default: True.
- Examples::
>>> m = ReLUBN(32, True)
>>> input = torch.randn(2, 32, 10, 3)
>>> output = m(input)
- forward(input: torch.Tensor) torch.Tensor [source]
Defines the computation performed at every call. Should be overridden by all subclasses.
Note: although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
- class extorch.nn.modules.operation.ReLUConvBN(in_channels: int, out_channels: int, kernel_size: Union[int, Tuple[int, int]], stride: Union[int, Tuple[int, int]] = 1, padding: Union[str, int, Tuple[int, int]] = 0, dilation: Union[int, Tuple[int, int]] = 1, groups: int = 1, bias: bool = True, affine: bool = True, **kwargs)[source]
Bases: torch.nn.modules.module.Module
A ReLU followed by convolution and batch-normalization.
- Parameters
in_channels (int) – Number of channels in the input image.
out_channels (int) – Number of channels produced by the convolution.
kernel_size (int or tuple) – Size of the convolving kernel.
stride (int or tuple, optional) – Stride of the convolution. Default: 1.
padding (int, tuple or str, optional) – Padding added to all four sides of the input. Default: 0.
dilation (int or tuple, optional) – Spacing between kernel elements. Default: 1.
groups (int, optional) – Number of blocked connections from input channels to output channels. Default: 1.
bias (bool, optional) – If True, adds a learnable bias to the output. Default: True.
affine (bool) – If True, the batch-normalization layer has learnable affine parameters. Default: True.
kwargs – Other configurations of the convolution.
- Examples::
>>> m = ReLUConvBN(3, 10, 3, 1)
>>> input = torch.randn(2, 3, 32, 32)
>>> output = m(input)
- forward(input: torch.Tensor) torch.Tensor [source]
Defines the computation performed at every call. Should be overridden by all subclasses.
Note: although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
extorch.nn.modules.block
- ResNetBasicBlock: ResNet basic block (Link).
- ResNetBottleneckBlock: ResNet bottleneck block (Link).
- class extorch.nn.modules.block.ResNetBasicBlock(in_channels: int, out_channels: int, stride: int, kernel_size: int = 3, affine: bool = True)[source]
Bases: torch.nn.modules.module.Module
ResNet basic block (Link).
- Parameters
in_channels (int) – Number of channels in the input image.
out_channels (int) – Number of channels produced by the convolution.
stride (int) – Stride of the convolution.
kernel_size (int) – Size of the convolving kernel. Default: 3.
affine (bool) – If True, the batch-normalization layer has learnable affine parameters. Default: True.
- Examples::
>>> m = ResNetBasicBlock(3, 10, 2, 3, True)
>>> input = torch.randn(3, 3, 32, 32)
>>> output = m(input)
- expansion = 1
- forward(input: torch.Tensor) torch.Tensor [source]
Defines the computation performed at every call. Should be overridden by all subclasses.
Note: although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
- class extorch.nn.modules.block.ResNetBottleneckBlock(in_channels: int, out_channels: int, stride: int, affine: bool = True)[source]
Bases: torch.nn.modules.module.Module
ResNet bottleneck block (Link).
- Parameters
in_channels (int) – Number of channels in the input image.
out_channels (int) – Number of channels produced by the convolution.
stride (int) – Stride of the convolution.
affine (bool) – If True, the batch-normalization layer has learnable affine parameters. Default: True.
- Examples::
>>> m = ResNetBottleneckBlock(10, 10, 2, True)
>>> input = torch.randn(2, 10, 32, 32)
>>> output = m(input)
- expansion = 4
- forward(input: torch.Tensor) torch.Tensor [source]
Defines the computation performed at every call. Should be overridden by all subclasses.
Note: although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
extorch.nn.init
- extorch.nn.init.normal_(module: torch.nn.modules.module.Module, conv_mean: float = 0.0, conv_std: float = 1.0, bn_mean: float = 0.0, bn_std: float = 1.0, linear_mean: float = 0.0, linear_std: float = 1.0) None [source]
Initialize the module with values drawn from the normal distribution.
- Parameters
module (nn.Module) – A pytorch module.
conv_mean (float) – The mean of the normal distribution for convolution.
conv_std (float) – The standard deviation of the normal distribution for convolution.
bn_mean (float) – The mean of the normal distribution for batch-normalization.
bn_std (float) – The standard deviation of the normal distribution for batch-normalization.
linear_mean (float) – The mean of the normal distribution for linear layers.
linear_std (float) – The standard deviation of the normal distribution for linear layers.
- Examples::
>>> import torch.nn as nn
>>> module = nn.Sequential(nn.Conv2d(5, 10, 3), nn.BatchNorm2d(10), nn.ReLU())
>>> module.apply(normal_)
extorch.nn.utils
- class extorch.nn.utils.WrapperModel(module: torch.nn.modules.module.Module, mean: torch.Tensor, std: torch.Tensor)[source]
Bases: torch.nn.modules.module.Module
A wrapper model for computer vision tasks. It normalizes the input before the forward pass.
- Parameters
module (nn.Module) – A network with input range [0., 1.].
mean (Tensor) – The mean value used for input transforms.
std (Tensor) – The standard deviation used for input transforms.
- forward(input: torch.Tensor) torch.Tensor [source]
Defines the computation performed at every call. Should be overridden by all subclasses.
Note: although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
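A usage sketch (not from the source docs):
- Examples::
>>> import torch.nn as nn
>>> net = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU())
>>> mean = torch.tensor([0.485, 0.456, 0.406])
>>> std = torch.tensor([0.229, 0.224, 0.225])
>>> model = WrapperModel(net, mean, std)
>>> output = model(torch.rand(2, 3, 32, 32))  # inputs in [0., 1.]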
- extorch.nn.utils.net_device(module: torch.nn.modules.module.Module) torch.device [source]
Get current device of the network, assuming all weights of the network are on the same device.
- Parameters
module (nn.Module) – The network.
- Returns
The device.
- Return type
torch.device
- Examples::
>>> module = nn.Conv2d(3, 3, 3)
>>> device = net_device(module)  # "cpu"
- extorch.nn.utils.use_params(module: torch.nn.modules.module.Module, params: collections.OrderedDict) None [source]
Temporarily replace the parameters of the module with the given ones, and recover the old parameters when the context exits.
- Parameters
module (nn.Module) – The targeted module.
params (OrderedDict) – The given parameters.
Examples
>>> m = nn.Conv2d(1, 10, 3)
>>> params = m.state_dict()
>>> for p in params.values():
...     p.data = torch.zeros_like(p.data)
>>> input = torch.ones((2, 1, 10, 10))
>>> with use_params(m, params):
...     output = m(input)
extorch.nn.optim
extorch.nn.optim.lr_scheduler
- CosineWithRestarts: Cosine annealing with restarts (Link).
- class extorch.nn.optim.lr_scheduler.CosineWithRestarts(optimizer: torch.optim.optimizer.Optimizer, T_max: int, eta_min: float = 0.0, last_epoch: int = - 1, factor: float = 1.0)[source]
Bases: torch.optim.lr_scheduler._LRScheduler
Cosine annealing with restarts (Link).
- Parameters
optimizer (Optimizer) – The optimizer.
T_max (int) – The maximum number of iterations within the first cycle.
eta_min (float) – The minimum learning rate. Default: 0.
last_epoch (int) – The index of the last epoch. This is used when restarting. Default: -1.
factor (float) – The factor by which the cycle length (T_max) increases after each restart. Default: 1.
- get_lr() List[float] [source]
Get updated learning rate.
- Returns
A list of current learning rates.
- Return type
lrs (List[float])
Note: we need to check whether this is the first time self.get_lr() was called, since torch.optim.lr_scheduler._LRScheduler calls self.get_lr() when first initialized, but the learning rate should remain unchanged for the first epoch.
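A usage sketch (not from the source docs):
- Examples::
>>> import torch
>>> model = torch.nn.Linear(10, 2)
>>> optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
>>> scheduler = CosineWithRestarts(optimizer, T_max=10, eta_min=0., factor=2.)
>>> for epoch in range(30):
...     optimizer.step()  # stand-in for one training epoch
...     scheduler.step()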
extorch.adversarial
extorch.adversarial.base
- BaseAdversary: Base adversary for generating adversarial examples.
- class extorch.adversarial.base.BaseAdversary(use_eval_mode: bool = False)[source]
Bases: torch.nn.modules.module.Module
Base adversary for generating adversarial examples.
- Parameters
use_eval_mode (bool) – Whether to use eval mode while generating adversarial examples. Default: False.
- forward(net: torch.nn.modules.module.Module, input: torch.Tensor, target: torch.Tensor, output: Optional[torch.Tensor] = None) torch.Tensor [source]
Generate adversarial examples.
- Parameters
net (nn.Module) – The victim network.
input (Tensor) – Origin input.
target (Tensor) – Label of the input.
output (Tensor) – Origin output. Default: None.
- Returns
The generated adversarial examples.
- Return type
adv_examples (Tensor)
- abstract generate_adv(net: torch.nn.modules.module.Module, input: torch.Tensor, target: torch.Tensor, output: torch.Tensor) torch.Tensor [source]
Adversarial example generation.
- Parameters
net (nn.Module) – The victim network.
input (Tensor) – Origin input.
target (Tensor) – Label of the input.
output (Tensor) – Origin output.
- Returns
The generated adversarial examples.
- Return type
adv_examples (Tensor)
- training: bool
extorch.adversarial.gradient_based
- GradientBasedAdversary: Gradient-based adversary.
- PGDAdversary: Projected Gradient Descent (PGD, Link) adversary.
- CustomizedPGDAdversary: Customized Projected Gradient Descent (PGD, Link) adversary.
- FGSMAdversary: Fast Gradient Sign Method (FGSM, Link) adversary.
- MDIFGSMAdversary: Momentum Iterative Fast Gradient Sign Method with Input Diversity Method (M-DI-FGSM, Link).
- MIFGSMAdversary: Momentum Iterative Fast Gradient Sign Method (MI-FGSM, Link).
- IFGSMAdversary: Iterative Fast Gradient Sign Method (I-FGSM, Link).
- DiversityLayer: Diversity input layer for the M-DI-FGSM (Link) adversary.
- class extorch.adversarial.gradient_based.CustomizedPGDAdversary(epsilon: float, n_step: int, step_size: float, rand_init: bool, mean: List[float], std: List[float], criterion: Optional[torch.nn.modules.loss._Loss] = CrossEntropyLoss(), use_eval_mode: bool = False)[source]
Bases: extorch.adversarial.gradient_based.PGDAdversary
Customized Projected Gradient Descent (PGD, Link) adversary.
- generate_adv(net: torch.nn.modules.module.Module, input: torch.Tensor, target: torch.Tensor, output: torch.Tensor, epsilon_c: float) torch.Tensor [source]
- Parameters
epsilon_c (float) –
- training: bool
- class extorch.adversarial.gradient_based.DiversityLayer(target: int, p: float = 0.5)[source]
Bases: torch.nn.modules.module.Module
Diversity input layer for the M-DI-FGSM (Link) adversary.
- Parameters
target (int) – Target reshape size.
p (float) – Probability to apply the transformation. Default: 0.5.
- forward(input: torch.Tensor) torch.Tensor [source]
Defines the computation performed at every call. Should be overridden by all subclasses.
Note: although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
- class extorch.adversarial.gradient_based.FGSMAdversary(epsilon: float, rand_init: bool, mean: List[float], std: List[float], criterion: Optional[torch.nn.modules.loss._Loss] = CrossEntropyLoss(), use_eval_mode: bool = False)[source]
Bases: extorch.adversarial.gradient_based.PGDAdversary
Fast Gradient Sign Method (FGSM, Link) adversary.
- Parameters
epsilon (float) – Maximum distortion of adversarial example compared to origin input.
rand_init (bool) – Whether add random perturbation to origin input before formal attack.
mean (List[float]) – Sequence of means for each channel while normalizing the origin input.
std (List[float]) – Sequence of standard deviations for each channel while normalizing the origin input.
criterion (Optional[_Loss]) – Criterion to calculate the loss. Default: nn.CrossEntropyLoss.
use_eval_mode (bool) – Whether to use eval mode of the network while running the attack. Default: False.
- training: bool
- class extorch.adversarial.gradient_based.GradientBasedAdversary(criterion: Optional[torch.nn.modules.loss._Loss] = CrossEntropyLoss(), use_eval_mode: bool = False)[source]
Bases: extorch.adversarial.base.BaseAdversary
Gradient-based adversary.
- Parameters
criterion (Optional[_Loss]) – Criterion to calculate the loss. Default: nn.CrossEntropyLoss.
use_eval_mode (bool) – Whether to use eval mode of the network while running the attack. Default: False.
- training: bool
- class extorch.adversarial.gradient_based.IFGSMAdversary(epsilon: float, n_step: int, rand_init: bool, mean: List[float], std: List[float], step_size: Optional[float] = None, criterion: Optional[torch.nn.modules.loss._Loss] = CrossEntropyLoss(), use_eval_mode: bool = False)[source]
Bases: extorch.adversarial.gradient_based.MIFGSMAdversary
Iterative Fast Gradient Sign Method (I-FGSM, Link).
- Parameters
epsilon (float) – Maximum distortion of adversarial example compared to origin input.
n_step (int) – Number of attack iterations.
rand_init (bool) – Whether add random perturbation to origin input before formal attack.
mean (List[float]) – Sequence of means for each channel while normalizing the origin input.
std (List[float]) – Sequence of standard deviations for each channel while normalizing the origin input.
step_size (Optional[float]) – Step size for each attack iteration. If not specified, step_size = epsilon / n_step. Default: None.
criterion (Optional[_Loss]) – Criterion to calculate the loss. Default: nn.CrossEntropyLoss.
use_eval_mode (bool) – Whether to use eval mode of the network while running the attack. Default: False.
- training: bool
- class extorch.adversarial.gradient_based.MDIFGSMAdversary(epsilon: float, n_step: int, momentum: float, rand_init: bool, mean: List[float], std: List[float], diversity_p: float, target_size: int, step_size: Optional[float] = None, criterion: Optional[torch.nn.modules.loss._Loss] = CrossEntropyLoss(), use_eval_mode: bool = False)[source]
Bases: extorch.adversarial.gradient_based.PGDAdversary
Momentum Iterative Fast Gradient Sign Method with Input Diversity Method (M-DI-FGSM, Link).
- Parameters
epsilon (float) – Maximum distortion of adversarial example compared to origin input.
n_step (int) – Number of attack iterations.
momentum (float) – Gradient momentum.
rand_init (bool) – Whether add random perturbation to origin input before formal attack.
mean (List[float]) – Sequence of means for each channel while normalizing the origin input.
std (List[float]) – Sequence of standard deviations for each channel while normalizing the origin input.
diversity_p (float) – Probability to apply the input diversity transformation.
target_size (int) – Randomly resized shape in the diversity input layer.
step_size (Optional[float]) – Step size for each attack iteration. If not specified, step_size = epsilon / n_step. Default: None.
criterion (Optional[_Loss]) – Criterion to calculate the loss. Default: nn.CrossEntropyLoss.
use_eval_mode (bool) – Whether to use eval mode of the network while running the attack. Default: False.
- generate_adv(net: torch.nn.modules.module.Module, input: torch.Tensor, target: torch.Tensor, output: torch.Tensor) torch.Tensor [source]
Adversarial example generation.
- Parameters
net (nn.Module) – The victim network.
input (Tensor) – Origin input.
target (Tensor) – Label of the input.
output (Tensor) – Origin output.
- Returns
The generated adversarial examples.
- Return type
adv_examples (Tensor)
- training: bool
- class extorch.adversarial.gradient_based.MIFGSMAdversary(epsilon: float, n_step: int, momentum: float, rand_init: bool, mean: List[float], std: List[float], step_size: Optional[float] = None, criterion: Optional[torch.nn.modules.loss._Loss] = CrossEntropyLoss(), use_eval_mode: bool = False)[source]
Bases: extorch.adversarial.gradient_based.MDIFGSMAdversary
Momentum Iterative Fast Gradient Sign Method (MI-FGSM, Link).
- Parameters
epsilon (float) – Maximum distortion of adversarial example compared to origin input.
n_step (int) – Number of attack iterations.
momentum (float) – Gradient momentum.
rand_init (bool) – Whether add random perturbation to origin input before formal attack.
mean (List[float]) – Sequence of means for each channel while normalizing the origin input.
std (List[float]) – Sequence of standard deviations for each channel while normalizing the origin input.
step_size (Optional[float]) – Step size for each attack iteration. If not specified, step_size = epsilon / n_step. Default: None.
criterion (Optional[_Loss]) – Criterion to calculate the loss. Default: nn.CrossEntropyLoss.
use_eval_mode (bool) – Whether to use eval mode of the network while running the attack. Default: False.
- training: bool
- class extorch.adversarial.gradient_based.PGDAdversary(epsilon: float, n_step: int, step_size: float, rand_init: bool, mean: List[float], std: List[float], criterion: Optional[torch.nn.modules.loss._Loss] = CrossEntropyLoss(), use_eval_mode: bool = False)[source]
Bases: extorch.adversarial.gradient_based.GradientBasedAdversary
Projected Gradient Descent (PGD, Link) adversary.
- Parameters
epsilon (float) – Maximum distortion of adversarial example compared to origin input.
n_step (int) – Number of attack iterations.
step_size (float) – Step size for each attack iteration.
rand_init (bool) – Whether add random perturbation to origin input before formal attack.
mean (List[float]) – Sequence of means for each channel while normalizing the origin input.
std (List[float]) – Sequence of standard deviations for each channel while normalizing the origin input.
criterion (Optional[_Loss]) – Criterion to calculate the loss. Default: nn.CrossEntropyLoss.
use_eval_mode (bool) – Whether to use eval mode of the network while running the attack. Default: False.
- generate_adv(net: torch.nn.modules.module.Module, input: torch.Tensor, target: torch.Tensor, output: torch.Tensor) torch.Tensor [source]
Adversarial example generation.
- Parameters
net (nn.Module) – The victim network.
input (Tensor) – Origin input.
target (Tensor) – Label of the input.
output (Tensor) – Origin output.
- Returns
The generated adversarial examples.
- Return type
adv_examples (Tensor)
- training: bool
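A usage sketch (not from the source docs), following the documented constructor and forward signatures; the toy net and hyperparameters are illustrative only:
- Examples::
>>> import torch
>>> import torch.nn as nn
>>> net = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
>>> adversary = PGDAdversary(epsilon=8 / 255, n_step=7, step_size=2 / 255,
...                          rand_init=True, mean=[0.5] * 3, std=[0.5] * 3)
>>> input = torch.rand(4, 3, 32, 32)
>>> target = torch.randint(0, 10, (4,))
>>> adv_examples = adversary(net, input, target)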