extorch.adversarial.gradient_based

GradientBasedAdversary

Gradient-based adversary.

PGDAdversary

Projected Gradient Descent (PGD, `Link`_) adversary.

CustomizedPGDAdversary

Customized Projected Gradient Descent (PGD, `Link`_) adversary.

FGSMAdversary

Fast Gradient Sign Method (FGSM, `Link`_) adversary.

MDIFGSMAdversary

Momentum Iterative Fast Gradient Sign Method with Input Diversity (M-DI-FGSM, `Link`_).

MIFGSMAdversary

Momentum Iterative Fast Gradient Sign Method (MI-FGSM, `Link`_).

IFGSMAdversary

Iterative Fast Gradient Sign Method (I-FGSM, `Link`_).

DiversityLayer

Diversity input layer for the M-DI-FGSM (`Link`_) adversary.

class extorch.adversarial.gradient_based.CustomizedPGDAdversary(epsilon: float, n_step: int, step_size: float, rand_init: bool, mean: List[float], std: List[float], criterion: Optional[torch.nn.modules.loss._Loss] = CrossEntropyLoss(), use_eval_mode: bool = False)[source]

Bases: extorch.adversarial.gradient_based.PGDAdversary

Customized Projected Gradient Descent (PGD, `Link`_) adversary.

generate_adv(net: torch.nn.modules.module.Module, input: torch.Tensor, target: torch.Tensor, output: torch.Tensor, epsilon_c: float) torch.Tensor[source]

Adversarial example generation with a customized distortion bound.

Parameters
  • net (nn.Module) – The victim network.

  • input (Tensor) – Origin input.

  • target (Tensor) – Label of the input.

  • output (Tensor) – Origin output.

  • epsilon_c (float) – Customized maximum distortion for this generation.

Returns

The generated adversarial examples.

Return type

adv_examples (Tensor)

training: bool
class extorch.adversarial.gradient_based.DiversityLayer(target: int, p: float = 0.5)[source]

Bases: torch.nn.modules.module.Module

Diversity input layer for the M-DI-FGSM (`Link`_) adversary.

Parameters
  • target (int) – Target size to which the input is reshaped.

  • p (float) – Probability to apply the transformation. Default: 0.5.

forward(input: torch.Tensor) torch.Tensor[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
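The diversity transformation can be sketched in plain NumPy. This is an illustrative re-implementation, not the extorch code: `nearest_resize`, the zero-padding scheme, and the square-input assumption are all simplifications. With probability p the input is resized to a random intermediate size and randomly zero-padded up to the target size; otherwise it is returned unchanged.

```python
import random
import numpy as np

def nearest_resize(img: np.ndarray, new_hw: int) -> np.ndarray:
    """Nearest-neighbor resize of a (C, H, W) array to (C, new_hw, new_hw)."""
    _, h, w = img.shape
    rows = (np.arange(new_hw) * h / new_hw).astype(int)
    cols = (np.arange(new_hw) * w / new_hw).astype(int)
    return img[:, rows][:, :, cols]

def diversity_transform(img: np.ndarray, target: int, p: float = 0.5) -> np.ndarray:
    """With probability p, resize img to a random size in [h, target) and
    zero-pad it at a random offset to (C, target, target); otherwise return
    img unchanged. Assumes a square input with h == w and target > h."""
    if random.random() >= p:
        return img
    c, h, _ = img.shape
    rnd = random.randint(h, target - 1)          # random intermediate size
    resized = nearest_resize(img, rnd)
    pad = target - rnd
    top, left = random.randint(0, pad), random.randint(0, pad)
    out = np.zeros((c, target, target), dtype=img.dtype)
    out[:, top:top + rnd, left:left + rnd] = resized
    return out
```

With `p=1.0` the transformation is always applied and the output always has the target spatial size; with `p=0.0` the input passes through untouched.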
class extorch.adversarial.gradient_based.FGSMAdversary(epsilon: float, rand_init: bool, mean: List[float], std: List[float], criterion: Optional[torch.nn.modules.loss._Loss] = CrossEntropyLoss(), use_eval_mode: bool = False)[source]

Bases: extorch.adversarial.gradient_based.PGDAdversary

Fast Gradient Sign Method (FGSM, `Link`_) adversary.

Parameters
  • epsilon (float) – Maximum distortion of the adversarial example compared to the origin input.

  • rand_init (bool) – Whether to add a random perturbation to the origin input before the formal attack.

  • mean (List[float]) – Sequence of per-channel means used to normalize the origin input.

  • std (List[float]) – Sequence of per-channel standard deviations used to normalize the origin input.

  • criterion (Optional[_Loss]) – Criterion to calculate the loss. Default: nn.CrossEntropyLoss.

  • use_eval_mode (bool) – Whether to use the eval mode of the network while running the attack. Default: False.

training: bool
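The FGSM update itself is a single signed-gradient step of size epsilon. A minimal NumPy sketch of that core update (the gradient is assumed to be precomputed, and the clip bounds are illustrative; this is not the extorch implementation, which also handles normalization via mean/std):

```python
import numpy as np

def fgsm_step(x: np.ndarray, grad: np.ndarray, epsilon: float,
              clip_min: float = 0.0, clip_max: float = 1.0) -> np.ndarray:
    """x_adv = clip(x + epsilon * sign(dL/dx)): one loss-maximizing step."""
    return np.clip(x + epsilon * np.sign(grad), clip_min, clip_max)

x = np.array([0.2, 0.5])
grad = np.array([-1.0, 3.0])
fgsm_step(x, grad, epsilon=0.1)  # → array([0.1, 0.6])
```

Only the sign of the gradient matters, so each coordinate moves by exactly ±epsilon (before clipping to the valid input range).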
class extorch.adversarial.gradient_based.GradientBasedAdversary(criterion: Optional[torch.nn.modules.loss._Loss] = CrossEntropyLoss(), use_eval_mode: bool = False)[source]

Bases: extorch.adversarial.base.BaseAdversary

Gradient-based adversary.

Parameters
  • criterion (Optional[_Loss]) – Criterion to calculate the loss. Default: nn.CrossEntropyLoss.

  • use_eval_mode (bool) – Whether to use the eval mode of the network while running the attack. Default: False.

training: bool
class extorch.adversarial.gradient_based.IFGSMAdversary(epsilon: float, n_step: int, rand_init: bool, mean: List[float], std: List[float], step_size: Optional[float] = None, criterion: Optional[torch.nn.modules.loss._Loss] = CrossEntropyLoss(), use_eval_mode: bool = False)[source]

Bases: extorch.adversarial.gradient_based.MIFGSMAdversary

Iterative Fast Gradient Sign Method (I-FGSM, `Link`_).

Parameters
  • epsilon (float) – Maximum distortion of the adversarial example compared to the origin input.

  • n_step (int) – Number of attack iterations.

  • rand_init (bool) – Whether to add a random perturbation to the origin input before the formal attack.

  • mean (List[float]) – Sequence of per-channel means used to normalize the origin input.

  • std (List[float]) – Sequence of per-channel standard deviations used to normalize the origin input.

  • step_size (Optional[float]) – Step size for each attack iteration. If not specified, step_size = epsilon / n_step. Default: None.

  • criterion (Optional[_Loss]) – Criterion to calculate the loss. Default: nn.CrossEntropyLoss.

  • use_eval_mode (bool) – Whether to use the eval mode of the network while running the attack. Default: False.

training: bool
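I-FGSM repeats the FGSM sign step while keeping the iterate inside the L-infinity ball of radius epsilon around the origin input. A NumPy sketch showing the documented `step_size = epsilon / n_step` default (`grad_fn` is a hypothetical callable standing in for a loss-gradient computation; not the extorch implementation):

```python
import numpy as np

def ifgsm(x, grad_fn, epsilon, n_step, step_size=None,
          clip_min=0.0, clip_max=1.0):
    """Iterative FGSM: repeated sign steps, projected back to the
    L_inf epsilon-ball around x after each iteration."""
    if step_size is None:
        step_size = epsilon / n_step  # the documented default
    x_adv = x.copy()
    for _ in range(n_step):
        x_adv = x_adv + step_size * np.sign(grad_fn(x_adv))
        x_adv = np.clip(x_adv, x - epsilon, x + epsilon)  # stay in the ball
        x_adv = np.clip(x_adv, clip_min, clip_max)        # stay a valid input
    return x_adv
```

With the default step size, n_step steps of epsilon / n_step exactly exhaust the distortion budget when every gradient points the same way.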
class extorch.adversarial.gradient_based.MDIFGSMAdversary(epsilon: float, n_step: int, momentum: float, rand_init: bool, mean: List[float], std: List[float], diversity_p: float, target_size: int, step_size: Optional[float] = None, criterion: Optional[torch.nn.modules.loss._Loss] = CrossEntropyLoss(), use_eval_mode: bool = False)[source]

Bases: extorch.adversarial.gradient_based.PGDAdversary

Momentum Iterative Fast Gradient Sign Method with Input Diversity (M-DI-FGSM, `Link`_).

Parameters
  • epsilon (float) – Maximum distortion of the adversarial example compared to the origin input.

  • n_step (int) – Number of attack iterations.

  • momentum (float) – Gradient momentum.

  • rand_init (bool) – Whether to add a random perturbation to the origin input before the formal attack.

  • mean (List[float]) – Sequence of per-channel means used to normalize the origin input.

  • std (List[float]) – Sequence of per-channel standard deviations used to normalize the origin input.

  • diversity_p (float) – Probability to apply the input diversity transformation.

  • target_size (int) – Target size of the random resize in the diversity input layer.

  • step_size (Optional[float]) – Step size for each attack iteration. If not specified, step_size = epsilon / n_step. Default: None.

  • criterion (Optional[_Loss]) – Criterion to calculate the loss. Default: nn.CrossEntropyLoss.

  • use_eval_mode (bool) – Whether to use the eval mode of the network while running the attack. Default: False.

generate_adv(net: torch.nn.modules.module.Module, input: torch.Tensor, target: torch.Tensor, output: torch.Tensor) torch.Tensor[source]

Adversarial example generation.

Parameters
  • net (nn.Module) – The victim network.

  • input (Tensor) – Origin input.

  • target (Tensor) – Label of the input.

  • output (Tensor) – Origin output.

Returns

The generated adversarial examples.

Return type

adv_examples (Tensor)

training: bool
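The distinguishing feature of M-DI-FGSM is that the gradient in each momentum iteration is computed on a diversity-transformed copy of the current iterate. A NumPy sketch of one iteration (`grad_fn` and `transform` are hypothetical callables; this is illustrative, not the extorch implementation):

```python
import numpy as np

def mdifgsm_step(g, x_adv, x, grad_fn, transform, momentum, step_size, epsilon):
    """One M-DI-FGSM iteration: the gradient is taken on the diversified
    input, accumulated into the L1-normalized momentum buffer g, then a
    sign step is projected back to the epsilon-ball around x."""
    grad = grad_fn(transform(x_adv))          # gradient on the diversified input
    g = momentum * g + grad / max(np.sum(np.abs(grad)), 1e-12)
    x_adv = np.clip(x_adv + step_size * np.sign(g), x - epsilon, x + epsilon)
    return g, x_adv
```

With `transform` as the identity this reduces to a plain MI-FGSM iteration, which matches the class hierarchy here (MIFGSMAdversary derives from MDIFGSMAdversary).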
class extorch.adversarial.gradient_based.MIFGSMAdversary(epsilon: float, n_step: int, momentum: float, rand_init: bool, mean: List[float], std: List[float], step_size: Optional[float] = None, criterion: Optional[torch.nn.modules.loss._Loss] = CrossEntropyLoss(), use_eval_mode: bool = False)[source]

Bases: extorch.adversarial.gradient_based.MDIFGSMAdversary

Momentum Iterative Fast Gradient Sign Method (MI-FGSM, `Link`_).

Parameters
  • epsilon (float) – Maximum distortion of the adversarial example compared to the origin input.

  • n_step (int) – Number of attack iterations.

  • momentum (float) – Gradient momentum.

  • rand_init (bool) – Whether to add a random perturbation to the origin input before the formal attack.

  • mean (List[float]) – Sequence of per-channel means used to normalize the origin input.

  • std (List[float]) – Sequence of per-channel standard deviations used to normalize the origin input.

  • step_size (Optional[float]) – Step size for each attack iteration. If not specified, step_size = epsilon / n_step. Default: None.

  • criterion (Optional[_Loss]) – Criterion to calculate the loss. Default: nn.CrossEntropyLoss.

  • use_eval_mode (bool) – Whether to use the eval mode of the network while running the attack. Default: False.

training: bool
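MI-FGSM stabilizes the iterative attack by accumulating L1-normalized gradients into a momentum buffer and stepping along the sign of that buffer. A minimal NumPy sketch of the update rule (the gradient is assumed precomputed; this is illustrative, not the extorch code):

```python
import numpy as np

def mifgsm_update(g, grad, x_adv, x, momentum, step_size, epsilon):
    """One MI-FGSM step: g <- momentum * g + grad / ||grad||_1,
    then x_adv <- clip(x_adv + step_size * sign(g)) within the
    epsilon-ball around x. The 1e-12 floor guards against a zero gradient."""
    g = momentum * g + grad / max(np.sum(np.abs(grad)), 1e-12)
    x_adv = np.clip(x_adv + step_size * np.sign(g), x - epsilon, x + epsilon)
    return g, x_adv
```

Normalizing each gradient by its L1 norm keeps the momentum buffer scale-free, so the direction of accumulated gradients, not their raw magnitude, drives the sign step.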
class extorch.adversarial.gradient_based.PGDAdversary(epsilon: float, n_step: int, step_size: float, rand_init: bool, mean: List[float], std: List[float], criterion: Optional[torch.nn.modules.loss._Loss] = CrossEntropyLoss(), use_eval_mode: bool = False)[source]

Bases: extorch.adversarial.gradient_based.GradientBasedAdversary

Projected Gradient Descent (PGD, `Link`_) adversary.

Parameters
  • epsilon (float) – Maximum distortion of the adversarial example compared to the origin input.

  • n_step (int) – Number of attack iterations.

  • step_size (float) – Step size for each attack iteration.

  • rand_init (bool) – Whether to add a random perturbation to the origin input before the formal attack.

  • mean (List[float]) – Sequence of per-channel means used to normalize the origin input.

  • std (List[float]) – Sequence of per-channel standard deviations used to normalize the origin input.

  • criterion (Optional[_Loss]) – Criterion to calculate the loss. Default: nn.CrossEntropyLoss.

  • use_eval_mode (bool) – Whether to use the eval mode of the network while running the attack. Default: False.

generate_adv(net: torch.nn.modules.module.Module, input: torch.Tensor, target: torch.Tensor, output: torch.Tensor) torch.Tensor[source]

Adversarial example generation.

Parameters
  • net (nn.Module) – The victim network.

  • input (Tensor) – Origin input.

  • target (Tensor) – Label of the input.

  • output (Tensor) – Origin output.

Returns

The generated adversarial examples.

Return type

adv_examples (Tensor)

training: bool
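Putting the pieces above together, the PGD loop is: an optional random start inside the epsilon-ball (the `rand_init` flag), then repeated sign-gradient steps each followed by projection back onto the ball. A self-contained NumPy sketch (`grad_fn` is a hypothetical callable standing in for the loss gradient; not the extorch implementation, which also applies mean/std normalization):

```python
import numpy as np

rng = np.random.default_rng(0)

def pgd(x, grad_fn, epsilon, n_step, step_size, rand_init=True,
        clip_min=0.0, clip_max=1.0):
    """PGD: optional uniform random start in the L_inf epsilon-ball,
    then iterated sign-gradient ascent with projection onto the ball."""
    if rand_init:
        x_adv = x + rng.uniform(-epsilon, epsilon, size=x.shape)
    else:
        x_adv = x.copy()
    for _ in range(n_step):
        x_adv = x_adv + step_size * np.sign(grad_fn(x_adv))
        x_adv = np.clip(x_adv, x - epsilon, x + epsilon)  # project to the ball
        x_adv = np.clip(x_adv, clip_min, clip_max)        # keep a valid input
    return x_adv
```

The projection after every step is what distinguishes PGD from simply accumulating FGSM steps: the result is always a feasible point within distortion epsilon of the origin input.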