Failed to generate adversarial examples using trained NSGA-Net PyTorch models
I used NSGA-Net neural architecture search to generate and train several architectures, and I am now trying to generate PGD adversarial examples with the trained PyTorch models. I have tried both Adversarial Robustness Toolbox 1.3 (ART) and torchattacks 2.4, and I get the same error with both.
These lines show the core of my code and what I am trying to achieve here:
from torchattacks import PGD

# net is my trained NSGA-Net PyTorch model
# Defining the PGD attack
pgd_attack = PGD(net, eps=4 / 255, alpha=2 / 255, steps=3)

# Creating adversarial examples from the validation data with the defined PGD attack
for images, labels in valid_data:
    images = pgd_attack(images, labels).cuda()
    outputs = net(images)
The error I get looks like this:
Traceback (most recent call last):
File "torch-attacks.py", line 296, in <module>
main()
File "torch-attacks.py", line 254, in main
images = pgd_attack(images, labels).cuda()
File "\Anaconda3\envs\GPU\lib\site-packages\torchattacks\attack.py", line 114, in __call__
images = self.forward(*input, **kwargs)
File "\Anaconda3\envs\GPU\lib\site-packages\torchattacks\attacks\pgd.py", line 57, in forward
outputs = self.model(adv_images)
File "\envs\GPU\lib\site-packages\torch\nn\modules\module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "\codes\NSGA\nsga-net\models\macro_models.py", line 79, in forward
x = self.gap(self.model(x))
File "\Anaconda3\envs\GPU\lib\site-packages\torch\nn\modules\module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "\Anaconda3\envs\GPU\lib\site-packages\torch\nn\modules\container.py", line 100, in forward
input = module(input)
File "\Anaconda3\envs\GPU\lib\site-packages\torch\nn\modules\module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "\codes\NSGA\nsga-net\models\macro_decoder.py", line 978, in forward
x = self.first_conv(x)
File "\Anaconda3\envs\GPU\lib\site-packages\torch\nn\modules\module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "\Anaconda3\envs\GPU\lib\site-packages\torch\nn\modules\conv.py", line 345, in forward
return self.conv2d_forward(input, self.weight)
File "\Anaconda3\envs\GPU\lib\site-packages\torch\nn\modules\conv.py", line 342, in conv2d_forward
self.padding, self.dilation, self.groups)
RuntimeError: Expected object of scalar type Float but got scalar type Double for argument #2 'weight' in call to _thnn_conv2d_forward
I used the same code with a simple PyTorch model and it worked, but here I am using NSGA-Net, so I did not design the model myself. I also tried calling .float() on both the model and the inputs, but I still get the same error.
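For reference, here is a minimal standalone snippet that reproduces the same dtype mismatch. It assumes the Double tensors come from float64 numpy data; it is only an illustration, not my actual data pipeline:

import numpy as np
import torch
import torch.nn as nn

# numpy defaults to float64, and torch.from_numpy preserves that dtype,
# so the resulting tensor is torch.float64 (Double)
x = torch.from_numpy(np.random.rand(1, 3, 32, 32))

conv = nn.Conv2d(3, 16, kernel_size=3)  # Conv2d weights are torch.float32 (Float)

try:
    conv(x)  # raises the same "Float vs Double" RuntimeError
except RuntimeError as e:
    print(e)

outputs = conv(x.float())  # casting the input to float32 avoids the error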
Keep in mind that I only have access to the following files:
- torch-attacks.py
- macro_models.py
- macro_decoder.py
You should convert images to the required type (images.float() in your case). The labels must not be converted to any float type; they can be int or long tensors.
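For example, here is a minimal sketch of how the loop could look (assuming valid_data is a standard DataLoader and net is already on the GPU in float32; I have not run this against the NSGA-Net code itself):

for images, labels in valid_data:
    # cast the images to float32 before they reach the attack;
    # keep the labels as integer (long) class indices
    images = images.float().cuda()
    labels = labels.cuda()

    adv_images = pgd_attack(images, labels)  # adversarial batch on the same device
    outputs = net(adv_images)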