A bad pen is better than a good memory

Posted by Domain knowledge on January 4, 2022

Paper notes

Y. Xu, B. Du and L. Zhang, “Self-Attention Context Network: Addressing the Threat of Adversarial Attacks for Hyperspectral Image Classification,” in IEEE Transactions on Image Processing, vol. 30, pp. 8671-8685, 2021, doi: 10.1109/TIP.2021.3118977.

This paper proposes a self-attention network with global context to improve the robustness of hyperspectral image classification. The overall idea is to exploit the image's spatial information: each pixel's loss function also takes in the losses of the surrounding pixels, so that a perturbation has to be much larger before it can make the classification go wrong.
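These notes describe the mechanism but not the paper's exact formulation, so the following is only a rough PyTorch sketch of the idea of mixing a pixel's loss with its neighbors'; the 4-neighborhood kernel and the `weight` hyperparameter are my assumptions, not taken from the paper.

import torch
import torch.nn.functional as F

def context_loss(logits, labels, weight=0.5):
    # per-pixel cross-entropy, kept unreduced so neighbor losses can be mixed in
    # logits: (B, C, H, W), labels: (B, H, W); `weight` is a made-up hyperparameter
    ce = F.cross_entropy(logits, labels, reduction='none')  # (B, H, W)
    # average each pixel's loss over its 4-neighborhood with a fixed kernel
    kernel = torch.tensor([[0., 1., 0.],
                           [1., 0., 1.],
                           [0., 1., 0.]], device=ce.device).view(1, 1, 3, 3) / 4
    neighbor_ce = F.conv2d(ce.unsqueeze(1), kernel, padding=1).squeeze(1)
    # a pixel is penalized for its own error and, softly, for its neighbors' errors
    return (ce + weight * neighbor_ce).mean()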

A. Arnab, O. Miksik and P. H. S. Torr, “On the Robustness of Semantic Segmentation Models to Adversarial Attacks,” in IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 42, no. 12, pp. 3040-3053, 1 Dec. 2020, doi: 10.1109/TPAMI.2019.2919707.

This paper connects adversarial examples with semantic segmentation. Everything I had read before was about image classification, and I didn't expect the two settings to differ this much: for example, the transferability of adversarial examples is very, very poor in segmentation.
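The notes don't reproduce the paper's experiments; for context, a transferability check usually looks something like the sketch below, where `model_a`, `model_b`, `x`, and `y` are hypothetical and the attack is plain FGSM rather than anything specific to the paper.

import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps):
    # standard FGSM: one signed-gradient step on the input (inputs assumed in [0, 1])
    x = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

# craft adversarial examples on model_a, then measure how much model_b suffers;
# for segmentation models, the per-pixel accuracy of model_b barely drops
x_adv = fgsm(model_a, x, y, eps=8 / 255)
acc_b = (model_b(x_adv).argmax(dim=1) == y).float().mean()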

Y. Xu, B. Du, F. Zhang, et al., "Hyperspectral Image Classification via a Random Patches Network," in ISPRS Journal of Photogrammetry and Remote Sensing, vol. 142, pp. 344-357, Aug. 2018.

This one just shows how green I was: I had never studied hyperspectral classification before, so I read a fairly classic algorithm that proposes the idea of random patches. It feels quite magical: some random patches of the original image are turned into convolution kernels and convolved over the image, followed by a few rounds of PCA and whitening. That is everything a single layer does; the network stacks several such layers, concatenates all the results, and the performance is surprisingly good. The mathematical principle is that some low-dimensional subspace must exist onto which all the points of the high-dimensional space can be projected (the random-projection intuition behind the Johnson–Lindenstrauss lemma).
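The paper's exact pipeline isn't reproduced in these notes, so the sketch below only follows the description above: random patches of the input act as convolution kernels, and the per-pixel responses are PCA-whitened. The patch count, patch size, component count, and the placement of the PCA step are all my assumptions, not checked against the paper.

import numpy as np
from sklearn.decomposition import PCA

def rpnet_layer(img, num_patches=20, patch_size=21, n_components=3, rng=None):
    # img: (H, W, C) float array -> (H, W, n_components) feature array
    rng = np.random.default_rng() if rng is None else rng
    H, W, C = img.shape
    k = patch_size // 2
    padded = np.pad(img, ((k, k), (k, k), (0, 0)), mode='reflect')
    # every patch_size x patch_size window, one per output pixel
    windows = np.lib.stride_tricks.sliding_window_view(
        padded, (patch_size, patch_size, C)).squeeze(axis=2)  # (H, W, ps, ps, C)
    # sample random patches from the image itself and use them as kernels
    ys = rng.integers(0, H, size=num_patches)
    xs = rng.integers(0, W, size=num_patches)
    kernels = np.stack([padded[y:y + patch_size, x:x + patch_size, :]
                        for y, x in zip(ys, xs)])             # (k, ps, ps, C)
    feats = np.einsum('hwpqc,kpqc->hwk', windows, kernels)    # (H, W, k)
    # PCA + whitening on the per-pixel responses, as the notes describe
    flat = PCA(n_components=n_components, whiten=True).fit_transform(
        feats.reshape(H * W, num_patches))
    return flat.reshape(H, W, n_components)

Deeper layers would take the previous layer's output as `img`, and every layer's features would be concatenated before the final classifier, per the description above.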

Use your GPU

import torch  # the snippet below assumes this import

model = model.cuda()      # an nn.Module is moved in place, but reassigning is idiomatic
loss_fn = loss_fn.cuda()  # move the loss module to the GPU as well
data = data.cuda()        # a Tensor is NOT moved in place: .cuda() returns a GPU copy,
                          # so reassign; this applies to both train data and test data
torch.save(model, 'somename.pkl')   # pickle the entire model object
model = torch.load('somename.pkl')  # load it back
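Pickling the whole model works, but it ties the saved file to the exact class definition. A device-agnostic variant that saves only the parameters, which is what the PyTorch documentation recommends, reusing the same (hypothetical) names:

import torch

# fall back to the CPU automatically when no GPU is available
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = model.to(device)
loss_fn = loss_fn.to(device)
data = data.to(device)  # again, tensors must be reassigned

# save only the parameters, then restore them into an existing model instance
torch.save(model.state_dict(), 'somename.pth')
model.load_state_dict(torch.load('somename.pth', map_location=device))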