Camouflage Generation
Camouflage generation research focuses on computationally creating realistic camouflage patterns, chiefly for two purposes: mounting adversarial attacks against object detection systems and augmenting datasets to improve camouflaged object detection. Current methods rely on neural rendering, often combined with diffusion models or generative adversarial networks, to produce visually convincing camouflage that adapts to diverse environments and viewpoints. The work matters for both computer vision and security: it strengthens camouflaged object detectors while exposing how vulnerable detection pipelines in autonomous systems and other vision-based technologies are to adversarial attacks.
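The adversarial side of this area typically optimizes a camouflage texture through a differentiable rendering step so that a frozen detector loses confidence in the covered object. The PyTorch sketch below illustrates that generic loop under simplifying assumptions: `TinyDetector`, `apply_texture`, and the scene/mask tensors are hypothetical stand-ins for a real detector, a neural or mesh renderer, and real imagery, and they are not taken from any of the papers listed below.

```python
import torch
import torch.nn as nn

class TinyDetector(nn.Module):
    """Toy frozen detector that outputs a single objectness score per image
    (a placeholder for a real object detection model)."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 1),
        )

    def forward(self, x):
        return torch.sigmoid(self.backbone(x))

def apply_texture(scene, texture, mask):
    """Differentiable 'rendering' surrogate: paste the texture into the masked
    object region (real pipelines use neural or mesh renderers instead)."""
    return scene * (1 - mask) + texture * mask

detector = TinyDetector().eval()
for p in detector.parameters():
    p.requires_grad_(False)                  # detector stays fixed; only the texture is attacked

scene = torch.rand(1, 3, 64, 64)             # background image (placeholder data)
mask = torch.zeros(1, 1, 64, 64)
mask[..., 16:48, 16:48] = 1.0                # region covered by the camouflage
texture = torch.rand(1, 3, 64, 64, requires_grad=True)  # camouflage texture being optimized

opt = torch.optim.Adam([texture], lr=0.05)
for step in range(200):
    rendered = apply_texture(scene, texture.clamp(0, 1), mask)
    score = detector(rendered)               # detector confidence on the camouflaged scene
    loss = score.mean()                      # adversarial objective: suppress detection confidence
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In practice the rendering step is where methods differ most: it may be a learned transformation network or a full neural renderer so that the optimized texture survives changes in viewpoint, lighting, and surface geometry.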
Papers
Location-Free Camouflage Generation Network
Yangyang Li, Wei Zhai, Yang Cao, Zheng-Jun Zha
DTA: Physical Camouflage Attacks using Differentiable Transformation Network
Naufal Suryanto, Yongsu Kim, Hyoeun Kang, Harashta Tatimma Larasati, Youngyeo Yun, Thi-Thu-Huong Le, Hunmin Yang, Se-Yoon Oh, Howon Kim