
Investigation of AI-Tampered Image Detection Based on Generative Adversarial Networks

Authors: Yi-Chung Cheng, Hui-Chi Chuang, Yun-Chen Cheng, Chih-Chuan Chen

Abstract: As online activity increases, the value of personal information rises, making information security crucial to maintaining privacy, social trust, and stability. This study develops a generative adversarial model to simulate malicious attacks: a generator produces realistic adversarial samples, and a discriminator distinguishes between real and generated data. Through adversarial training, the generator iteratively improves, creating samples that superficially resemble real data but contain subtle perturbations. These adversarial samples are then classified by a target model; misclassification indicates a successful attack and exposes a security vulnerability. A convolutional neural network (CNN) serves as the target model; it is trained and tested on images and evaluated for classification accuracy. The Generative Adversarial Network (GAN) comprises a generator and a discriminator: the generator applies convolution, deconvolution, and ResNet blocks to enhance feature learning and generate adversarial perturbations, while the discriminator employs multiple convolutional layers to differentiate between real and adversarial samples. During training, adversarial samples are generated with controlled perturbation magnitudes, and the discriminator updates its weights to improve detection. The model incorporates three loss functions (C&W, cross-entropy, and MSE), with experimental adjustments to optimize performance. Results show that the C&W loss function produces the most effective adversarial samples, yielding the highest attack success rates.
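The abstract describes the attack pipeline only at a high level, so the sketch below gives one plausible reading of it in PyTorch: a generator built from convolution, a ResNet block, and deconvolution that emits a bounded perturbation; a convolutional discriminator; a C&W-style margin loss against the target CNN; and the cross-entropy and MSE terms mentioned above. All layer widths, the perturbation bound `eps`, the margin `kappa`, and the equal weighting of the loss terms are illustrative assumptions, not the paper's reported configuration.

```python
# Illustrative sketch only; layer sizes, eps, kappa, and loss weights are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResBlock(nn.Module):
    """Residual block used inside the generator to strengthen feature learning."""
    def __init__(self, ch):
        super().__init__()
        self.conv1 = nn.Conv2d(ch, ch, 3, padding=1)
        self.conv2 = nn.Conv2d(ch, ch, 3, padding=1)

    def forward(self, x):
        return x + self.conv2(F.relu(self.conv1(x)))

class Generator(nn.Module):
    """Convolution -> ResNet block -> deconvolution; adds a bounded perturbation to the input."""
    def __init__(self, eps=0.03):
        super().__init__()
        self.eps = eps                                                # perturbation budget (assumed)
        self.down = nn.Conv2d(3, 32, 4, stride=2, padding=1)         # convolution
        self.res = ResBlock(32)                                       # ResNet block
        self.up = nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1)  # deconvolution

    def forward(self, x):
        delta = torch.tanh(self.up(self.res(F.relu(self.down(x)))))  # perturbation in [-1, 1]
        return torch.clamp(x + self.eps * delta, 0.0, 1.0)           # controlled, subtle change

class Discriminator(nn.Module):
    """Stack of convolutional layers scoring real vs. adversarial images."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 1))

    def forward(self, x):
        return self.net(x)

def cw_loss(logits, labels, kappa=0.0):
    """C&W-style margin loss: drive the true-class logit below the best competing logit."""
    true_logit = logits.gather(1, labels.unsqueeze(1)).squeeze(1)
    mask = F.one_hot(labels, logits.size(1)).bool()
    other_logit = logits.masked_fill(mask, float("-inf")).max(dim=1).values
    return torch.clamp(true_logit - other_logit + kappa, min=0.0).mean()

def generator_step(G, D, target_cnn, x, y, opt_G):
    """One generator update combining the three loss terms named in the abstract."""
    adv = G(x)
    d_out = D(adv)
    loss_gan = F.binary_cross_entropy_with_logits(d_out, torch.ones_like(d_out))  # cross-entropy term
    loss_atk = cw_loss(target_cnn(adv), y)   # C&W term: misclassification by the target CNN
    loss_sim = F.mse_loss(adv, x)            # MSE term: keep perturbations visually subtle
    loss = loss_atk + loss_gan + loss_sim    # equal weighting is an assumption
    opt_G.zero_grad()
    loss.backward()
    opt_G.step()
    return adv.detach()
```

In a full training loop the discriminator would be updated in a complementary step, with real images labelled 1 and generated samples labelled 0, as in a standard GAN.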

Keywords: Adversarial samples, convolutional neural network, generative adversarial model, information security.

Conference Name: International Conference on Business and Artificial Intelligence Technologies (ICBAIT-25)

Conference Place: Tokyo, Japan

Conference Date: 9 July 2025
