Ekko-zn/AIGCDetectBenchmark

A Comprehensive Benchmark for AI-generated Image Detection

News

❗ [2024-03-08] Our paper "Rich and Poor Texture Contrast: A Simple yet Effective Approach for AI-generated Image Detection" has been renamed to

- "PatchCraft: Exploring Texture Patch for Efficient AI-generated Image Detection"

📝 [2024-03-08] We have added images generated by SDXL-base-1.0 to our test set and evaluated all detection methods on the SDXL set. For more information, please turn to Evaluation and Doc

📝 [2024-03-08] Updated the collection of AIGC-Detection papers published from 2020 onwards. Awesome-AIGCDetection

📝 [2024-02-19] Added AIGC-Detection dataset papers. Awesome-AIGCDetection

Collected Methods

| Method | Paper | Test code | Train code |
| --- | --- | --- | --- |
| CNNSpot | CNN-generated images are surprisingly easy to spot...for now | | |
| FreDect | Leveraging Frequency Analysis for Deep Fake Image Recognition | | |
| Fusing | Fusing global and local features for generalized AI-synthesized image detection | | |
| Gram-Net | Global Texture Enhancement for Fake Face Detection In the Wild | | |
| LGrad | Learning on Gradients: Generalized Artifacts Representation for GAN-Generated Images Detection | | |
| LNP | Detecting Generated Images by Real Images | | |
| DIRE | DIRE for Diffusion-Generated Image Detection | | |
| UnivFD | Towards Universal Fake Image Detectors that Generalize Across Generative Models | | |
| PatchCraft | PatchCraft: Exploring Texture Patch for Efficient AI-generated Image Detection | ⚙️ | ⚙️ |

Setup

Refer to CNNSpot and DIRE

Training

For LGrad, LNP, and DIRE, we recommend using the scripts gen_imggrad.py, test_sidd_rgb_test.py, and compute_dire.py in the preprocessing folder to generate processed images first. Then use the processed images to train a ResNet-50 classifier (as in CNNSpot).
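As background on what compute_dire.py produces: DIRE (DIffusion Reconstruction Error) represents each image by the per-pixel absolute difference between the image and its reconstruction from a pre-trained diffusion model. A minimal pure-Python sketch of that residual, with the diffusion reconstruction stubbed out as a given input (the real pipeline obtains it from a pre-trained model such as ./weights/lsun_bedroom.pt):

```python
def dire_residual(image, reconstruction):
    """DIRE-style residual: per-pixel absolute difference between an
    image and its diffusion-model reconstruction, both given here as
    nested lists of pixel intensities in [0, 255]."""
    return [
        [abs(p - q) for p, q in zip(row_img, row_rec)]
        for row_img, row_rec in zip(image, reconstruction)
    ]

# Toy 2x2 "image" and a hypothetical reconstruction; in the real
# pipeline the reconstruction comes from the diffusion model.
img = [[10, 200], [0, 255]]
rec = [[12, 190], [5, 250]]
print(dire_residual(img, rec))  # [[2, 10], [5, 5]]
```

The residual maps are then saved as images and fed to the ResNet-50 classifier exactly like ordinary training images.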

1. CNNSpot, FreDect, Fusing, Gram-Net

   ```
   python train.py --name test --dataroot [your data path] --detect_method [CNNSpot, FreDect, Fusing, Gram] --blur_prob 0.1 --blur_sig 0.0,3.0 --jpg_prob 0.1 --jpg_method cv2,pil --jpg_qual 30,100
   ```

2. LGrad

   ```
   sh preprocessing/LGrad/transform_img2grad.sh  # change file paths
   python train.py --name test --dataroot [your data path] --detect_method CNNSpot --blur_prob 0 --jpg_prob 0
   ```

3. LNP

   ```
   python preprocessing/LNP/test_sidd_rgb_test.py --input_dir [your data path] --result_dir [your data path]  # change file paths
   python train.py --name test --dataroot [your data path] --detect_method CNNSpot --blur_prob 0 --jpg_prob 0
   ```

4. DIRE

   ```
   sh preprocessing/DIRE/compute_dire.sh  # change file paths
   python train.py --name test --dataroot [your data path] --detect_method CNNSpot --blur_prob 0 --jpg_prob 0
   ```

5. UnivFD

   ```
   python train.py --name test --dataroot [your data path] --detect_method UnivFD --fix_backbone --blur_prob 0.1 --blur_sig 0.0,3.0 --jpg_prob 0.1 --jpg_method cv2,pil --jpg_qual 30,100
   ```
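The --blur_prob / --jpg_prob flags above enable CNNSpot-style training augmentation: with the given probability an image is Gaussian-blurred with a sigma drawn from the --blur_sig range, or JPEG-compressed with a backend from --jpg_method and a quality from --jpg_qual. A minimal sketch of just the parameter sampling (the uniform distributions are an assumption, and the actual blur/compression calls are stubbed out):

```python
import random

def sample_augmentation(blur_prob=0.1, blur_sig=(0.0, 3.0),
                        jpg_prob=0.1, jpg_methods=("cv2", "pil"),
                        jpg_qual=(30, 100), rng=random):
    """Decide which CNNSpot-style augmentations to apply to one image,
    mirroring the CLI flags --blur_prob/--blur_sig/--jpg_prob/
    --jpg_method/--jpg_qual. Returns a dict describing the sampled ops."""
    aug = {}
    if rng.random() < blur_prob:
        # blur sigma drawn uniformly from the --blur_sig range
        aug["blur_sigma"] = rng.uniform(*blur_sig)
    if rng.random() < jpg_prob:
        # compression backend and quality drawn from the given choices/range
        aug["jpg_method"] = rng.choice(jpg_methods)
        aug["jpg_quality"] = rng.randint(*jpg_qual)
    return aug

random.seed(0)
decisions = [sample_augmentation() for _ in range(1000)]
print(sum("blur_sigma" in d for d in decisions))  # roughly 100 of 1000 images
```

Setting --blur_prob 0 --jpg_prob 0, as in the LGrad/LNP/DIRE commands, disables both augmentations so the classifier trains on the preprocessed images unchanged.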
    

Test on datasets

```
usage: eval_all.py [-h] [--rz_interp RZ_INTERP] [--blur_sig BLUR_SIG] [--jpg_method JPG_METHOD] [--jpg_qual JPG_QUAL] [--batch_size BATCH_SIZE] [--loadSize LOADSIZE] [--CropSize CROPSIZE] [--no_crop]
                   [--no_resize] [--no_flip] [--model_path MODEL_PATH] [--detect_method DETECT_METHOD] [--noise_type NOISE_TYPE] [--LNP_modelpath LNP_MODELPATH] [--DIRE_modelpath DIRE_MODELPATH]
                   [--LGrad_modelpath LGRAD_MODELPATH]

options:
  -h, --help            show this help message and exit
  --rz_interp RZ_INTERP
  --blur_sig BLUR_SIG
  --jpg_method JPG_METHOD
  --jpg_qual JPG_QUAL
  --batch_size BATCH_SIZE
                        input batch size (default: 64)
  --loadSize LOADSIZE   scale images to this size (default: 256)
  --CropSize CROPSIZE   crop images to this size (default: 224)
  --no_crop             if specified, do not crop the images for data augmentation (default: False)
  --no_resize           if specified, do not resize the images for data augmentation (default: False)
  --no_flip             if specified, do not flip the images for data augmentation (default: False)
  --model_path MODEL_PATH
                        the path of the detection model (default: ./weights/CNNSpot.pth)
  --detect_method DETECT_METHOD
                        choose the detection method (default: CNNSpot)
  --noise_type NOISE_TYPE
                        such as jpg, blur, and resize (default: None)
  --LNP_modelpath LNP_MODELPATH
                        the path of the LNP pre-trained model (default: ./weights/sidd_rgb.pth)
  --DIRE_modelpath DIRE_MODELPATH
                        the path of the DIRE pre-trained model (default: ./weights/lsun_bedroom.pt)
  --LGrad_modelpath LGRAD_MODELPATH
                        the path of the LGrad pre-trained model (default: ./weights/karras2019stylegan-bedrooms-256x256_discriminator.pth)
```

❗ You should set your dataroot and dataset name in eval_config.py
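eval_config.py is not reproduced here; purely as a hedged illustration, the dataroot and dataset-name settings it expects might look like the following (the variable names and layout are assumptions, not the repository's actual file):

```python
# Hypothetical sketch of the settings eval_config.py expects;
# the real variable names in the repository may differ.
dataroot = "/data/AIGCDetect/test"           # root folder of the downloaded test set
datasets = ["progan", "stylegan", "sdxl"]    # subfolder / dataset names to evaluate
```

Check the actual eval_config.py in the repository for the authoritative names and edit it before running eval_all.py.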

All pre-trained detection models and the necessary pre-processing models are available in ./weights.

For example, to evaluate the performance of CNNSpot under blurring:

```
python eval_all.py --model_path ./weights/CNNSpot.pth --detect_method CNNSpot --noise_type blur --blur_sig 1.0 --no_resize --no_crop --batch_size 1
```

Dataset

Training Set

We adopt the training set from CNNSpot; you can download it from this link

Test Set and Checkpoints

The whole test set and the checkpoints used in our experiments can be downloaded from BaiduNetdisk or Google Drive.

Acknowledgments

Our code is developed based on CNNDetection, FreDect, Fusing, Gram-Net, LGrad, LNP, DIRE, and UnivFD. Thanks for sharing your code and models. :heart:

Citation

If you find this repository useful for your research, please consider citing:

@article{rptc,
  title={Rich and Poor Texture Contrast: A Simple yet Effective Approach for AI-generated Image Detection},
  author={Zhong, Nan and Xu, Yiran and Qian, Zhenxing and Zhang, Xinpeng},
  journal={arXiv preprint arXiv:2311.12397},
  year={2023}
}