Beyond Binary Classification: A Semi-supervised Approach to Generalized AI-generated Image Detection

AAAI 2026
University College Dublin  ·  Trinity College Dublin  ·  University of Science, HCMC, Vietnam
Figure 1. Overview of the TriDetect framework. TriDetect enhances binary real/fake classification by discovering latent architectural patterns within the "fake" class using balanced cluster assignment via the Sinkhorn-Knopp algorithm and cross-view consistency, encouraging the model to learn fundamental architectural distinctions.

Abstract

The rapid advancement of generators (e.g., StyleGAN, Midjourney, DALL-E) has produced highly realistic synthetic images, posing significant challenges to digital media authenticity. These generators are typically based on a few core architectural families, primarily Generative Adversarial Networks (GANs) and Diffusion Models (DMs). A critical vulnerability in current forensics is the failure of detectors to achieve cross-generator generalization, especially when crossing architectural boundaries (e.g., from GANs to DMs).

We hypothesize that this gap stems from fundamental differences in the artifacts produced by these distinct architectures. In this work, we provide a theoretical analysis explaining how the distinct optimization objectives of GAN and DM architectures lead to different manifold coverage behaviors. We demonstrate that GANs permit partial coverage, often leading to boundary artifacts, while DMs enforce complete coverage, resulting in over-smoothing patterns. Motivated by this analysis, we propose TriDetect (Triarchy Detect), a semi-supervised approach that enhances binary classification by discovering latent architectural patterns within the "fake" class.

Motivation

Current deepfake detectors treat all AI-generated images as a single "fake" class. But generators based on different architectures produce fundamentally different artifacts:

GAN Artifacts

Boundary Artifacts from Partial Coverage

GANs minimize the Jensen-Shannon divergence $D_{JS}(p_{data} \| p_{GAN})$, which remains finite even when $S_{GAN} \subset S_{data}$. This permits partial manifold coverage, leading to characteristic boundary artifacts at the edges of the generated distribution.

DM Artifacts

Over-smoothing from Complete Coverage

DMs minimize the KL divergence $D_{KL}(p_{data} \| p_{DM})$, which diverges to infinity if $p_{DM}(x) = 0$ anywhere that $p_{data}(x) > 0$. This forces DMs to cover the data manifold completely, producing over-smoothing patterns as probability mass is spread across the entire data support.
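The asymmetry between the two objectives can be checked numerically on toy discrete distributions (a minimal sketch; the four-mode distributions below are invented for illustration and are not from the paper):

```python
import numpy as np

def kl(p, q):
    """D_KL(p || q) for discrete distributions; +inf if q misses the support of p."""
    mask = p > 0
    if np.any(q[mask] == 0):
        return float('inf')
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def js(p, q):
    """D_JS(p || q) = 0.5*KL(p||m) + 0.5*KL(q||m), with mixture m = (p+q)/2."""
    m = 0.5 * (p + q)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Toy "data" distribution over four modes.
p_data = np.array([0.25, 0.25, 0.25, 0.25])
p_gan  = np.array([0.50, 0.50, 0.00, 0.00])  # partial coverage: two modes dropped
p_dm   = np.array([0.30, 0.30, 0.20, 0.20])  # complete but over-smoothed coverage

print(js(p_data, p_gan))  # finite, bounded by log 2
print(kl(p_data, p_gan))  # inf: KL cannot tolerate partial coverage
print(kl(p_data, p_dm))   # small and finite: full support
```

Dropping modes is cheap under JS but catastrophic under KL, which is exactly the coverage dichotomy the theorems formalize.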

Key Insight: The cross-generator generalization gap stems from fundamentally different optimization objectives. GANs and DMs produce structurally different artifacts. By discovering these latent architectural patterns within the "fake" class, detectors can learn to generalize across unseen generators from the same architectural family.

Theoretical Foundation

Different Optimization, Different Artifacts

We prove two key theorems establishing the theoretical basis for why GANs and DMs produce different artifacts:

Theorem 1

Distinct Optimization Objectives

GANs minimize $D_{JS}(p_{data} \| p_{GAN})$ while DMs minimize $D_{KL}(p_{data} \| p_{DM})$. These fundamentally different divergence measures lead to different convergence behaviors and artifact patterns.

Theorem 2

Different Manifold Coverage

JS divergence remains finite for partial coverage ($S_{GAN} \subset S_{data}$), while KL divergence diverges to infinity if DM support is incomplete. Consequently, GANs can achieve optimal solutions with partial coverage, while DMs must cover the entire data manifold.
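The dichotomy follows from standard divergence identities; the condensed sketch below restates the argument (a paraphrase, not the paper's full proof):

```latex
% KL blows up under partial coverage:
D_{KL}(p_{data}\,\|\,p_{DM})
  = \int p_{data}(x)\,\log\frac{p_{data}(x)}{p_{DM}(x)}\,dx
  = \infty
  \quad \text{if } p_{DM}(x)=0 \text{ on a set where } p_{data}(x)>0.

% JS stays bounded: with the mixture m = \tfrac{1}{2}(p_{data}+p_{GAN}),
D_{JS}(p_{data}\,\|\,p_{GAN})
  = \tfrac{1}{2} D_{KL}(p_{data}\,\|\,m)
  + \tfrac{1}{2} D_{KL}(p_{GAN}\,\|\,m)
  \le \log 2,
% since m(x) \ge \tfrac{1}{2} p_{data}(x) wherever p_{data}(x)>0,
% each log-ratio is at most \log 2.
```

So a GAN can drop entire modes of $p_{data}$ at bounded cost, while a DM cannot.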

Figure 2. Visualization of learned representations demonstrating successful discovery of fake sub-types. The three t-SNE projections show feature embeddings colored by (left) the model's unsupervised cluster assignments, (middle) the model's binary real/fake predictions, and (right) the ground-truth generation methods. Results are obtained on AIGCDetectBenchmark.

TriDetect Framework

TriDetect employs a semi-supervised approach that goes beyond binary real/fake classification by discovering latent architectural subcategories within the fake class:

Component 1

Balanced Cluster Assignment via Sinkhorn-Knopp

Instead of treating all fake images uniformly, TriDetect discovers latent clusters within the fake class that correspond to different generator architectures. The Sinkhorn-Knopp algorithm ensures balanced cluster assignments, preventing degenerate solutions where all samples collapse into a single cluster.
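The balancing step can be sketched with the standard Sinkhorn-Knopp normalization used in SwAV-style clustering (a minimal NumPy sketch; `epsilon`, the iteration count, and the toy scores are illustrative assumptions, not the paper's settings):

```python
import numpy as np

def sinkhorn_knopp(scores, epsilon=0.05, n_iters=3):
    """Balanced soft cluster assignment via Sinkhorn-Knopp iterations.
    Alternately normalizes columns (clusters) and rows (samples) so every
    cluster receives roughly B/K samples, preventing the degenerate solution
    where all samples collapse into one cluster."""
    Q = np.exp((scores - scores.max()) / epsilon)  # (B, K), numerically safe
    Q /= Q.sum()                                   # total mass = 1
    B, K = Q.shape
    for _ in range(n_iters):
        Q /= Q.sum(axis=0, keepdims=True)          # each column sums to 1/K
        Q /= K
        Q /= Q.sum(axis=1, keepdims=True)          # each row sums to 1/B
        Q /= B
    return Q * B                                   # each row: distribution over K clusters

# Degenerate input: every sample strongly prefers cluster 0.
rng = np.random.default_rng(0)
scores = rng.normal(size=(8, 2))
scores[:, 0] += 5.0

Q = sinkhorn_knopp(scores)
print(Q.sum(axis=0))  # column sums pulled toward B/K = 4 each, not [8, 0]
```

Without the balancing iterations, the softmax of these biased scores would put nearly all mass in cluster 0; the column normalizations force the assignment toward the uniform marginal.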

Component 2

Cross-view Consistency

TriDetect enforces consistency between different augmented views of the same image, encouraging the model to learn robust architectural fingerprints rather than superficial image-level features. This cross-view mechanism ensures that the discovered clusters capture fundamental generation-process distinctions.
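One common way to implement such a consistency objective is the swapped-prediction loss from SwAV-style training, where each view's (balanced) assignments supervise the other view's predictions. The sketch below assumes that mechanism, which may differ in detail from TriDetect's exact loss:

```python
import numpy as np

def cross_entropy(targets, logits):
    """Mean cross-entropy between soft targets and softmax(logits)."""
    z = logits - logits.max(axis=1, keepdims=True)           # numerical stability
    log_p = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return float(-(targets * log_p).sum(axis=1).mean())

def swapped_prediction_loss(logits_a, logits_b, targets_a, targets_b):
    """Cross-view consistency: the assignments computed from one augmented
    view serve as targets for the predictions on the other view."""
    return 0.5 * (cross_entropy(targets_b, logits_a)
                  + cross_entropy(targets_a, logits_b))

# Two views that agree on cluster membership incur a small loss...
logits = np.array([[5.0, 0.0], [0.0, 5.0]])
targets = np.array([[1.0, 0.0], [0.0, 1.0]])
low = swapped_prediction_loss(logits, logits, targets, targets)
# ...while inconsistent (swapped) targets incur a large one.
high = swapped_prediction_loss(logits, logits, targets[::-1], targets[::-1])
print(low, high)
```

Minimizing this loss rewards features that assign both augmented views of an image to the same cluster, i.e. features tied to the generation process rather than to augmentation-sensitive image details.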

Component 3

Triarchy Classification

The framework establishes a three-way classification: Real, GAN-generated, and DM-generated. By learning to distinguish these architectural families during training, the model generalizes to unseen generators from the same family at test time.
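At test time a three-way head still has to answer the binary real/fake question. One natural collapse rule, sketched below, is to sum the softmax mass of the two fake classes (this rule is an illustrative assumption, not necessarily the paper's inference procedure):

```python
import numpy as np

# Class indices: 0 = Real, 1 = GAN-generated, 2 = DM-generated.
def fake_probability(logits):
    """Collapse three-way logits into a binary real/fake score by summing
    the softmax mass of the two fake classes. Illustrative assumption only."""
    z = logits - logits.max(axis=-1, keepdims=True)          # numerical stability
    p = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)
    return p[..., 1] + p[..., 2]

logits = np.array([[4.0, 0.5, 0.1],   # most mass on "Real"
                   [0.2, 3.0, 1.0]])  # most mass on "GAN-generated"
print(fake_probability(logits))       # low for row 0, high for row 1
```

This is what lets the finer-grained training signal coexist with the original binary task: the two fake logits specialize by architecture, while their sum remains an ordinary fake score.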

Main Results

TriDetect is evaluated on two standard benchmarks and three in-the-wild datasets against 13 baselines.

Comparison on AIGCDetectBenchmark (ACC)

Accuracy across 16 generators spanning both GANs and diffusion models:

Table 1 · ACC on AIGCDetectBenchmark (selected generators + average)
Method     CycleGAN  ProGAN  BigGAN  ADM     Wukong  Glide   MidJourney  DALLE2  Avg
CNNSpot    0.4974    0.4975  0.4858  0.5170  0.9658  0.5882  0.5344      0.5810  0.6059
FreDect    0.5049    0.5405  0.7083  0.5368  0.9443  0.5533  0.5473      0.5530  0.6434
CORE       0.5061    0.5083  0.5063  0.5684  0.9643  0.9498  0.5298      0.5920  0.6441
UnivFD     0.8812    0.7349  0.8273  0.7212  0.8469  0.9482  0.7583      0.8715  0.8299
NPR        0.7354    0.8986  0.6993  0.7745  0.9177  0.9751  0.7748      0.9635  0.8472
Effort     0.9387    0.9020  0.9863  0.5872  0.9878  0.7942  0.7411      0.7525  0.8804
TriDetect  0.9974    0.9909  0.9760  0.7482  0.9963  0.9488  0.7480      0.9405  0.9152
TriDetect achieves the best average ACC (0.9152) across all 16 generators, outperforming the second-best method, Effort (0.8804), by 3.5 percentage points.

Comparison on WildFake (AUC)

AUC performance on the in-the-wild WildFake dataset across 10 generators:

Table 2 · AUC on WildFake dataset
Method     DALL-E  DDIM    DDPM    VQDM    BigGAN  StarGAN  StyleGAN  DF-GAN  GALIP   GigaGAN  Avg
CNNSpot    0.8220  0.5943  0.3375  0.3706  0.9513  0.5003   0.4613    0.5304  0.5143  0.4285   0.5511
CORE       0.9213  0.7088  0.5795  0.8723  0.9224  0.7298   0.5879    0.9104  0.7876  0.7192   0.7739
UnivFD     0.5857  0.8008  0.7873  0.7802  0.8711  0.8779   0.6156    0.9597  0.9257  0.8417   0.8046
NPR        0.8056  0.9063  0.7906  0.9339  0.9128  0.8022   0.5161    0.9233  0.7018  0.8276   0.8120
Effort     0.8537  0.8486  0.7197  0.9372  0.9599  0.9023   0.7605    0.9994  0.9343  0.9013   0.8817
TriDetect  0.9189  0.8823  0.7106  0.9787  0.9802  1.0000   0.7245    1.0000  0.9774  0.9981   0.9171
TriDetect achieves the best average AUC (0.9171) on in-the-wild data, with perfect scores (1.0000) on StarGAN and DF-GAN, outperforming the second-best method, Effort (0.8817), by 3.5 percentage points.

Key Findings

Finding 1

Superior Cross-generator Generalization

TriDetect significantly outperforms all 13 baselines on average across both benchmarks, particularly when crossing architectural boundaries (e.g., trained on GANs, tested on DMs like ADM, Wukong, VQDM).

Finding 2

Effective on In-the-wild Data

On the WildFake dataset, TriDetect achieves perfect or near-perfect detection on several generators (StarGAN: 1.0, DF-GAN: 1.0, GigaGAN: 0.998) where many baselines fail.

Finding 3

Theoretically Grounded

The empirical results validate the theoretical analysis: detectors that learn to distinguish GAN vs DM artifacts achieve better generalization than those treating all fakes uniformly.

Citation

If you find this work useful in your research, please consider citing:

@inproceedings{nguyenle2026tridetect,
  title     = {Beyond Binary Classification: A Semi-supervised
               Approach to Generalized AI-generated Image
               Detection},
  author    = {Nguyen-Le, Hong-Hanh and Tran, Van-Tuan
               and Nguyen, Dinh-Thuc and Le-Khac, Nhien-An},
  booktitle = {Proceedings of the AAAI Conference on
               Artificial Intelligence (AAAI-26)},
  year      = {2026}
}

Acknowledgments

This publication has emanated from research conducted with the financial support of Science Foundation Ireland under Grant number 18/CRT/6183.