
FakeParts: New Benchmark Reveals Partial Deep Fake Threats And Detection Limits

3 main points
✔️ FakeParts defines a new class of deepfakes that modify only parts of a video
✔️ FakePartsBench is a dedicated benchmark for detecting partial manipulations, comprising over 25,000 videos
✔️ Experiments show that both humans and state-of-the-art models largely fail to detect partial modifications

FakeParts: a New Family of AI-Generated DeepFakes
written by Gaetan Brison, Soobash Daiboo, Samy Aimeur, Awais Hussain Sani, Xi Wang, Gianni Franchi, Vicky Kalogeiton
(Submitted on 28 Aug 2025)
Comments: Published on arXiv.
Subjects: Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI); Multimedia (cs.MM)

The images used in this article are from the paper, the introductory slides, or were created based on them.

Summary

This paper proposes "FakeParts", a new category of deepfake techniques that has been gaining ground in recent years and poses a serious threat.
FakeParts refers to partial deepfakes, in which the manipulation is confined to a few spatial regions or temporal segments rather than generating an entire video from scratch.

Examples include subtle changes in facial expression, object substitution, background modification, or interpolation of specific frames.
These partial manipulations retain most of the original video while decisively altering its meaning, which gives viewers a strong impression of authenticity.
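
To make the idea concrete, below is a minimal sketch of how such a partial fake can be composited: generated content is blended into an original frame through a mask, so most pixels keep their real values. This is an illustrative Python/NumPy example, not the authors' actual pipeline.

import numpy as np

def composite_fakepart(original_frame, generated_patch, mask):
    """Blend generated content into an original frame.

    original_frame:  (H, W, 3) uint8 real frame
    generated_patch: (H, W, 3) uint8 synthetic content
    mask:            (H, W) float in [0, 1], 1 where the fake content goes
    """
    mask3 = mask[..., None]  # broadcast the mask over the RGB channels
    # Most pixels stay identical to the source, so global statistics
    # (sensor noise, compression artifacts) remain those of a real video,
    # which is exactly what makes FakeParts hard to detect.
    blended = (1.0 - mask3) * original_frame + mask3 * generated_patch
    return blended.astype(np.uint8)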

The authors demonstrate that existing detection methods can handle fully synthetic videos but struggle badly with such partial manipulations.
They also constructed FakePartsBench, a new large-scale benchmark dataset dedicated to FakeParts, which enables quantitative evaluation of detection accuracy.

This research reveals a critical blind spot in deep-fake detection and provides a foundation for future development of defense techniques.

Proposed Methodology

The authors designed a new benchmark, "FakePartsBench", to define FakeParts and enable systematic evaluation of their detection.
FakePartsBench consists of over 25,000 videos and covers three types of partial manipulation: spatial, temporal, and stylistic.

Spatial manipulations include face replacement and object deletion/completion (inpainting and outpainting); temporal manipulations cover frame interpolation; and stylistic manipulations cover color and texture modification.
Each video was precisely annotated at the pixel and frame level, allowing for fine-grained evaluation of detection methods.
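
As a rough illustration of what this annotation implies, a benchmark sample could be represented as below. The schema and field names are hypothetical, chosen for clarity; they are not the dataset's actual format.

from dataclasses import dataclass
from typing import List, Optional
import numpy as np

@dataclass
class FakePartsSample:
    video_path: str                    # path to the video clip
    category: str                      # "spatial", "temporal", or "stylistic"
    manipulation: str                  # e.g. "inpainting", "interpolation"
    label: int                         # 1 = contains a FakePart, 0 = fully real
    pixel_masks: Optional[np.ndarray]  # (T, H, W) masks of manipulated pixels
    fake_frames: List[int]             # indices of the manipulated frames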

For generation, the authors reproduced a variety of realistic manipulation scenarios using the latest commercial and academic generative models, such as Sora and Veo2.
Through this benchmark, they exhaustively compared the performance of existing detection models and highlighted their vulnerability to partial manipulation.

Experiments

The experiments used state-of-the-art image and video deep-fake detection models and evaluated their performance on FakePartsBench.
The models covered included image-based detectors such as CNNDetection, UnivFD, FatFormer, and C2P-CLIP, as well as video-based detectors such as DeMamba and AIGVDet.
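
The evaluation protocol itself is straightforward to sketch. Assuming each detector can be wrapped as a function that maps a video to a fake-probability score (the real APIs differ from model to model), per-manipulation detection accuracy can be computed as follows; this is an illustrative sketch, not the paper's exact code.

from collections import defaultdict

def evaluate_detector(detector, samples, threshold=0.5):
    """Detection accuracy per manipulation type.

    detector: callable mapping a video path to a fake score in [0, 1]
              (an assumed interface; each model's real API differs).
    samples:  iterable of FakePartsSample-like objects.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for s in samples:
        pred = int(detector(s.video_path) >= threshold)
        total[s.manipulation] += 1
        correct[s.manipulation] += int(pred == s.label)
    return {m: correct[m] / total[m] for m in total}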

The results showed that while all models maintained a certain level of detection accuracy on fully synthetic videos, their accuracy degraded significantly on FakeParts.
For partial manipulations, particularly inpainting and outpainting, detection rates sometimes dropped to 6-7%.

On the other hand, CLIP-based detectors proved relatively robust to partial manipulations, though conversely they tend to be weaker on highly realistic fully synthetic frames.
Furthermore, in a user study with 80 participants, humans also frequently failed to identify partial manipulations, achieving an overall accuracy of only 75.3%.

These results demonstrate that partial deep fakes are a serious threat to existing detection technologies and human perception.

