Breaking the Mirage: Deepfake Exposure through Multichannel Image Preprocessing
Naufil Shaikh
Abstract
Deepfakes are becoming harder to spot as the technology behind them improves, and that is a real problem. Whether they are used to spread fake news or impersonate people, AI-generated faces pose a growing threat to trust in digital content. In response, we created Mirage: Shattered Realities, a system designed to detect these fake faces with higher accuracy. Our method begins with image preprocessing. We use CLAHE (Contrast Limited Adaptive Histogram Equalization) [6] to improve contrast in facial regions, especially under difficult lighting, and Canny edge detection [7] to sharpen the outlines of features such as the face's edges and contours. These steps highlight differences that may seem subtle at first but are often key to distinguishing a real face from a fake one. Once an image is preprocessed, we use the Xception [5] model for classification. This deep learning model excels at finding patterns in images, and we trained it on a dataset of over 100,000 real and fake faces. The model learns to spot small inconsistencies, such as odd textures, unusual lighting, or unnatural edges, that can give away a deepfake. A key strength of our approach is that it generalizes well across a variety of deepfakes. By combining classic image processing with modern machine learning, we have built a system that not only performs well but also produces interpretable results: we can see exactly why an image was flagged as real or fake, which makes the tool more trustworthy.
Copyright
Copyright © 2025 Naufil Shaikh. This is an open access article distributed under the Creative Commons Attribution License.