Although document forgery is nothing new, technological advances have made sophisticated forgeries easier to produce, driving the development of equally advanced detection techniques. CTP-Net emerged in response to these challenges as a new tool for locating forgeries in document images.
CTP-Net is a deep learning model designed to detect and localize forged regions in document images. It uses optical character recognition (OCR) to locate characters, captures the texture of those character regions, and combines these features with texture features from the entire image, achieving accurate detection and localization of forged areas.
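The general idea of fusing character-region texture with whole-image texture can be illustrated with a minimal sketch. This is not CTP-Net's actual architecture: the descriptor (`texture_stats`), the fusion by concatenation, and the box format are all simplifying assumptions for illustration.

```python
import numpy as np

def texture_stats(patch):
    """Toy texture descriptor (assumption): mean, std, mean gradient magnitude."""
    gy, gx = np.gradient(patch.astype(float))
    return np.array([patch.mean(), patch.std(), np.hypot(gx, gy).mean()])

def fuse_features(image, char_boxes):
    """Concatenate whole-image texture stats with pooled character-region stats."""
    global_feat = texture_stats(image)
    # Pool per-character stats; boxes are hypothetical OCR output (x0, y0, x1, y1).
    char_feats = [texture_stats(image[y0:y1, x0:x1]) for x0, y0, x1, y1 in char_boxes]
    char_feat = np.mean(char_feats, axis=0) if char_feats else np.zeros(3)
    return np.concatenate([global_feat, char_feat])  # combined feature vector

img = np.random.default_rng(1).integers(0, 256, size=(64, 64))
boxes = [(5, 5, 20, 20), (30, 10, 50, 25)]  # stand-in for OCR character boxes
feat = fuse_features(img, boxes)
```

A downstream classifier would then score this fused vector; in the real model, learned convolutional features replace these hand-crafted statistics.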
To evaluate its effectiveness, the team tested CTP-Net on two main datasets: FCTM and SACP. The first, FCTM, contains images of genuine and forged trademarks, while the second, SACP, includes a variety of forged images across categories such as contracts and trademark licenses.
To train CTP-Net, the data was split into training, validation, and test sets in an 8:1:1 ratio, using cross-entropy as the loss function and SGD as the optimizer.
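The training setup described above can be sketched as follows. The 8:1:1 split, the tiny logistic model, the learning rate, and the data are all illustrative assumptions; only the split ratio, the cross-entropy loss, and plain SGD come from the article.

```python
import numpy as np

rng = np.random.default_rng(0)

# 8:1:1 split over shuffled indices (exact protocol is an assumption).
n = 100
idx = rng.permutation(n)
train, val, test = idx[:80], idx[80:90], idx[90:]

# Toy binary data (not real document features).
X = rng.normal(size=(n, 4))
y = (X[:, 0] > 0).astype(float)

def bce(w, X, y):
    """Binary cross-entropy loss for a logistic model."""
    p = 1.0 / (1.0 + np.exp(-X @ w))
    return -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))

# One epoch of plain SGD on the training split.
w, lr = np.zeros(4), 0.1
for i in train:
    p = 1.0 / (1.0 + np.exp(-X[i] @ w))
    w -= lr * (p - y[i]) * X[i]  # per-sample gradient of the BCE loss
```

In practice `w` would be the weights of a deep network and the gradient would come from backpropagation, but the split/loss/optimizer loop has the same shape.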
CTP-Net outperformed other models on metrics such as F1-score, IoU, MCC, and AUC. Its ability to focus on texture features in character regions and reveal traces of forgery around text was key to this success.
Not everything is rosy, however: CTP-Net still faces challenges, notably detecting and localizing small-scale forged areas within a full image.
The model offers a technical solution that not only detects and locates forgeries but also represents a significant step forward in the fight against document forgery in the digital age, pointing the way toward future tools that stay one step ahead of forgers. In such a digitized world, ensuring the authenticity of documents has become a necessity.
This breakthrough was made jointly by the School of Computer Science and Electronic Engineering at Hunan University and the Department of Computer Science at the University of Hong Kong.
Read more at https://arxiv.org/pdf/2308.02158.pdf
September 5, 2023