Abstract:
It is common practice to digitally edit or ‘retouch’ facial images for various purposes, such as enhancing one’s appearance on social media or matrimonial sites, or even presenting an altered image as authentic proof. Because editing tools are readily available, digital data become easy to manipulate when regulations are not strictly enforced. In this paper, we apply a transfer learning approach, fine-tuning a VGG16 model pre-trained with ImageNet weights to classify retouched face images from the standard ND-IIITD Faces dataset. Furthermore, this study places strong emphasis on the selection of the optimiser employed during both the training and fine-tuning stages of the model, to achieve faster convergence and better overall performance. Our approach achieves a training accuracy of 99.54% and a validation accuracy of 98.98% with the transfer-learned VGG16 and the RMSprop optimiser. Moreover, it attains an overall accuracy of 97.92% in two-class (real vs. retouched) classification on the ND-IIITD dataset.
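As a minimal sketch of the approach the abstract describes, the following Keras code fine-tunes an ImageNet-pretrained VGG16 for two-class real-vs-retouched classification with the RMSprop optimiser. The input size, classification head, learning rates, and choice of unfrozen layers are illustrative assumptions, not details taken from the paper:

```python
# Sketch (not the authors' exact code): transfer learning with VGG16
# for binary real-vs-retouched face classification using RMSprop.
import tensorflow as tf
from tensorflow.keras import layers, models, optimizers
from tensorflow.keras.applications import VGG16

# Load the VGG16 convolutional base with ImageNet weights, without the
# original 1000-class classifier head. Input size (224x224) is assumed.
base = VGG16(weights="imagenet", include_top=False,
             input_shape=(224, 224, 3))
base.trainable = False  # freeze the pretrained base for the training stage

# Attach a small binary classification head (real / retouched).
model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),  # binary output
])

# Compile with RMSprop, the optimiser the abstract reports performing best.
model.compile(
    optimizer=optimizers.RMSprop(learning_rate=1e-4),
    loss="binary_crossentropy",
    metrics=["accuracy"],
)

# Fine-tuning stage: unfreeze the last convolutional block and recompile
# with a lower learning rate before continuing training. Which layers to
# unfreeze is an assumption for this sketch.
for layer in base.layers:
    layer.trainable = layer.name.startswith("block5")
model.compile(
    optimizer=optimizers.RMSprop(learning_rate=1e-5),
    loss="binary_crossentropy",
    metrics=["accuracy"],
)
```

In this two-stage setup, the frozen-base pass trains only the new head, and the fine-tuning pass adapts the deepest convolutional features at a reduced learning rate, the stage at which the abstract notes optimiser choice matters for convergence speed.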