dc.contributor.author |
Sheth, Kinjal Ravi |
|
dc.contributor.author |
Vora, Vishal S. |
|
dc.date.accessioned |
2024-11-20T06:31:18Z |
|
dc.date.available |
2024-11-20T06:31:18Z |
|
dc.date.issued |
2024-10-03 |
|
dc.identifier.citation |
Sheth, K. R., & Vora, V. S. (2024). An intelligent approach to detect facial retouching using Fine Tuned VGG16. International Journal of Biometrics, 16(6), 583–600. DOI: 10.1504/IJBM.2024.141937 |
en_US |
dc.identifier.uri |
http://10.9.150.37:8080/dspace//handle/atmiyauni/1756 |
|
dc.description.abstract |
It is a common practice to digitally edit or ‘retouch’ facial images for various purposes, such as enhancing one’s appearance on social media, on matrimonial sites, or even in images submitted as authentic proof. When regulations are not strictly enforced, digital data is easy to manipulate, as editing tools are readily available. In this paper, we apply a transfer learning approach by fine-tuning a pre-trained VGG16 model with ImageNet weights to classify retouched face images from the standard ND-IIITD faces dataset. Furthermore, this study places a strong emphasis on the selection of optimisers employed during both the training and fine-tuning stages of the model to achieve quicker convergence and enhanced overall performance. Our approach achieves impressive results, with a training accuracy of 99.54% and a validation accuracy of 98.98% for the fine-tuned (TL) VGG16 with the RMSprop optimiser. Moreover, it attains an overall accuracy of 97.92% in the two-class (real versus retouched) classification on the ND-IIITD dataset. |
en_US |
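For readers who want a concrete picture of the setup the abstract describes, the following is a minimal Keras/TensorFlow sketch of fine-tuning a VGG16 base with ImageNet weights for two-class (real versus retouched) classification with the RMSprop optimiser. The layer choices, learning rates, unfreezing strategy, and the `train_ds`/`val_ds` data pipelines are illustrative assumptions, not the authors' exact configuration.

```python
# Hedged sketch of transfer learning with VGG16: ImageNet-pretrained base,
# a new two-class head, and a two-stage (head training, then fine-tuning)
# schedule using RMSprop. Hyperparameters are assumptions for illustration.
import tensorflow as tf
from tensorflow.keras import layers, models, optimizers
from tensorflow.keras.applications import VGG16

IMG_SIZE = (224, 224)   # VGG16's default input resolution
NUM_CLASSES = 2         # real vs. retouched

# Load the convolutional base pre-trained on ImageNet, without its classifier head.
base = VGG16(weights="imagenet", include_top=False, input_shape=IMG_SIZE + (3,))
base.trainable = False  # stage 1: train only the new classification head

model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(
    optimizer=optimizers.RMSprop(learning_rate=1e-4),
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
# model.fit(train_ds, validation_data=val_ds, epochs=20)  # train_ds/val_ds: tf.data pipelines over ND-IIITD images

# Stage 2: unfreeze only the last convolutional block and continue training
# with a smaller learning rate for fine-tuning.
base.trainable = True
for layer in base.layers:
    if not layer.name.startswith("block5"):
        layer.trainable = False

model.compile(
    optimizer=optimizers.RMSprop(learning_rate=1e-5),
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
# model.fit(train_ds, validation_data=val_ds, epochs=10)
```

Swapping `optimizers.RMSprop` for `optimizers.Adam` in both `compile` calls reproduces the kind of optimiser comparison the abstract emphasises.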
dc.language.iso |
en |
en_US |
dc.publisher |
International Journal of Biometrics |
en_US |
dc.relation.ispartofseries |
16;6 |
|
dc.subject |
Adam |
en_US |
dc.subject |
Retouching |
en_US |
dc.subject |
RMSprop |
en_US |
dc.subject |
transfer learning |
en_US |
dc.subject |
TL |
en_US |
dc.subject |
VGG16 |
en_US |
dc.title |
An intelligent approach to detect facial retouching using Fine Tuned VGG16 |
en_US |
dc.type |
Article |
en_US |