Multimodal feature inputs enable improved automated textile identification

Magken George Enow Gnoupa, Andy T. Augousti, Olga Duran, Olena Lanets, Solomiia Liaskovska

Research output: Contribution to journal › Article › peer-review

Abstract

This study presents an advanced framework for fabric texture classification by leveraging macro- and micro-texture extraction techniques integrated with deep learning architectures. Co-occurrence histograms, local binary patterns (LBPs), and albedo-dependent feature maps were employed to comprehensively capture the surface properties of fabrics. A late fusion approach was applied using four state-of-the-art convolutional neural networks (CNNs): InceptionV3, ResNet50_V2, DenseNet, and VGG-19. Excellent results were obtained, with the ResNet50_V2 achieving a precision of 0.929, recall of 0.914, and F1 score of 0.913. Notably, the integration of multimodal inputs allowed the models to effectively distinguish challenging fabric types, such as cotton–polyester and satin–silk pairs, which exhibit overlapping texture characteristics. This research not only enhances the accuracy of textile classification but also provides a robust methodology for material analysis, with significant implications for industrial applications in fashion, quality control, and robotics.
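As an illustration of the micro-texture features mentioned above, the following is a minimal sketch of a basic 8-neighbour local binary pattern (LBP) histogram. It assumes the simplest LBP variant (3×3 neighbourhood, 256 bins); the paper's exact LBP parameters and the co-occurrence and albedo-dependent features are not specified here.

```python
import numpy as np

def lbp_codes(img: np.ndarray) -> np.ndarray:
    """Basic 8-neighbour LBP codes for the interior pixels of a grayscale image."""
    c = img[1:-1, 1:-1]
    # Clockwise neighbour offsets, starting at the top-left pixel.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        # Shifted view of the image aligned with the interior region.
        neigh = img[1 + dy: img.shape[0] - 1 + dy,
                    1 + dx: img.shape[1] - 1 + dx]
        # Set the corresponding bit where the neighbour >= centre pixel.
        codes |= (neigh >= c).astype(np.uint8) << bit
    return codes

def lbp_histogram(img: np.ndarray) -> np.ndarray:
    """Normalised 256-bin histogram of LBP codes: a simple micro-texture descriptor."""
    hist = np.bincount(lbp_codes(img).ravel(), minlength=256).astype(float)
    return hist / hist.sum()
```

In a pipeline like the one described, such a histogram would be one of several input modalities fed (alongside co-occurrence and albedo-based maps) into the CNN branches before late fusion.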
Original language: English
Article number: 31
Number of pages: 18
Journal: Textiles
Volume: 5
Issue number: 3
Early online date: 2 Aug 2025
DOIs
Publication status: Published - 2 Aug 2025
