Article

A Study on Perception of Visual–Tactile and Color–Texture Features of Footwear Leather for Symmetric Shoes

1 The Graduate Institute of Design Science, Tatung University, Taipei 104, Taiwan
2 The Department of Industrial Design, Tatung University, Taipei 104, Taiwan
3 International College, Tunghai University, Taichung 407, Taiwan
* Author to whom correspondence should be addressed.
Submission received: 6 July 2023 / Revised: 19 July 2023 / Accepted: 20 July 2023 / Published: 22 July 2023
(This article belongs to the Special Issue Symmetry/Asymmetry in Computer Vision and Image Processing)

Abstract

The study applies Kansei engineering to analyze the color and texture of footwear leather, using neural network verification to map consumers’ visual and tactile imagery onto varieties of leather. This supports the development of an advanced system for selecting footwear leather based on such impressions. First, representative word pairs denoting consumers’ visual and tactile perceptions of footwear leather were delineated. These perceptions were then evaluated through a sensibility assessment questionnaire administered on 54 leather samples provided by manufacturers, with each leather type codified in terms of visual and tactile sensibilities. Subsequently, a customized software algorithm was developed to extract the primary colors and their adhesiveness as color features from the leather sample images. Based on the grayscale values of the images and pixel neighborhoods, calculation methods such as LBP, SCOV, VAR, and SAC were applied to extract texture features. The derived color and texture feature values served as the input layer and the quantified sensory vocabulary values as the output layer. Backpropagation neural network training was conducted on 49 leather samples, with five samples reserved for testing, culminating in the verification of neural network training for three types and 17 combinations. The outcome was an optimal method for footwear leather Kansei engineering and neural network training, establishing a design process for leather footwear characteristics assisted by sensory vocabulary and a backpropagation neural network. Additionally, a computer-aided system for selecting footwear leather based on these impressions was designed and validated through footwear design. This study utilized symmetry in footwear design: by using the design of a single shoe to represent the imagery of a pair of symmetrical shoes, we verified whether the leather samples recommended by the leather imagery selection query system met the expected system input settings.

1. Introduction

With advancements in technology and elevations in human living standards, shoes have become a significant focal point of fashion [1,2]. Consequently, consumer demand for footwear has become increasingly stringent. Customers not only demand comfort and fit, but also focus on elegance and trendiness, and consider various psychological aspects. Utilizing a variety of materials, a myriad of shoe types can be created, with a significant proportion made from leather, due to its distinctive natural texture and how it feels post-production. Leather that is superior in color and textural quality is considered a symbol of taste and status, and the crafting of good pairs of symmetrical shoes is quite challenging [3]. Crafting a quality pair of shoes is a complex task, and with shifts in the economic environment placing significant pressure on the footwear industry, the ability to upgrade and transform has become vital for its development. Yeh [4] proposed that invoking emotional resonance is crucial for product design success in a highly competitive market. Modern leather manufacturers and designers pay considerable attention to the aesthetics, visual appearance, and tactile sensation of leather apparel, as these factors significantly impact purchasing decisions [5]. Key aspects of design that cannot be overlooked are the shape, color, material, quality, symmetry, and function of the products, and the sensory effects of the materials. However, designers often select materials based on their personal experiences and aesthetics, potentially leading to a gap between the product and consumers’ perception of it.
Symmetry plays an indispensable role in footwear design, profoundly influencing its innovation and diversity. This not only stems from the natural symmetry of human feet, but also from our basic cognition of beauty. Numerous studies [6,7,8] regard symmetry as a characteristic of beautiful objects, positing that beauty is a result of symmetry. Furthermore, Gjoni [9] indicates that consumers pursue design, with perfect symmetry being a key condition for product differentiation. Hence, symmetry is the cornerstone of footwear design, and designers must take it into due consideration and employ it ingeniously during the design process.
Smart manufacturing is the primary trajectory for global industrial development, wherein the manufacturing sector serves as the main battlefield for innovation-driven transformation and upgrades in the digital economy [10]. Shoe design is stepping into the era of intelligence, necessitating the harnessing of novel technologies in the conception of superior footwear designs [11]. There is an expectation of digitizing the selection of shoe material, particularly leather, so as to explore the relationship between affective vocabulary values and the inherent color and texture features of leather. By utilizing computer software to assist leather footwear design, and coupling objective leather data with consumer sensory perceptions for neural network training, designers can scientifically identify suggested leather types through the affective vocabulary values regarding leather footwear. A computer-aided system for leather sample evaluation is developed herein, providing a reference for designers. The goal is to enhance the competitiveness of footwear design amidst a highly competitive environment, meet market demands, and boost corporate competitiveness.

2. Literature Review

2.1. Kansei Engineering and Computer-Aided Kansei Engineering

Kansei Engineering is a product development method originating in Japan, developed by Nagamachi in the 1970s, which incorporates subjective perceptions and emotions into the design process [12]. The aim is to connect the emotional responses of consumers with the actual product design attributes. In shoe design, this method is highly applicable, creating products that deeply resonate with consumers, driving preference, and, ultimately, influencing purchasing decisions. A Computer-Aided Kansei Engineering (CAKE) system is an advanced application of Kansei Engineering. It uses computer technology to enhance and simplify the processes involved in the user-centered design method. CAKE systems primarily facilitate the quantitative analysis of subjective data collected during the Kansei Engineering process. The system can efficiently collect, organize, and analyze large data sets related to consumer emotional reactions to different product attributes. In relevant studies, Hsiao et al. [13] employed a Kansei Engineering System (KES) to convert consumer psychological ideas into quantifiable image vocabulary. Using the optimization characteristics of a genetic algorithm (GA), all component combinations were compared with their predicted counterparts to find optimized shaping combinations. Lai [14] proposed a user-oriented design methodology to transform user perception into the design of product elements. Wang et al. [15] proposed a sneaker design process combining Kansei engineering and artificial neural networks. Lin [16] applied Kansei Engineering and artificial neural networks to conduct sound image research, studying the relationship between the sound of electric shavers and consumer emotional evaluation. Huang and Cui [17] extracted features from target face images by means of Principal Component Analysis (PCA), reducing the dimensions of the image. An improved BP neural network was then adopted to classify the feature coordinates of the face image. Finally, the proposed face recognition algorithm was implemented in Matlab and trained with the improved BP neural network. Shieh and Yeh [18] primarily focused their research on running shoes, integrating neural networks and affective engineering. They sought to understand the interplay between shoe form and consumers’ emotional responses. Employing sensory adjectives for running shoes and using survey methods, they gathered customer questionnaire data. This data was subsequently preprocessed using Principal Component Analysis (PCA) and Partial Least Squares (PLS) to reduce data dimensions, remove redundancy, and clarify mixed or unclear learning data. Finally, they employed PCA-NN and PLS-NN neural networks to establish predictive models.

2.2. Five Sensory Systems

Current studies usually utilize five sensory systems to acquire various forms of information from the external environment, and these sensory systems operate independently and interactively [19]. Schifferstein et al. [20] found that people could understand most product details through visual and tactile senses. Stadtlander and Murdoch [21] noted that about 60% of the identification and description of product characteristics were obtained through the visual sense, and 32% through the tactile sense. Pietra [22] designed and preliminarily tested a virtual reality driving simulator capable of communicating tactile and visual information to promote ecologically sustainable driving behaviors. Abbasimoshaei and Kern [23] explored the relationship between hardness and roughness perceptions during the pressing of materials with different surface textures under different forces. The results showed a significant correlation between perceptions of hardness and roughness, as well as an influence of the applied force on the perception of roughness. Osman et al. [24] presented a method for surface detection using a robot and vibrotactile sensing. Machine learning algorithms were used to classify different surfaces based on the vibrations detected by an accelerometer.
From a physical perspective, light inherently lacks ‘color’; color is purely a perception created by one’s eyes and brain in response to light frequencies. Different spectra can be perceived as the same color by humans, indicating that color definition is highly subjective. In 1931, the International Commission on Illumination (CIE) proposed the first generation of chromaticity coordinates, establishing color identities based on the corresponding values of mixed red (R), green (G), and blue (B) light, and RGB became a universal color identification method. The YCbCr color space is predominantly utilized in continuous image processing in television and computer vision technology, or in digital photography systems. Stemming from the CCIR Recommendation 601 (1990) standard, it represents color through luminance (Y) and the chroma of blue and red (Cb, Cr). Here, Y is the grayscale value used when converting color images to grayscale. The conversion formulae between YCbCr and RGB are as follows:
$$Y = 0.299R + 0.587G + 0.114B \quad (1)$$
$$C_b = -0.169R - 0.331G + 0.500B + 128 \quad (2)$$
$$C_r = 0.500R - 0.419G - 0.081B + 128 \quad (3)$$
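As a minimal illustrative sketch of this conversion (written in Python with NumPy; the study itself used Matlab, and the function name here is our own):

```python
import numpy as np

def rgb_to_ycbcr(img_rgb: np.ndarray) -> np.ndarray:
    """Convert an H x W x 3 RGB image (values 0-255) to YCbCr per CCIR 601.

    Y is the luminance (the grayscale value used when converting color
    images to grayscale); Cb and Cr are offset by 128 so that all three
    channels stay within the 0-255 range.
    """
    r = img_rgb[..., 0].astype(float)
    g = img_rgb[..., 1].astype(float)
    b = img_rgb[..., 2].astype(float)
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.169 * r - 0.331 * g + 0.500 * b + 128.0
    cr = 0.500 * r - 0.419 * g - 0.081 * b + 128.0
    return np.stack([y, cb, cr], axis=-1)
```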
Pass et al. [25] proposed the Color Coherence Vector (CCV) method to improve on the shortcomings of the color histogram. This method, based on color histogram processing, also considers color space information to extract image color features. A same-color area is called a “cohesive area,” and the number of pixels of a certain color in the cohesive area is defined as the “cohesiveness” of this color. Pixels are classified into two types, cohesive and non-cohesive, to measure pixel cohesiveness: cohesive pixels lie in a continuous area of at least a certain size, while non-cohesive pixels do not. The color coherence vector can thus represent the classification of each color in the image. Liu et al. [26] used the Grab Cut auto segmentation algorithm to segment garment images and extract the image’s foreground; the color coherence vector (CCV) and the dominant color method were then adopted to extract color features for garment image retrieval. Reddy et al. [27] proposed an algorithm which incorporates the advantages of various other algorithms to improve the accuracy and performance of retrieval; the accuracy of color histogram-based matching can be increased by using the Color Coherence Vector (CCV) for successive refinement.
To illustrate, consider a 6 × 6 grayscale image, with each pixel’s grayscale value as shown in Figure 1. The image can be quantified into three color components. Each quantified color component is called a bin. For instance, bin1 contains grayscale values from 10 to 19, bin2 contains grayscale values from 20 to 29, and bin3 contains grayscale values from 30 to 39. After quantification, we can obtain the results as shown in Figure 2.
Using the “Connected Component Labeling” method, we can identify connected regions, as shown in Figure 3. Each component is labeled with an English letter (A, B, C, D, E). Different letters are used to label regions of the same color that lie in different contiguous areas.
Create a table to record the color corresponding to each mark and the quantity of this color, as shown in Table 1.
We now set an adhesive threshold value T = 4 (the threshold value can be chosen as needed). If the number of pixels in a connected component exceeds the threshold T, these pixels are adhesive; if it is less than the threshold, they are non-adhesive. Alpha (α) represents the number of adhesive pixels, and beta (β) represents the number of non-adhesive pixels. In Table 2, the numbers of pixels marked A, B, and E all exceed the threshold T, hence they are marked as adhesive pixels; the numbers of pixels marked C and D are less than the threshold T, hence they are marked as non-adhesive pixels. In the end, the color adhesion vector is obtained. The color adhesion vector of this image can be expressed as ((17, 3), (15, 0), (0, 3)).
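The CCV computation in the example above can be sketched as follows (Python, with SciPy's connected-component labeling standing in for the "Connected Component Labeling" step; the function name, the 4-connectivity, and the treatment of region sizes exactly equal to T are our assumptions):

```python
import numpy as np
from scipy import ndimage

def color_coherence_vector(gray, bin_edges, tau):
    """Return one (alpha, beta) pair per quantized bin.

    Pixels belonging to a connected region of at least tau pixels are
    counted as coherent/adhesive (alpha); all remaining pixels of that
    bin are incoherent/non-adhesive (beta).
    """
    bins = np.digitize(gray, bin_edges)           # quantize into bins
    ccv = []
    for b in np.unique(bins):
        mask = (bins == b)
        labeled, n = ndimage.label(mask)          # 4-connected regions
        sizes = ndimage.sum(mask, labeled, range(1, n + 1))
        alpha = int(sizes[sizes >= tau].sum())    # adhesive pixels
        beta = int(sizes[sizes < tau].sum())      # non-adhesive pixels
        ccv.append((alpha, beta))
    return ccv

# For the worked example: three bins (10-19, 20-29, 30-39), threshold T = 4:
# ccv = color_coherence_vector(gray, bin_edges=[10, 20, 30, 40], tau=4)
```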
The HSI color space is a color determination method grounded in three fundamental properties of color: hue, saturation, and intensity. Stemming from our visual system, the HSI color space conforms more closely to human visual characteristics than the RGB color space. Research by Smith and Chang [28] pointed out that the hue in the HSI color space is composed of the three primary colors of red, green, and blue, distributed every 120°. The hue component of the image is quantized every 20°; quantizing the hue component into 18 (360/20) bins is sufficient to distinguish the various colors in the hue component. The saturation component only needs to be quantized into three bins to provide enough perceptual tolerance. Therefore, these can be combined into 18 × 3 = 54 kinds of quantized color, encoded as numbers 1–54. By using the adhesive vector method to judge a pixel’s eight-neighborhood and defining a high threshold Tα, if the number of pixels with the same color as the center pixel exceeds Tα, the adhesiveness (α) of this color is incremented by 1. By scanning all the (M − 2) × (N − 2) interior pixels of the image, we can obtain all the high-adhesive colors in the image. The main colors of the image and their corresponding adhesion values, represented by the numbers 1–54, can be used to represent the color characteristics of the image. Thoriq et al. [29] identified the ripeness level of bananas from images of intact, unpeeled plantains; the images were preprocessed using HSI (Hue Saturation Intensity) color space transformation feature extraction.
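A short sketch of the 18 × 3 quantization and encoding into codes 1–54 (Python; uniform saturation thirds over [0, 1] are assumed, as the exact bin boundaries are not specified in the text):

```python
import numpy as np

def quantize_hs(h_deg: np.ndarray, s: np.ndarray) -> np.ndarray:
    """Encode each pixel as one of 54 quantized colors (codes 1-54).

    Hue (0-360 degrees) is quantized every 20 degrees into 18 bins and
    saturation (0-1) into 3 bins; code = hue_bin * 3 + sat_bin + 1.
    """
    hue_bin = ((h_deg % 360) // 20).astype(int)       # 0..17
    sat_bin = np.clip((s * 3).astype(int), 0, 2)      # 0..2
    return hue_bin * 3 + sat_bin + 1                  # 1..54
```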
Texture refers to the grooves on the surface of an object that appear uneven, with different characteristics, such as direction and roughness, which are the qualities of the surface of the substance. Mathematically, texture is the visual representation of the correlation of gray level changes or color space in adjacent pixels in the image, such as shape, stripes, color blocks, etc.
Local Binary Pattern (LBP) is a grayscale-invariant texture statistic and a powerful feature algorithm in texture classification problems. It uses a 3 × 3 computational mask to calculate the difference between the central pixel and its eight neighborhood pixels: each neighborhood pixel is compared with the central pixel, producing a binary difference mask that is multiplied element-wise by a weight mask and then summed. The LBP operation formulae are as follows [30]; a neighborhood pixel is coded 1 if its value is greater than or equal to the central pixel value, and 0 otherwise. The schematic diagram of the LBP operation is shown in Figure 4.
$$d_i = \begin{cases} 1 & \text{if } p_i \geq p_{center} \\ 0 & \text{if } p_i < p_{center} \end{cases} \quad (4)$$
p_i: the values of the eight neighborhood pixels of the 3 × 3 mask.
p_center: the value of the central pixel of the 3 × 3 mask.
$$LBP_P = \sum_{i=0}^{P-1} d_i \, 2^i \quad (5)$$
2^i: the weight assigned to each neighborhood pixel position.
d_i: the 0 or 1 value obtained by comparing each neighboring pixel with the central pixel.
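A minimal sketch of the 3 × 3 LBP operation of Formulae (4) and (5) (Python; the clockwise neighbor ordering and weight assignment are one conventional choice):

```python
import numpy as np

def lbp_3x3(patch: np.ndarray) -> int:
    """LBP code of a 3 x 3 patch: threshold the 8 neighbors against the
    center (1 if greater or equal, else 0) and weight them by powers of 2."""
    center = patch[1, 1]
    # the 8 neighbors, ordered clockwise from the top-left corner
    rows = [0, 0, 0, 1, 2, 2, 2, 1]
    cols = [0, 1, 2, 2, 2, 1, 0, 0]
    bits = (patch[rows, cols] >= center).astype(int)  # d_i of Formula (4)
    return int(np.dot(bits, 2 ** np.arange(8)))       # sum of d_i * 2^i
```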
Ojala et al. [31] achieved excellent performance in their research by using LBP analysis on images. Additionally, they introduced three other calculation methods to identify the relationship between image pixels and grayscale texture: SCOV, VAR, and SAC. Similar to LBP, these three methods also operate on 3 × 3 pixel areas, as shown in Figure 5, and calculate the correlations between the eight neighboring pixels and the center pixel. A brief description of each is provided below, followed by a combined code sketch after the list:
(1)
SCOV measures the correlation between textures, using unnormalized local grayscale variables.
$$SCOV = \frac{1}{4}\sum_{i=1}^{4}(x_i - \mu)(x_i' - \mu) \quad (6)$$
where $\mu$ is the local average, that is, $\mu = (x_1 + x_2 + x_3 + x_4 + x_1' + x_2' + x_3' + x_4')/8$.
(2)
VAR measures the variation of gray scale values.
$$VAR = \frac{1}{8}\sum_{i=1}^{4}\left(x_i^2 + x_i'^2\right) - \mu^2 \quad (7)$$
(3)
SAC measures the correlation of the 8 values in the area, using normalized grayscale variables; the SAC value is limited to between −1 and 1.
$$SAC = \frac{SCOV}{VAR} \quad (8)$$
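A combined sketch of the three measures (Python; the exact layout of the four centro-symmetric pairs follows Figure 5, so the ordering assumed here is illustrative):

```python
import numpy as np

def scov_var_sac(patch: np.ndarray):
    """SCOV, VAR, and SAC of a 3 x 3 patch, computed from the four
    centro-symmetric neighbor pairs (x_i, x_i') around the center pixel."""
    p = patch.astype(float)
    pairs = [((0, 0), (2, 2)), ((0, 1), (2, 1)),
             ((0, 2), (2, 0)), ((1, 2), (1, 0))]      # assumed pair layout
    x = np.array([p[a] for a, _ in pairs])
    xp = np.array([p[b] for _, b in pairs])
    mu = (x.sum() + xp.sum()) / 8.0                   # local mean of 8 neighbors
    scov = np.mean((x - mu) * (xp - mu))              # Formula (6)
    var = np.sum(x ** 2 + xp ** 2) / 8.0 - mu ** 2    # Formula (7)
    sac = scov / var if var != 0 else 0.0             # Formula (8), in [-1, 1]
    return scov, var, sac
```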
In relevant research on color and texture, Hayati [32] classified types of roses by applying the k-nearest neighbor (K-NN) algorithm, based on extracted hue, saturation, value (HSV) color characteristics and local binary pattern (LBP) texture. Rusia and Singh [33] proposed a practical approach combining Local Binary Patterns (LBP) and convolutional neural network-based transfer learning models to extract low-level and high-level features; three color spaces (RGB, HSV, and YCbCr) were analyzed to understand the impact of color distribution on real and spoofed faces for the NUAA benchmark dataset. Vangah et al. [34] combined the statistical Local Binary Pattern (LBP), with fusion of the Hue Saturation Value (HSV) and Red Green Blue (RGB) color spaces, with frequency descriptors (Gabor filter and Discrete Cosine Transform (DCT)) for the extraction of visual textural and colorimetric features from direct-view images of rocks.
This study converts the RGB values of the sample image into HSI values, uses the hue (H) and saturation (S) components for color analysis, and uses color adhesion to determine the six colors with the highest adhesion as the main colors; the corresponding adhesion combination is used as the color feature of the sample image. The decomposed intensity component (I) is replaced by the Y-value calculation of the YCbCr color space to obtain the image grayscale values, and methods such as LBP, SCOV, VAR, and SAC are used to analyze the correlations between image grayscale values for image texture feature extraction.

2.3. The Backpropagation Neural Network

The backpropagation neural network (BPN) generally uses the error backpropagation (EBP) algorithm for learning. Through its hidden layer design, it learns the internal correspondence between the input values and the target output values of the training samples. By comparing the network’s output value with the original target value of the training sample, the network error is obtained, and the Gradient Steepest Descent Method is used to minimize the error function by modifying the network weighting values (weights and bias weights). The basic structure is shown in Figure 6.
1.
Set the parameters of the backpropagation neural network. According to the type of problem, the network designer sets the number of neurons, the number of hidden layers, the learning rate, and the error tolerance, etc., to determine the network structure.
2.
Set the initial weight matrices: initialize the weight matrices W_xh and W_hy, and the bias weight vectors θ_h and θ_y, with uniformly distributed random numbers between 0 and 1.
3.
Set the input and output of the network: input the training sample vector X and the target output vector T. The input values X can be any real numbers. In BPN, the log-sigmoid transfer function is often used as the nonlinear transformation function of the neurons; its formula is as follows:
$$f(X) = \frac{1}{1 + e^{-X}}$$
The inferred output values of the network range between [0, 1], and the target output values T also fall between [0, 1].
4.
Calculate the output vector Y
(a)
Hidden layer output vector H
$$net_h = \sum_{i} W\_xh_{ih} \, X_i - \theta\_h_h$$
$$H_h = f(net_h) = \frac{1}{1 + e^{-net_h}}$$
(b)
Deduce the output vector Y
$$net_j = \sum_{h} W\_hy_{hj} \, H_h - \theta\_y_j$$
$$Y_j = f(net_j) = \frac{1}{1 + e^{-net_j}}$$
5.
Calculate the gap δ:
(a)
Output layer gap δ
$$\delta_j = Y_j (1 - Y_j)(T_j - Y_j)$$
(b)
Hidden layer gap δ
δ j = H h ( 1 H h ) j W _ x h y h j δ j
6.
Through the concept of the steepest slope, calculate the weight matrix correction value ∆W and the bias weight vector correction value ∆θ:
(a)
Output layer weight matrix correction amount ∆W_hy and bias weight vector correction amount ∆θ_y
$$\Delta W\_hy_{hj} = \eta \, \delta_j H_h$$
$$\Delta \theta\_y_j = -\eta \, \delta_j$$
(b)
Hidden layer weight matrix correction ∆W_xh and bias weight vector correction ∆θ_h
$$\Delta W\_xh_{ih} = \eta \, \delta_h X_i$$
$$\Delta \theta\_h_h = -\eta \, \delta_h$$
7.
Update the weight matrices W and the bias weight vectors θ:
(a)
Output layer weight matrix W_hy and bias weight vector θ_y
$$W\_hy_{hj} = W\_hy_{hj} + \Delta W\_hy_{hj}$$
$$\theta\_y_j = \theta\_y_j + \Delta \theta\_y_j$$
(b)
Hidden layer weight matrix W_xh and bias weight vector θ_h
$$W\_xh_{ih} = W\_xh_{ih} + \Delta W\_xh_{ih}$$
$$\theta\_h_h = \theta\_h_h + \Delta \theta\_h_h$$
8.
To make the error converge (approach zero), repeat steps 3–7 until convergence, or until the preset number of learning cycles has been executed;
9.
To verify the learning results, compute the root mean square error:
$$RMSE = \sqrt{\frac{\sum_{p=1}^{M} \sum_{j=1}^{N} \left(T_j^p - Y_j^p\right)^2}{M N}}$$
in which:
$T_j^p$ is the target output value of the j-th output unit for the p-th sample;
$Y_j^p$ is the inferred output value of the j-th output unit for the p-th sample;
M is the number of samples;
N is the number of processing units in the output layer.
Once the BPN network has converged as above, i.e., learning has been achieved, the network’s recall can be performed as follows:
1.
Set the parameters of the BPN Network;
2.
Read in the weight matrices W_xh and W_hy, and the bias weight vectors θ_h and θ_y;
3.
Read in the input vector X of the test example;
4.
Calculate the inferred output vector Y:
(a)
Hidden layer output vector H
$$net_h = \sum_{i} W\_xh_{ih} \, X_i - \theta\_h_h$$
$$H_h = f(net_h) = \frac{1}{1 + e^{-net_h}}$$
(b)
Inference output vector Y
$$net_j = \sum_{h} W\_hy_{hj} \, H_h - \theta\_y_j$$
$$Y_j = f(net_j) = \frac{1}{1 + e^{-net_j}}$$
When the network completes the above learning process, it can use the recall steps to identify the input sample.
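The training steps 1–9 and the recall pass can be condensed into the following sketch (Python/NumPy; the learning rate, epoch count, and random initialization are placeholders, and the bias updates follow the net = Wx − θ convention of the formulas above):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_bpn(X, T, n_hidden, eta=0.5, epochs=10_000):
    """Minimal one-hidden-layer BPN trained with error backpropagation.
    X: samples x inputs; T: samples x outputs, both scaled to [0, 1]."""
    n_in, n_out = X.shape[1], T.shape[1]
    W_xh = rng.uniform(0, 1, (n_in, n_hidden))  # step 2: random initial weights
    W_hy = rng.uniform(0, 1, (n_hidden, n_out))
    th_h = rng.uniform(0, 1, n_hidden)
    th_y = rng.uniform(0, 1, n_out)
    for _ in range(epochs):                     # step 8: repeat steps 3-7
        for x, t in zip(X, T):
            H = sigmoid(x @ W_xh - th_h)        # step 4a: hidden output
            Y = sigmoid(H @ W_hy - th_y)        # step 4b: inferred output
            d_y = Y * (1 - Y) * (t - Y)         # step 5a: output-layer delta
            d_h = H * (1 - H) * (W_hy @ d_y)    # step 5b: hidden-layer delta
            W_hy += eta * np.outer(H, d_y)      # steps 6-7: weight updates
            th_y -= eta * d_y
            W_xh += eta * np.outer(x, d_h)
            th_h -= eta * d_h
    return W_xh, th_h, W_hy, th_y

def recall(x, W_xh, th_h, W_hy, th_y):
    """Recall phase: a forward pass with the learned weights."""
    H = sigmoid(x @ W_xh - th_h)
    return sigmoid(H @ W_hy - th_y)
```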

3. Materials and Methods

3.1. Research Process

Neural networks can provide model construction for non-linear and complex research topics. Because consumer aesthetic experience is complex and changeable, relying solely on Kansei engineering to interpret responses leaves multiple possibilities open and is insufficient on its own; combined with neural networks, better research results can be achieved. The purpose of this study was to establish an auxiliary design process based on Kansei engineering, combined with neural network training, to assist shoe designers in analyzing the characteristics of shoe leather. The study first used 54 shoe leather samples provided by the manufacturers, measured visual and tactile sensory perceptions, and explored the correlation between leather color, texture, and lexical imagery.
The first stage was the sensory measurement part. First, adjectives suitable for describing leather were collected and, through discussion using the focus group method, pairs of visual–tactile adjectives were obtained for use in the visual–tactile sensibility assessment questionnaire. In the questionnaire survey, each sample was rated against each sensory vocabulary pair. After the experiment, the scores were statistically aggregated to obtain the average sensory vocabulary value for each leather sample.
The second stage was the extraction and classification of sample features. Pictures of the leather samples provided by the manufacturers were taken and saved, an image processing program was written, and the color feature values and texture feature values of the leather pictures were extracted to build the leather sample database. Since there were only 54 leather samples, in order to give the neural network sufficient training samples for good verification results, 49 samples were used for neural network training and 5 samples were reserved for neural network verification. The color and texture feature values of the 54 leather samples were divided into 5 groups through the K-Means method of the statistical software SPSS, and the center samples of these 5 groups were used as the verification samples of the neural network. The remaining 49 samples were used for neural network training.
The third stage involved the construction and training of the Back Propagation Neural Network. The color and texture feature values of the training samples were used as the input layer for the neural network, and the quantified values obtained from the Kansei Engineering questionnaire were used as the output values for neural network learning and training. The feature values of the validation samples were then used as the input layer, and the degree of error between the predicted values generated by the trained neural network and the quantified values of the Kansei questionnaire were compared to assess the convergence effect. A user interface was designed for a computer-aided leather footwear evaluation query system, and the output samples were used to design footwear. Since the left and right feet of footwear have symmetry, the design verification used one shoe sample for evaluation, verified the Kansei image numerical values of the footwear, and compared these with the original input Kansei numerical values.
This study was conducted in three stages. Each task is briefly described, as shown in Figure 7.

3.2. Leather Sample Collection

In this study, a catalog of leather samples provided by a footwear company was employed, and experts in footwear design were invited to make recommendations. After multiple assessments and screenings, we finally selected 54 samples of various types of leather textures that were most representative. For each leather texture, experts in footwear design recommended and selected colors frequently used and representative in shoe design. The leather sample book provided by the shoe company for this study is shown in Figure 8. The leather samples were numbered in the order of the leather sample book; this effectively random ordering kept similar samples from appearing adjacently during the sensibility assessment questionnaire survey and reduced confusion during the questionnaire test. Referring to the literature, the types of leather were divided into: bead surface leather, waxed leather, embossed leather, and throw-flower oil leather.
To obtain information related to the sensory imagery of shoe leather samples, a total of 80 suitable adjectives for describing leather were collected from shoe and clothing fashion magazines, leather-related books, and the Internet. A focus group discussion on the initially collected 80 sensory words related to shoe leather imagery, together with the shoe leather samples, was conducted, and 10 groups of visual and tactile sensory vocabulary pairs were selected through expert discussion. The design of the content of the sensibility assessment questionnaire is shown in Table 3.

3.3. The Questionnaire Subjects

The questionnaire subjects were 35 design professionals, who had keen observation and sensibility and could make suitable sensory evaluations of color and texture, together with 45 randomly surveyed participants included to reflect consumers’ intuitive feelings and perceptions. The testers observed and touched the leather samples provided by the manufacturers and scored the sensory impression of each sample using the 10 pairs of sensory vocabulary words in the visual–tactile questionnaire. The average score of the 10 pairs of visual–tactile sensory vocabulary words for each sample was obtained and divided by 7 (the sensory scores used a 7-level scale) to normalize the values to between 0 and 1 for neural network training.

3.4. Leather Sample Photography

The shooting equipment was a Canon EOS 650D, and shooting was conducted under abundant natural light. The set-up platform was at a 20° angle to the desktop, with the camera lens at an appropriate distance and parallel to the platform. The camera settings were a focal length of 135 mm, an aperture of f/5.6, a shutter speed of 1/30 s, an ISO of 800, and a photo size of 5184 × 3456 pixels. After shooting, the photos were cropped using Adobe Photoshop software. The central 160 × 160 pixels of each photo were used as the images for the image feature extraction program, as shown in Figure 9.

3.5. Leather Sample Feature Extraction

The RGB values of the 160 × 160 pixels image of the leather sample were converted to HSI values, and then the hue (H), saturation (S), and intensity (I) components were decomposed and analyzed for color and texture, separately, to extract the color features and texture features of the leather samples, as shown in Figure 10.

3.6. Leather Sample Color Feature Extraction

The hue component was quantized into 18 bins and the saturation component into 3 bins, which combined into 18 × 3 = 54 quantized colors, encoded as numbers 1–54. Using the color adhesion vector calculation and scanning the pixels of the entire image, the six colors with the highest color adhesion in the image were extracted as the main colors, and the corresponding color adhesion values were calculated at the same time. The values of the main colors ranged from 1 to 54, and the highest possible color adhesion did not exceed 25,600 (160 × 160). A color feature vector of 6 (main colors) + 6 (corresponding color adhesion values) was obtained. Matlab software was used to write a color analysis program to obtain the color feature vector data of each sample.
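A sketch of this dominant-color scan (Python, building on the quantize_hs sketch above; the adhesion threshold Tα is not specified in the text, so the default here, which requires all eight neighbors to match, is an assumption):

```python
import numpy as np

def color_features(codes: np.ndarray, t_alpha: int = 8, top_k: int = 6):
    """12-value color feature vector: the top_k most adhesive color codes
    plus their adhesion counts.

    codes: H x W array of quantized color codes (1-54, e.g. from quantize_hs).
    For each interior pixel, if at least t_alpha of its 8 neighbors share
    its code, that code's adhesion count is incremented.
    """
    H, W = codes.shape
    adhesion = np.zeros(55, dtype=int)                  # indexed by code 1..54
    for r in range(1, H - 1):
        for c in range(1, W - 1):                       # (M-2) x (N-2) scan
            center = codes[r, c]
            nbhd = codes[r - 1:r + 2, c - 1:c + 2]
            if (nbhd == center).sum() - 1 >= t_alpha:   # 8-neighborhood test
                adhesion[center] += 1
    top = np.argsort(adhesion)[::-1][:top_k]            # six main colors
    return np.concatenate([top, adhesion[top]])
```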

3.7. Leather Sample Texture Feature Extraction

The RGB values of the 160 × 160 pixel image of the leather sample were converted to HSI values, and the intensity component (I) was separated to convert the color image into a grayscale image. Since I = (R + G + B)/3, the pure colors (255,0,0), (0,255,0), and (0,0,255) would all convert to the same I value; to avoid this, the calculation of the intensity component was changed to the luminance (Y) calculation of the YCbCr color space, as in Formula (1). Methods such as LBP, SCOV, VAR, and SAC were then used to analyze the correlations between the grayscale values of the images and extract image texture features, as in Formulae (5)–(8).
The research divided the 160 × 160 pixel grayscale image of each leather sample into 25 blocks of 32 × 32 pixels and analyzed the texture of each small area with the 4 methods LBP, SCOV, VAR, and SAC. In each 32 × 32 area, a 3 × 3 texture mask operation was performed over the interior pixels, yielding (32 − 2) × (32 − 2) = 900 quantized values, as shown in Figure 10. These values were averaged per block to obtain 25 values representing the textural features of each sample.
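A sketch of the block-wise texture extraction (Python, reusing the lbp_3x3 and scov_var_sac sketches above; for SCOV, VAR, or SAC, one passes a wrapper that selects the desired component):

```python
import numpy as np

def texture_features(gray: np.ndarray, op) -> np.ndarray:
    """25 block-averaged texture values for a 160 x 160 grayscale image.

    The image is split into 25 blocks of 32 x 32 pixels; `op` is applied
    to every interior 3 x 3 window, giving (32-2) x (32-2) = 900 values
    per block, which are then averaged into one value per block.
    """
    feats = []
    for br in range(5):
        for bc in range(5):
            block = gray[br * 32:(br + 1) * 32, bc * 32:(bc + 1) * 32]
            vals = [op(block[r - 1:r + 2, c - 1:c + 2])
                    for r in range(1, 31) for c in range(1, 31)]
            feats.append(np.mean(vals))
    return np.array(feats)

# Usage: texture_features(gray, lbp_3x3), or for VAR, for example:
# texture_features(gray, lambda p: scov_var_sac(p)[1])
```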

3.8. Cluster Analysis

SPSS software was used to analyze the leather data. The leather image data was analyzed through the LBP, SCOV, VAR, and SAC methods to obtain 4 different texture data sets. Each of the 4 texture features was paired with the image color features. The data obtained from the 54 leather samples were divided into 5 groups by the K-Means method, and the representative sample of each group was obtained, making it convenient to find the center sample of each group.
In order to prove the accuracy of the trained neural network, the color adhesion data obtained from the leather sample images were respectively paired with the 4 different texture analysis data sets, and SPSS software was used for K-Means cluster analysis. After division into 5 groups, the 5 representative samples were taken as the center samples for verification. After the calculation was completed, the center samples obtained from color adhesion and LBP were samples 33, 2, 20, 37, and 32; from color adhesion and SCOV, samples 33, 2, 20, 37, and 32; from color adhesion and VAR, samples 29, 2, 20, 37, and 51; and from color adhesion and SAC, samples 33, 2, 20, 37, and 51.
Among them, the cluster containing sample 2 had only one member, meaning no other sample was similar to it; it therefore could not be used as a representative verification sample and was classified as a training sample. After comparison, it was decided to use samples 20, 32, 33, 37, and 51 as the verification samples for the neural network, as shown in Table 4. The remaining 49 samples were used for neural network training.
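A sketch of selecting the verification samples (Python, with scikit-learn's KMeans as a stand-in for the SPSS K-Means procedure; choosing the member nearest the cluster centroid as the "center sample" is our reading of the text):

```python
import numpy as np
from sklearn.cluster import KMeans

def pick_validation_samples(features: np.ndarray, k: int = 5):
    """Cluster the sample feature vectors into k groups and return, for
    each group, the index of the sample closest to the cluster centroid."""
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(features)
    centers = []
    for g in range(k):
        idx = np.where(km.labels_ == g)[0]
        dist = np.linalg.norm(features[idx] - km.cluster_centers_[g], axis=1)
        centers.append(int(idx[np.argmin(dist)]))   # center sample of group g
    return centers
```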

3.9. The BPN Network

To conduct neural network learning, the BPN network takes the color feature vectors of leather sample images and pairs them with four different texture analysis results as the input layer, and, as the output layer, takes the sensory word pairs from the 10 groups of visual–tactile questionnaires.
In this study, MATLAB software was used to write programs for BPN network training and validation. In constructing the BPN network, the number of neurons in the input layer, the number of neurons in the hidden layer, the number of neurons in the output layer, the number of training iterations, and the source of verification samples were specified. The number of neurons in the hidden layer of a backpropagation neural network affects the network’s ability to describe problems; in this study, it was set equal to the number of input layer neurons plus the number of output layer neurons.
This study used the measurements obtained from the visual–tactile sensory language questionnaire as the output layer of the backpropagation neural network, and the color features and texture features of the leather sample images as the input layer. The 49 training samples were used as the input layer, allowing the backpropagation neural network to receive training, and, then, the five selected center samples were used as verification samples to verify the trained neural network. The calculation formula for the error rate in the study is:
$$Error\ Rate = \frac{T - S}{S}$$
where:
- T represents the test result;
- S represents the survey result.
The verification result was the visual–tactile sensory language pair value obtained from the trained BPN network, and the questionnaire result was the normalized questionnaire data, with values between 0 and 1.
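The error-rate computation, averaged over the 10 word pairs as reported in Section 4, can be sketched as follows (Python; taking the absolute value before averaging is our assumption, since the reported averages are all positive):

```python
import numpy as np

def average_error_rate(test: np.ndarray, survey: np.ndarray) -> float:
    """Mean of |T - S| / S over the 10 sensory word pairs, in percent."""
    return float(np.mean(np.abs(test - survey) / survey) * 100)
```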

4. Results

4.1. Training of BPN-A1~A4 with Color and Texture Features as Input Layer

The input layer consisted of the quantization results of the 49 leather samples, which yielded a 12-value color feature vector and a 25-value texture feature vector, along with the sample numbers, totaling 39 neurons. The output layer consisted of 10 neurons representing the averaged values of the 10 visual–tactile sensory linguistic evaluation pairs, normalized between 0 and 1. There was one hidden layer with 49 neurons, the sum of the neurons in the input and output layers. The BPN neural network structure is depicted in Figure 11.

4.2. A1 Color Feature with LBP Texture Feature

The color features obtained from LBP analysis and the texture features were used as the input layer. After 10,000 iterations of learning, the mean squared error (MSE) converged to 0.000997, as shown in Figure 12 and Figure 13. The texture quantization feature vectors of the previously selected 5 validation samples were used as the input layer to validate the trained BPN network. The results are shown in Table 5.
The validation results were the visual–tactile sensory linguistic evaluation values obtained from the BPN network A1, and the questionnaire results were the data obtained from the questionnaire and normalized between 0 and 1. The verification results showed that the average error rate for sample 20 was 18.681%, for sample 32 it was 11.34%, for sample 33 it was 8.53%, for sample 37 it was 23.34%, and for sample 51 it was 20.38%. The overall average error rate was 16.45%.

4.3. A2 Color Feature with SCOV Texture Feature

The color features obtained from the SCOV analysis and the texture features were used as the input layer. After 10,000 iterations of learning, the MSE converged to 0.00373, as shown in Figure 14 and Figure 15. The texture quantization feature vectors of the previously selected 5 validation samples were used as the input layer to validate the trained BPN network. The results are shown in Table 6.
The validation results were the visual–tactile sensory linguistic evaluation values obtained from the BPN network A2, and the questionnaire results were the data obtained from the questionnaire and normalized between 0 and 1. The verification results showed that the average error rate for sample 20 was 16.15%, for sample 32 it was 9.67%, for sample 33 it was 13.99%, for sample 37 it was 17.6%, and for sample 51 it was 27.42%. The overall average error rate was 16.79%.

4.4. A3 Color Feature with VAR Texture Feature

The color features obtained from the VAR analysis and the texture features were used as the input layer. After 10,000 iterations of learning, the MSE converged to 0.000999, as shown in Figure 16 and Figure 17. The texture quantization feature vectors of the previously selected 5 validation samples were used as the input layer to validate the trained backpropagation neural network. The results are shown in Table 7.
The validation results were the visual–tactile sensory linguistic evaluation values obtained from the BPN network A3, and the questionnaire results were the data obtained from the questionnaire and normalized between 0 and 1. The verification results showed that the average error rate for sample 20 was 17.9%, for sample 32 it was 11.46%, for sample 33 it was 12.22%, for sample 37 it was 9.44%, and for sample 51 it was 21.45%. The overall average error rate was 14.50%.

4.5. A4 Color Feature with SAC Texture Feature

The color features obtained from SAC analysis and the texture features were used as the input layer. After 10,000 iterations of learning, the MSE converged to 0.000990, as shown in Figure 18 and Figure 19. The texture quantization feature vectors of the previously selected 5 validation samples were used as the input layer to validate the trained BPN network. The results are shown in Table 8.
The validation results were the visual–tactile sensory linguistic evaluation values obtained from the BPN network A4, and the questionnaire results were the data obtained from the questionnaire and normalized between 0 and 1. The verification results showed that the average error rate for sample 20 was 14.6%, for sample 32 it was 21.01%, for sample 33 it was 12.9%, for sample 37 it was 26.28%, and for sample 51 it was 19.51%. The overall average error rate was 18.86%.
In summary, among the neural network validations where the color feature of the leather sample was paired with each of the four texture features as the input layer and the visual–tactile sensory evaluation values were the output layer, all four texture features produced low error rates. Using visual–tactile evaluation values as the output layer helped obtain better verification results than visual sensory evaluation values alone. Among them, pairing the color feature of the leather sample with the VAR texture feature as the input layer performed best, with an average error rate of 14.50%.
The error rates of the visual affective vocabulary values obtained from the four types of BPN networks were multiplied by 100 for convenience in variance analysis. As shown in Table 9, there were no significant differences in the error rates of the four color and texture processing methods.
Thus, we conducted a comparison of the mean and standard deviations, as illustrated in Table 10. The results indicated that pairing the color features of the leather samples with the VAR texture features as the input layer produced relatively superior results, with an average error of 14.50 and a standard deviation of 12.026.

5. Discussion

This study utilized a Matlab software program to quantify the color and texture features of leather footwear samples, conducting BPN network training with visual–tactile sensory language pair values. The verification results demonstrated that a better backpropagation neural network model could be constructed by using color features paired with VAR texture features as the input layer and visual–tactile sensory quantized values as the output layer. Hence, designers and manufacturers can directly use the developed program to acquire the color features and VAR texture features of newly incoming leather samples and obtain predicted sensory values via this neural network model.

5.1. Research Applications

Applications of the BPN network are varied. The average visual–tactile sensory language values for the leather samples were calculated using the least squares method (correlation analysis), and the scoring of the 10 visual–tactile sensory language pairs was adjusted to obtain the suggested leather samples. The system is designed to accept scores for the visual–tactile sensory language pairs, query the corresponding color and texture features according to the trained sensory language value testing results, and find the leather samples that most closely match, outputting the data of the two closest samples. A computer-aided leather footwear evaluation query design system was written using Matlab software, as shown in Figure 20. Designers can use this application to find the leather sample closest to what is desired and apply it to footwear design. The system interface instructions are as follows: enter scores between 1 and 7 for the 10 visual–tactile sensory language pairs on the left, or click and move the sliders to adjust the values; click query, and the two leather samples closest to the sensory scores pop up. These results provide a reference basis for designers to compare and select for use.
For example, according to the scores of 10 sets of visual–tactile sensory language pairs, the settings were: “classic–avant-garde” 2 points, “warm–breathable” 2 points, “casual–sporty” 3 points, “delicate–rough” 4 points, “steady–assertive” 5 points, “fashionable–retro” 4 points, “smooth–rough” 4 points, “solid–fragile” 2 points, “heavy–light” 3 points, and “soft–stiff” 5 points. The application result suggests that the recommended leather samples are No.45 and No.1, as shown in Figure 21.
In practice, the computer-aided leather footwear evaluation query design system can be provided for use by footwear designers or shoe manufacturers. It helps designers to choose leather more objectively and scientifically. It can also be provided to the design marketing department for market decision making, where the auxiliary design process of this study can be referred to for more rigorous judgment. At the same time, the number of training samples of the neural network can continue to expand the database with the development of designers and manufacturers, giving designers suggestions on leather styles, based on visual–tactile sensory evaluation values.

5.2. Computer-Aided Leather Footwear Design System

Symmetry is a critical factor in footwear design, profoundly impacting aesthetics and functionality. The symmetry in shoe design stems from our body’s inherent bilateral symmetry, with human feet typically mirroring each other in shape and size. This naturally imposes a symmetry principle in shoe design. A typical symmetrical footwear design starts with one shoe as a design template, specifically focusing on the lateral side and the sole (Figure 22), before applying symmetry principles to create the other shoe, thereby simplifying the design and manufacturing process.
This study employed the output of a leather sample evaluation inquiry system to determine the type of leather to be used in shoe design. The design validation used only one shoe sample for assessment. To facilitate consumer comprehension and perception of the footwear imagery, design illustrations adopted a 30° side oblique view, a common shoe product display perspective that provides the simplest and most intuitive sense of the shoe’s imagery. All shoe shape styles were uniform. The system-generated leather design effect diagrams, as in Figure 23, show Shoe A designed using Sample 45 and Shoe B using Sample 1. Thirty ordinary consumers were asked to rate the perceived imagery of the two shoes by filling out an affective vocabulary scale. These ratings were then compared with the affective vocabulary values set by the inquiry system to identify any discrepancies, thus validating the feasibility of the leather sample evaluation inquiry system. The single-sample t-test results comparing shoe design imagery and leather sample imagery can be found in Table 11.
From Table 11, it can be seen that the p-value of the casual–sporty image was less than 0.05, indicating a significant difference, while the other 9 items had p-values greater than 0.05, indicating no significant difference. This means that the casual–sporty image differed from the originally set leather image, which might have been due to the influence of the shoe design style. The remaining 9 items were close to the originally set image, indicating that the computer-aided leather footwear evaluation query smart design system is feasible. These remaining 9 items were the following: “classic–avant-garde”, “warm–breathable”, “delicate–rough”, “steady–assertive”, “fashionable–retro”, “smooth–rough”, “solid–fragile”, “heavy–light”, “soft–stiff”.

6. Conclusions

This study used a program written in Matlab to quantify the color features and texture features of leather sample images, and trained the BPN network with visual sensory vocabulary pair values and visual–tactile sensory vocabulary pair values, leading to the development of a leather sample evaluation inquiry system. The contributions are described below.
Utilizing Matlab, we digitized the characteristics of the leather samples, establishing a method for parameter conversion of leather color and texture features. In this research, the leather sample data was comprehensively documented for subsequent investigations. This facilitates further in-depth analysis and discussions concerning the leather; for instance, applying the leather samples in fields such as furniture design, luggage design, and decorative design.
Through BPN network training with different input and output layers, the error rate was compared to analyze and find a better neural network training model for designers and manufacturers to use.
Based on the experimental data of image feeling scores by testers with a design background and ordinary consumers, a feature analysis of leather footwear samples trained by sensory vocabulary pair values and neural networks was conducted. This assists designers in obtaining objective data on the sensory vocabulary pairs of leather during the process of designing footwear.
When there are new styles of leather, the trained neural network can be used to verify the sensory vocabulary pair values. As designers and manufacturers continuously expand and increase the database, designers can design with more objective conditions and information.
Designers can use the computer-aided leather footwear evaluation query smart design system to do the following: adjust sensory vocabulary pair values, find matching leather sample data, consult extensive leather data, select leather samples that meet their needs, and further perfect the design process of shoe styles.

Author Contributions

Data curation, D.-D.X.; Formal analysis, D.-D.X. and C.-S.W.; Investigation, D.-D.X. and C.-S.W.; Visualization and resources, D.-D.X.; Project administration, C.-F.W.; Supervision, C.-F.W.; Writing—original draft, D.-D.X.; Writing—review & editing, C.-S.W., C.-F.W. and D.-D.X. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Ethical review and approval were waived for this study due to “Not applicable” for studies not involving humans or animals.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The datasets generated during and/or analyzed during the current study are available from the corresponding author on reasonable request.

Acknowledgments

Thanks to the academic editors and anonymous reviewers for their review and advice on this paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Shoukat, S.; Rabby, S. A Total Journey of Footwear With Material Analysis. J. Sci. Eng. Res. 2017, 4, 187–191.
2. Solomon, B.B. Leather Cutting Waste Minimization Techniques in Ethiopian Footwear Industry: Case Study ELICO-Universal Leather Products Industry. Preprint, 11 March 2021.
3. Jatmiko, H.A.; Nugroho, D.S. Implementing Kansei Engineering and Quality Function Deployment Method in Designing Shoes: Case Study at Rejowinangun Original Leather. Log. J. Ranc. Bangun Dan Teknol. 2022, 22, 70–80.
4. Yeh, Y.E. Prediction of optimized color design for sports shoes using an artificial neural network and genetic algorithm. Appl. Sci. 2020, 10, 1560.
5. Moganam, P.K.; Sathia Seelan, D.A. Deep learning and machine learning neural network approaches for multi class leather texture defect classification and segmentation. J. Leather Sci. Eng. 2022, 4, 7.
6. Demuthova, S.; Minarova, D. The Evolutionary Principles of the Attractiveness of Symmetry and Their Possible Sustainability in the Context of Research Ambiguities. BRAIN Broad Res. Artif. Intell. Neurosci. 2023, 14, 515–534.
7. Azemati, H.; Jam, F.; Ghorbani, M.; Dehmer, M.; Ebrahimpour, R.; Ghanbaran, A.; Emmert-Streib, F. The role of symmetry in the aesthetics of residential building facades using cognitive science methods. Symmetry 2020, 12, 1438.
8. Zaidel, D.W.; Hessamian, M. Asymmetry and symmetry in the beauty of human faces. Symmetry 2010, 2, 136–149.
9. Gjoni, A. Design: Aesthetics as a Promoter of Selling Products in Kosovo. Open J. Bus. Manag. 2021, 9, 1104–1120.
10. Liu, L.; Zhao, P. Manufacturing Service Innovation and Foreign Trade Upgrade Model Based on Internet of Things and Industry 4.0. Math. Probl. Eng. 2022, 2022, 4148713.
11. Kutnjak-Mravlinčić, S.; Akalović, J.; Bischof, S. Merging footwear design and functionality. Autex Res. J. 2020, 20, 372–381.
12. Nagamachi, M. Kansei engineering: A new ergonomic consumer-oriented technology for product development. Int. J. Ind. Ergon. 1995, 15, 3–11.
13. Hsiao, S.W.; Chiu, F.Y.; Lu, S.H. Product-form design model based on genetic algorithms. Int. J. Ind. Ergon. 2010, 40, 237–246.
14. Lai, H.H.; Lin, Y.C.; Yeh, C.H.; Wei, C.H. User-oriented design for the optimal combination on product design. Int. J. Prod. Econ. 2006, 100, 253–267.
15. Wang, C.C.; Yang, C.H.; Wang, C.S.; Chang, T.R.; Yang, K.J. Feature recognition and shape design in sneakers. Comput. Ind. Eng. 2016, 102, 408–422.
16. Lin, Z.H.; Woo, J.C.; Luo, F.; Chen, Y.T. Research on sound imagery of electric shavers based on kansei engineering and multiple artificial neural networks. Appl. Sci. 2022, 12, 10329.
17. Huang, Q.; Cui, L. Design and Application of Face Recognition Algorithm Based on Improved Backpropagation Neural Network. Rev. D’intelligence Artif. 2019, 33, 25–32.
18. Shieh, M.D.; Yeh, Y.E. Developing a design support system for the exterior form of running shoes using partial least squares and neural networks. Comput. Ind. Eng. 2013, 65, 704–718.
19. Schultz, L.M.; Petersik, J.T. Visual-haptic relations in a two-dimensional size-matching task. Percept. Mot. Ski. 1994, 78, 395–402.
20. Schifferstein, H.; Desmet, P. The effects of sensory impairments on product experience and personal well-being. Ergonomics 2007, 50, 2026–2048.
21. Stadtlander, L.; Murdoch, L. Frequency of occurrence and rankings for touch-related adjectives. Behav. Res. Methods Instrum. Comput. 2000, 32, 579–587.
22. Pietra, A.; Vazquez Rull, M.; Etzi, R.; Gallace, A.; Scurati, G.W.; Ferrise, F.; Bordegoni, M. Promoting eco-driving behavior through multisensory stimulation: A preliminary study on the use of visual and haptic feedback in a virtual reality driving simulator. Virtual Real. 2021, 25, 945–959.
23. Abbasimoshaei, A.; Kern, T.A. Exploring Hard and Soft Texture Perception by Force-Haptic Discrimination. In Proceedings of the IEEE World Haptics Conference, Delft, The Netherlands, 10–13 July 2023; IEEE: New York, NY, USA.
24. Osman, A.M.H.; Abbasimoshaei, A.; Youssef, F.; Kern, T.A. Surface Detection by an Artificial Finger Using Vibrotactile Recognition. In Proceedings of the IEEE World Haptics Conference, Delft, The Netherlands, 10–13 July 2023; IEEE: New York, NY, USA.
25. Pass, G.; Zabih, R.; Miller, J. Comparing images using color coherence vectors. In Proceedings of the Fourth ACM International Conference on Multimedia, Boston, MA, USA, 18–22 November 1996; pp. 65–73.
26. Liu, H.; Wang, Y.; Chen, D.; Lv, J.; Alshalabi, R. Garment Image Retrieval based on Grab Cut Auto Segmentation and Dominate Color Method. Appl. Math. Nonlinear Sci. 2022, 8, 573–584.
27. Reddy, M.A.; Kulkarni, L.; Narayana, M. Content Based Image Retrieval using Color and Shape Features. Int. J. Adv. Res. Comput. Commun. Eng. 2017, 6, 386–392.
28. Smith, J.R.; Chang, S.F. Single color extraction and image query. In Proceedings of the International Conference on Image Processing, Washington, DC, USA, 23–26 October 1995; Volume 3, pp. 528–531.
29. Thoriq, A.I.; Zuhri, M.H.; Purwanto, P.; Pujiono, P.; Santoso, H.A. Classification of banana maturity levels based on skin image with HSI color space transformation features using the K-NN Method. J. Dev. Res. 2022, 6, 11–15.
30. Park, J.H.; Won, S. Stability analysis for neutral delay-differential systems. J. Frankl. Inst. 2000, 337, 1–9.
31. Ojala, T.; Pietikainen, M.; Maenpaa, T. Multiresolution gray-scale and rotation invariant texture classification with local binary patterns. IEEE Trans. Pattern Anal. Mach. Intell. 2002, 24, 971–987.
32. Hayati, N. Klasifikasi Jenis Bunga Mawar Menggunakan Algoritma K-Nearest Neighbour [Classification of Rose Types Using the K-Nearest Neighbour Algorithm]. J. Inform. Dan Ris. 2023, 1, 31–37.
33. Rusia, M.K.; Singh, D.K. A Color-Texture-Based Deep Neural Network Technique to Detect Face Spoofing Attacks. Cybern. Inf. Technol. 2022, 22, 127–145.
34. Vangah, J.W.; Ouattara, S.; Ouattara, G.; Clement, A. Global and Local Characterization of Rock Classification by Gabor and DCT Filters with a Color Texture Descriptor. Int. J. Adv. Comput. Sci. Appl. 2019, 10.
Figure 1. Example of a 6 × 6 grayscale image.
Figure 2. Quantized image.
Figure 3. Connected areas after marking.
Figure 4. Schematic diagram of the LBP operation.
Figure 5. The 3 × 3 area contains four center-symmetric pixel pairs.
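As a concrete illustration of Figures 4 and 5, the sketch below computes an LBP code for a 3 × 3 neighborhood and the center-symmetric measures over its four pixel pairs. Since this section does not restate the paper's exact formulas, the SCOV, VAR, and SAC definitions used here follow the common center-symmetric auto-correlation forms and should be read as assumptions.

```python
import numpy as np

def lbp_code(patch):
    """LBP code of a 3x3 patch (Figure 4): threshold the 8 neighbours
    against the centre pixel and weight the resulting bits by powers of 2."""
    c = patch[1, 1]
    # clockwise neighbour order starting at the top-left corner
    neigh = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
             patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    return sum((1 << i) for i, g in enumerate(neigh) if g >= c)

def cs_measures(patch):
    """Centre-symmetric measures over the four pixel pairs of Figure 5.
    SCOV/VAR/SAC below follow the usual centre-symmetric auto-correlation
    definitions; the paper's exact variants may differ (assumption)."""
    pairs = [(patch[0, 0], patch[2, 2]), (patch[0, 1], patch[2, 1]),
             (patch[0, 2], patch[2, 0]), (patch[1, 2], patch[1, 0])]
    mu = np.mean(patch)                       # local mean of the 3x3 area
    scov = np.mean([(a - mu) * (b - mu) for a, b in pairs])
    var = np.mean([(g - mu) ** 2 for g in patch.flatten()])
    sac = scov / var if var > 0 else 0.0      # normalised auto-correlation
    return scov, var, sac
```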
Figure 6. Backpropagation neural network structure.
Figure 7. Analysis process of this research.
Figure 8. The 54 leather samples provided by manufacturers.
Figure 9. The center 160 × 160 pixels taken from sample 1.
Figure 10. The (32 − 2) × (32 − 2) quantized-value operation.
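The preprocessing of Figures 9 and 10 can be sketched as follows: a 160 × 160 center crop, grayscale conversion, and reduction to a 32 × 32 quantized image whose (32 − 2) × (32 − 2) interior supports full 3 × 3 neighborhoods. The 160 → 32 reduction step and the PIL-based implementation are assumptions for illustration.

```python
from PIL import Image
import numpy as np

def preprocess(path):
    """Centre-crop a leather image to 160x160 (Figure 9), convert to
    greyscale, and reduce it to a 32x32 quantised image so that the 3x3
    neighbourhood operators cover the (32-2)x(32-2) interior (Figure 10).
    The 160 -> 32 reduction step is an assumption."""
    img = Image.open(path).convert("L")
    w, h = img.size
    box = ((w - 160) // 2, (h - 160) // 2, (w + 160) // 2, (h + 160) // 2)
    small = img.crop(box).resize((32, 32))
    return np.asarray(small, dtype=float)

def interior_windows(gray):
    """Yield every 3x3 window centred on the (32-2)x(32-2) interior."""
    for r in range(1, gray.shape[0] - 1):
        for c in range(1, gray.shape[1] - 1):
            yield gray[r - 1:r + 2, c - 1:c + 2]
```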
Figure 11. Structure diagram of the BPN network.
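A minimal sketch of the BPN training stage shown in Figure 11, assuming the setup described in the abstract: color/texture feature values in, ten normalized vocabulary scores out, with 49 training and 5 test samples. The feature dimension, hidden-layer size, and training hyperparameters below are placeholders, not the paper's values.

```python
from sklearn.neural_network import MLPRegressor
import numpy as np

# X: colour/texture feature vectors for the 49 training leathers,
# y: their ten normalised perceptual-vocabulary scores in [0, 1].
# Random placeholders stand in for the real measurements here.
rng = np.random.default_rng(0)
X_train = rng.random((49, 6))           # feature dimension assumed
y_train = rng.random((49, 10))          # one score per vocabulary pair

# A small multilayer perceptron trained by backpropagation.
bpn = MLPRegressor(hidden_layer_sizes=(16,), activation="logistic",
                   solver="adam", max_iter=5000, random_state=0)
bpn.fit(X_train, y_train)

X_test = rng.random((5, 6))             # the five held-out samples
predicted_scores = bpn.predict(X_test)  # one 10-value row per sample
```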
Figure 12. BPN−A1 after training.
Figure 13. Linear regression graph of BPN−A1 after training.
Figure 14. BPN−A2 after training.
Figure 15. Linear regression graph of BPN−A2 after training.
Figure 16. BPN−A3 after training.
Figure 17. Linear regression graph of BPN−A3 after training.
Figure 18. BPN−A4 after training.
Figure 19. Linear regression graph of BPN−A4 after training.
Figure 20. Computer-aided leather footwear evaluation query smart design system.
Figure 21. The computer-aided leather footwear evaluation query smart design system.
Figure 22. General symmetrical shoe design sketch.
Figure 23. Application of the leather samples in shoe design.
Table 1. Connected components table.
Mark | A | B | C | D | E
Color | 1 | 2 | 1 | 3 | 1
Quantity | 12 | 15 | 3 | 1 | 5
Table 2. Color adhesion vector table.
Color | 1 | 2 | 3
α | 17 (12 + 5) | 15 | 0
β | 3 | 0 | 1
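Tables 1 and 2 can be reproduced with a short connected-component pass in the spirit of color coherence vectors: components of a quantized color whose size reaches a threshold τ count as coherent (α), the rest as incoherent (β). The sketch below, with τ = 5 (an assumed value), yields exactly the α and β entries of Table 2 from the components of Table 1.

```python
import numpy as np
from scipy import ndimage

def coherence_vectors(quantized, tau):
    """Label the connected areas of each quantised colour (Figure 3),
    count components of size >= tau as coherent (alpha) and the rest as
    incoherent (beta). The threshold tau is an assumed parameter."""
    alpha, beta = {}, {}
    for colour in np.unique(quantized):
        mask = quantized == colour
        labels, n = ndimage.label(mask)
        sizes = ndimage.sum(mask, labels, range(1, n + 1))
        alpha[int(colour)] = int(sum(s for s in sizes if s >= tau))
        beta[int(colour)] = int(sum(s for s in sizes if s < tau))
    return alpha, beta

# With the components of Table 1 (colour 1: areas of 12, 3 and 5 pixels;
# colour 2: 15; colour 3: 1) and tau = 5, this reproduces Table 2:
# alpha = {1: 17, 2: 15, 3: 0}, beta = {1: 3, 2: 0, 3: 1}.
```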
Table 3. Design of leather perceptual visual and tactile questionnaire.
Classic | 1 2 3 4 5 6 7 | Avant-garde
Warm | 1 2 3 4 5 6 7 | Breathable
Casual | 1 2 3 4 5 6 7 | Sporty
Refined | 1 2 3 4 5 6 7 | Coarse
Stable | 1 2 3 4 5 6 7 | Flamboyant
Fashionable | 1 2 3 4 5 6 7 | Vintage
Smooth | 1 2 3 4 5 6 7 | Rough
Durable | 1 2 3 4 5 6 7 | Fragile
Heavy | 1 2 3 4 5 6 7 | Lightweight
Soft | 1 2 3 4 5 6 7 | Stiff
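The questionnaire values that appear later (Tables 5–8) lie in [0, 1], which is consistent with dividing the mean 7-point rating by 7; with a panel of 20 respondents, a rating sum of 57 gives 57/140 = 0.4071, the value reported for sample 20 on Classic–Avant-garde. Both the mean/7 normalization and the panel size are inferences, sketched below.

```python
def normalized_score(ratings):
    """Map 7-point semantic-differential ratings onto the 0-1 scale of
    Tables 5-8. The mean/7 normalisation (and the 20-rater panel in the
    usage note) is inferred from the data, not stated in this section."""
    return sum(ratings) / (len(ratings) * 7.0)

# e.g. 20 ratings summing to 57 give 57 / 140 = 0.4071, the questionnaire
# value reported for sample 20 on Classic-Avant-garde.
```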
Table 4. Center sample selection for BPN neural verification.
Number | 20 | 32 | 33 | 37 | 51
Sample picture | (images of the five leather samples)
Table 5. Verification of the error rate of the perceptual–vocabulary pair value obtained by BPN−A1.
Sample | Measure | Classic—Avant-Garde | Warm—Breathable | Casual—Sporty | Refined—Coarse | Stable—Flamboyant | Fashionable—Vintage | Smooth—Rough | Durable—Fragile | Heavy—Lightweight | Soft—Stiff
20 | Test results | 0.4072 | 0.7737 | 0.586 | 0.5896 | 0.4552 | 0.5792 | 0.6042 | 0.6698 | 0.7036 | 0.5896
20 | Questionnaire results | 0.4071 | 0.5857 | 0.4429 | 0.6571 | 0.3643 | 0.7643 | 0.6571 | 0.6214 | 0.5857 | 0.4643
20 | Error rate | 0.00% | 32.10% | 32.33% | 10.28% | 24.95% | 24.22% | 8.05% | 7.78% | 20.12% | 26.98%
32 | Test results | 0.6989 | 0.5969 | 0.5997 | 0.5259 | 0.6388 | 0.3328 | 0.556 | 0.3884 | 0.4353 | 0.6886
32 | Questionnaire results | 0.6929 | 0.5071 | 0.5071 | 0.4571 | 0.7071 | 0.4071 | 0.55 | 0.4643 | 0.4643 | 0.7643
32 | Error rate | 0.88% | 17.70% | 18.24% | 15.03% | 9.66% | 18.27% | 1.09% | 16.34% | 6.24% | 9.90%
33 | Test results | 0.5058 | 0.628 | 0.4678 | 0.7042 | 0.4328 | 0.5677 | 0.6474 | 0.715 | 0.81 | 0.7181
33 | Questionnaire results | 0.4786 | 0.6071 | 0.4643 | 0.65 | 0.45 | 0.5786 | 0.5429 | 0.7643 | 0.7071 | 0.5929
33 | Error rate | 5.69% | 3.43% | 0.77% | 8.34% | 3.82% | 1.88% | 19.26% | 6.44% | 14.55% | 21.13%
37 | Test results | 0.3269 | 0.3975 | 0.5151 | 0.7811 | 0.4729 | 0.7486 | 0.8716 | 0.5383 | 0.6955 | 0.7248
37 | Questionnaire results | 0.4214 | 0.4929 | 0.35 | 0.7071 | 0.6071 | 0.5929 | 0.65 | 0.8071 | 0.8357 | 0.7357
37 | Error rate | 22.42% | 19.36% | 47.17% | 10.45% | 22.11% | 26.27% | 34.10% | 33.31% | 16.77% | 1.49%
51 | Test results | 0.8077 | 0.6754 | 0.5984 | 0.6143 | 0.8123 | 0.3052 | 0.7358 | 0.6525 | 0.812 | 0.4911
51 | Questionnaire results | 0.5 | 0.6143 | 0.4643 | 0.55 | 0.8571 | 0.4786 | 0.7643 | 0.7071 | 0.7643 | 0.7286
51 | Error rate | 61.54% | 9.95% | 28.88% | 11.69% | 5.23% | 36.22% | 3.72% | 7.72% | 6.24% | 32.60%
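The error-rate rows of Tables 5–8 are consistent with the relative error between the BPN output and the questionnaire value (up to rounding of the displayed inputs). A one-line check:

```python
def error_rate(predicted, observed):
    """Relative error used in Tables 5-8: |BPN output - questionnaire
    value| / questionnaire value, expressed as a percentage."""
    return abs(predicted - observed) / observed * 100

# Sample 20, Warm-Breathable in Table 5:
# error_rate(0.7737, 0.5857) -> 32.10, matching the reported 32.10%.
```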
Table 6. Verification of the error rate of the perceptual–vocabulary pair value obtained by BPN−A2.
Sample | Measure | Classic—Avant-Garde | Warm—Breathable | Casual—Sporty | Refined—Coarse | Stable—Flamboyant | Fashionable—Vintage | Smooth—Rough | Durable—Fragile | Heavy—Lightweight | Soft—Stiff
20 | Test results | 0.4167 | 0.703 | 0.4985 | 0.5316 | 0.3968 | 0.5167 | 0.5649 | 0.598 | 0.6307 | 0.6529
20 | Questionnaire results | 0.4071 | 0.5857 | 0.4429 | 0.6571 | 0.3643 | 0.7643 | 0.6571 | 0.6214 | 0.5857 | 0.4643
20 | Error rate | 2.34% | 20.02% | 12.57% | 19.11% | 8.92% | 32.39% | 14.04% | 3.78% | 7.68% | 40.63%
32 | Test results | 0.7952 | 0.5422 | 0.5467 | 0.6005 | 0.6827 | 0.4079 | 0.6777 | 0.4562 | 0.4876 | 0.7811
32 | Questionnaire results | 0.6929 | 0.5071 | 0.5071 | 0.4571 | 0.7071 | 0.4071 | 0.55 | 0.4643 | 0.4643 | 0.7643
32 | Error rate | 14.77% | 6.91% | 7.80% | 31.35% | 3.46% | 0.18% | 23.22% | 1.74% | 5.02% | 2.20%
33 | Test results | 0.3125 | 0.564 | 0.4429 | 0.5225 | 0.3533 | 0.7105 | 0.4928 | 0.7807 | 0.8156 | 0.5757
33 | Questionnaire results | 0.4786 | 0.6071 | 0.4643 | 0.65 | 0.45 | 0.5786 | 0.5429 | 0.7643 | 0.7071 | 0.5929
33 | Error rate | 34.69% | 7.11% | 4.61% | 19.61% | 21.50% | 22.80% | 9.22% | 2.15% | 15.33% | 2.89%
37 | Test results | 0.2787 | 0.5321 | 0.2337 | 0.4684 | 0.3982 | 0.49 | 0.579 | 0.7955 | 0.8392 | 0.7555
37 | Questionnaire results | 0.4214 | 0.4929 | 0.35 | 0.7071 | 0.6071 | 0.5929 | 0.65 | 0.8071 | 0.8357 | 0.7357
37 | Error rate | 33.86% | 7.96% | 33.22% | 33.76% | 34.41% | 17.35% | 10.92% | 1.45% | 0.41% | 2.69%
51 | Test results | 0.7384 | 0.458 | 0.6084 | 0.5047 | 0.9218 | 0.2259 | 0.9186 | 0.6232 | 0.2984 | 0.6667
51 | Questionnaire results | 0.5 | 0.6143 | 0.4643 | 0.55 | 0.8571 | 0.4786 | 0.7643 | 0.7071 | 0.7643 | 0.7286
51 | Error rate | 47.68% | 25.44% | 31.04% | 8.23% | 7.54% | 52.80% | 20.20% | 11.87% | 60.96% | 8.49%
Table 7. Verification of the error rate of the perceptual–vocabulary pair value obtained by BPN−A3.
Sample | Measure | Classic—Avant-Garde | Warm—Breathable | Casual—Sporty | Refined—Coarse | Stable—Flamboyant | Fashionable—Vintage | Smooth—Rough | Durable—Fragile | Heavy—Lightweight | Soft—Stiff
20 | Test results | 0.4548 | 0.758 | 0.4664 | 0.7478 | 0.331 | 0.6368 | 0.7997 | 0.6098 | 0.6362 | 0.7465
20 | Questionnaire results | 0.4071 | 0.5857 | 0.4429 | 0.6571 | 0.3643 | 0.7643 | 0.6571 | 0.6214 | 0.5857 | 0.4643
20 | Error rate | 11.70% | 29.42% | 5.32% | 13.80% | 9.13% | 16.67% | 21.69% | 1.87% | 8.63% | 60.78%
32 | Test results | 0.7578 | 0.5831 | 0.4859 | 0.4349 | 0.7558 | 0.4126 | 0.548 | 0.6061 | 0.5853 | 0.6416
32 | Questionnaire results | 0.6929 | 0.5071 | 0.5071 | 0.4571 | 0.7071 | 0.4071 | 0.55 | 0.4643 | 0.4643 | 0.7643
32 | Error rate | 9.37% | 14.97% | 4.19% | 4.87% | 6.88% | 1.34% | 0.37% | 30.54% | 26.06% | 16.05%
33 | Test results | 0.3707 | 0.5133 | 0.4074 | 0.5921 | 0.495 | 0.6868 | 0.4892 | 0.7281 | 0.7696 | 0.5278
33 | Questionnaire results | 0.4786 | 0.6071 | 0.4643 | 0.65 | 0.45 | 0.5786 | 0.5429 | 0.7643 | 0.7071 | 0.5929
33 | Error rate | 22.54% | 15.45% | 12.24% | 8.91% | 10.00% | 18.70% | 9.88% | 4.73% | 8.84% | 10.97%
37 | Test results | 0.5192 | 0.5065 | 0.3919 | 0.6155 | 0.5436 | 0.4718 | 0.6717 | 0.7839 | 0.8241 | 0.6983
37 | Questionnaire results | 0.4214 | 0.4929 | 0.35 | 0.7071 | 0.6071 | 0.5929 | 0.65 | 0.8071 | 0.8357 | 0.7357
37 | Error rate | 23.20% | 2.76% | 11.98% | 12.96% | 10.46% | 20.42% | 3.33% | 2.88% | 1.39% | 5.09%
51 | Test results | 0.4463 | 0.4821 | 0.5598 | 0.3868 | 0.5447 | 0.254 | 0.7267 | 0.5977 | 0.7414 | 0.9124
51 | Questionnaire results | 0.5 | 0.6143 | 0.4643 | 0.55 | 0.8571 | 0.4786 | 0.7643 | 0.7071 | 0.7643 | 0.7286
51 | Error rate | 10.73% | 21.51% | 20.57% | 29.68% | 36.46% | 46.94% | 4.92% | 15.48% | 2.99% | 25.23%
Table 8. Verification of the error rate of the perceptual–vocabulary pair value obtained by BPN−A4.
Sample | Measure | Classic—Avant-Garde | Warm—Breathable | Casual—Sporty | Refined—Coarse | Stable—Flamboyant | Fashionable—Vintage | Smooth—Rough | Durable—Fragile | Heavy—Lightweight | Soft—Stiff
20 | Test results | 0.3548 | 0.5858 | 0.2749 | 0.4756 | 0.3573 | 0.738 | 0.6797 | 0.7441 | 0.7661 | 0.4261
20 | Questionnaire results | 0.4071 | 0.5857 | 0.4429 | 0.6571 | 0.3643 | 0.7643 | 0.6571 | 0.6214 | 0.5857 | 0.4643
20 | Error rate | 12.84% | 0.02% | 37.92% | 27.63% | 1.92% | 3.44% | 3.43% | 19.74% | 30.81% | 8.22%
32 | Test results | 0.8739 | 0.5644 | 0.5796 | 0.3969 | 0.8382 | 0.2064 | 0.6321 | 0.5543 | 0.5399 | 0.5595
32 | Questionnaire results | 0.6929 | 0.5071 | 0.5071 | 0.4571 | 0.7071 | 0.4071 | 0.55 | 0.4643 | 0.4643 | 0.7643
32 | Error rate | 26.13% | 11.28% | 14.30% | 13.17% | 18.53% | 49.31% | 14.93% | 19.38% | 16.28% | 26.79%
33 | Test results | 0.3512 | 0.5597 | 0.4184 | 0.6819 | 0.3828 | 0.7125 | 0.5861 | 0.6646 | 0.7085 | 0.7143
33 | Questionnaire results | 0.4786 | 0.6071 | 0.4643 | 0.65 | 0.45 | 0.5786 | 0.5429 | 0.7643 | 0.7071 | 0.5929
33 | Error rate | 26.61% | 7.82% | 9.87% | 4.91% | 14.92% | 23.16% | 7.97% | 13.05% | 0.19% | 20.49%
37 | Test results | 0.1862 | 0.417 | 0.3851 | 0.5937 | 0.2557 | 0.6739 | 0.4437 | 0.5271 | 0.7835 | 0.8922
37 | Questionnaire results | 0.4214 | 0.4929 | 0.35 | 0.7071 | 0.6071 | 0.5929 | 0.65 | 0.8071 | 0.8357 | 0.7357
37 | Error rate | 55.81% | 15.39% | 10.02% | 16.04% | 57.88% | 13.67% | 31.75% | 34.69% | 6.25% | 21.27%
51 | Test results | 0.797 | 0.6239 | 0.522 | 0.4678 | 0.8317 | 0.3617 | 0.5953 | 0.7651 | 0.8115 | 0.4159
51 | Questionnaire results | 0.5 | 0.6143 | 0.4643 | 0.55 | 0.8571 | 0.4786 | 0.7643 | 0.7071 | 0.7643 | 0.7286
51 | Error rate | 59.40% | 1.57% | 12.44% | 14.94% | 2.97% | 24.42% | 22.11% | 8.19% | 6.17% | 42.92%
Table 9. Results of ANOVA. Each cell gives the mean ± SD of the error rates for one perceptual vocabulary pair (n = 5 test samples per pair).
Method | Classic—Avant-Garde | Warm—Breathable | Casual—Sporty | Refined—Coarse | Stable—Flamboyant | Fashionable—Vintage | Smooth—Rough | Durable—Fragile | Heavy—Lightweight | Soft—Stiff | F | p
LBP | 18.11 ± 25.90 | 16.51 ± 10.80 | 25.48 ± 17.26 | 11.16 ± 2.47 | 13.15 ± 9.77 | 21.37 ± 12.67 | 13.24 ± 13.57 | 14.32 ± 11.33 | 12.78 ± 6.29 | 18.42 ± 12.66 | 0.536 | 0.839
SCOV | 26.67 ± 17.97 | 13.49 ± 8.66 | 17.85 ± 13.36 | 22.41 ± 10.35 | 15.17 ± 12.69 | 25.10 ± 19.41 | 15.52 ± 6.00 | 4.20 ± 4.38 | 17.88 ± 24.68 | 11.38 ± 16.55 | 1.037 | 0.428
VAR | 15.51 ± 6.78 | 16.82 ± 9.79 | 10.86 ± 6.57 | 14.04 ± 9.44 | 14.59 ± 12.31 | 20.81 ± 16.46 | 8.04 ± 8.37 | 11.10 ± 12.15 | 9.58 ± 9.79 | 23.62 ± 22.05 | 0.819 | 0.602
SAC | 36.16 ± 20.38 | 7.22 ± 6.47 | 16.91 ± 11.89 | 15.34 ± 8.14 | 19.24 ± 22.79 | 22.80 ± 17.06 | 16.04 ± 11.28 | 19.01 ± 9.99 | 11.94 ± 12.03 | 23.94 ± 12.59 | 1.519 | 0.175
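The LBP row of Table 9 can be reproduced from the error-rate columns of Table 5; their per-column means match the reported 18.11, 16.51, and so on, which suggests that BPN−A1 through BPN−A4 correspond to LBP, SCOV, VAR, and SAC, respectively (an inference from the numbers, not a statement in this section). A sketch of the one-way ANOVA for the LBP row:

```python
from scipy import stats

# Ten groups = the error-rate columns of Table 5 (BPN-A1),
# one value per test sample (20, 32, 33, 37, 51).
groups = [
    [0.00, 0.88, 5.69, 22.42, 61.54],    # Classic-Avant-garde
    [32.10, 17.70, 3.43, 19.36, 9.95],   # Warm-Breathable
    [32.33, 18.24, 0.77, 47.17, 28.88],  # Casual-Sporty
    [10.28, 15.03, 8.34, 10.45, 11.69],  # Refined-Coarse
    [24.95, 9.66, 3.82, 22.11, 5.23],    # Stable-Flamboyant
    [24.22, 18.27, 1.88, 26.27, 36.22],  # Fashionable-Vintage
    [8.05, 1.09, 19.26, 34.10, 3.72],    # Smooth-Rough
    [7.78, 16.34, 6.44, 33.31, 7.72],    # Durable-Fragile
    [20.12, 6.24, 14.55, 16.77, 6.24],   # Heavy-Lightweight
    [26.98, 9.90, 21.13, 1.49, 32.60],   # Soft-Stiff
]
# Table 9 reports F = 0.536, p = 0.839 for this row.
f_stat, p_value = stats.f_oneway(*groups)
```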
Table 10. Comparison of mean and standard deviation of the four color and texture processing methods.
Method | n | Min | Max | Mean | SD | Median
LBP | 50 | 0.000 | 61.540 | 16.454 | 13.044 | 14.790
SCOV | 50 | 0.180 | 60.960 | 16.966 | 14.747 | 12.220
VAR | 50 | 0.370 | 60.780 | 14.498 | 12.026 | 11.335
SAC | 50 | 0.020 | 59.400 | 18.859 | 14.821 | 14.935
Table 11. Single-sample t-test of shoe design image and leather sample image.
Emotional Vocabulary | df | Min | Max | Mean | SD | t | p
Classic—Avant-garde | 2 | 1.750 | 2.150 | 1.967 | 0.202 | 16.857 | 0.004 **
Warm—Breathable | 2 | 2.000 | 2.350 | 2.167 | 0.176 | 21.372 | 0.002 **
Casual—Sporty | 2 | 1.350 | 3.000 | 1.933 | 0.925 | 3.620 | 0.069
Refined—Coarse | 2 | 2.700 | 4.000 | 3.483 | 0.690 | 8.746 | 0.013 *
Stable—Flamboyant | 2 | 4.350 | 5.000 | 4.600 | 0.350 | 22.764 | 0.002 **
Fashionable—Vintage | 2 | 3.750 | 4.150 | 3.967 | 0.202 | 34.000 | 0.001 **
Smooth—Rough | 2 | 4.000 | 5.750 | 4.767 | 0.895 | 9.226 | 0.012 *
Durable—Fragile | 2 | 2.000 | 2.450 | 2.200 | 0.229 | 16.630 | 0.004 **
Heavy—Lightweight | 2 | 3.000 | 3.450 | 3.167 | 0.247 | 22.238 | 0.002 **
Soft—Stiff | 2 | 5.000 | 5.850 | 5.367 | 0.437 | 21.278 | 0.002 **
* p < 0.05; ** p < 0.01.