Article

Body Size Measurement Using a Smartphone

by Kamrul Hasan Foysal 1, Hyo-Jung (Julie) Chang 2,*, Francine Bruess 2 and Jo-Woon Chong 1,*
1 Department of Electrical and Computer Engineering, Texas Tech University, Lubbock, TX 79409, USA
2 Department of Hospitality and Retail Management, Texas Tech University, Lubbock, TX 79409, USA
* Authors to whom correspondence should be addressed.
Submission received: 26 March 2021 / Revised: 28 April 2021 / Accepted: 10 May 2021 / Published: 2 June 2021
(This article belongs to the Special Issue Smart Bioelectronics and Wearable Systems)

Abstract

Measuring body sizes accurately and rapidly for optimal garment fit detection has been a challenge for fashion retailers. Especially for apparel e-commerce, there is an increasing need for digital and convenient ways to obtain body measurements to provide their customers with correct-fitting products. However, the currently available methods depend on cumbersome and complex 3D reconstruction-based approaches. In this paper, we propose a novel smartphone-based body size measurement method that does not require any additional objects of a known size as a reference when acquiring a subject’s body image using a smartphone. The novelty of our proposed method is that it acquires measurement positions using body proportions and machine learning techniques, and it performs 3D reconstruction of the body using measurements obtained from two silhouette images. We applied our proposed method to measure body sizes (i.e., waist, lower hip, and thigh circumferences) of males and females for selecting well-fitted pants. The experimental results show that our proposed method gives an accuracy of 95.59% on average when estimating the size of the waist, lower hip, and thigh circumferences. Our proposed method is expected to solve issues with digital body measurements and provide a convenient garment fit detection solution for online shopping.

1. Introduction

As the world becomes more technology-driven and consumers prioritize convenience in their shopping and lifestyles, smartphones serve important roles in diverse areas of our daily lives. The fashion industry, especially fashion e-commerce, often fails to prevent product returns caused by improper fit. While returns of in-store clothing purchases are about 5–10%, online return rates have soared up to 40% [1]. In fact, as fashion e-commerce continues its growth, it is forecasted to face a trillion-dollar problem in the form of returns [1]. This challenge highlights the potential of smartphones to provide a convenient and rapid fit detection method that addresses the industry's sales and return issues. With advanced smartphone-based methods, body measurements can be taken more accurately and different clothing options can be compared against them. Fashion businesses could then benefit from increased sales and fewer returns, which currently represent a huge waste [1]. Even though some technology companies have been trying to solve digital fit issues, these innovations are still in their early stages, and pain points remain in the fashion market.
Various body measurement tools have been developed in recent years, with many utilizing different techniques to obtain body measurements. Some of the most common methods include full-sized 3D body scanners, mobile or handheld scanners, and wearable scanners [2,3]. However, weaknesses in all of these devices exist due to the size of the measuring devices and their inconvenient ways of taking body measurements. Therefore, one type of scanner, the mobile smartphone-based scanner, shows promise for mass market acceptance [4,5].
With current mobile 3D scanning technology, a majority of users, especially fashion consumers, would take advantage of a service that obtains body measurements for proper garment fit selection [4]. Additionally, a scanner with the ability to translate sizing between brands would be one of the most sought-after technology features [4]. This type of scanner suggests a high level of usefulness for diverse categories of retailers, from small boutique-style stores to large national department stores.
Unfortunately, there are issues surrounding the consistency of these relatively new mobile scanning tools [6]. These issues mostly stem from human error in the positioning of the subject and from the background in front of which the scan is performed [6]. Most scanners could achieve higher accuracy when the subject wears less clothing, because mobile scanners cannot perfectly distinguish between clothing and the human body. This leads to hesitance from users, who fear being seen in minimal clothing or having their photos become public [7]. Because of this, users are less likely to wear the clothing appropriate for the most accurate scans, increasing the errors in their measurements.
Hence, fashion retailers are reluctant to implement these types of technology for fear that their customers may receive inaccurate measurements and recommendations from the currently available tools, which potentially have issues with accuracy and speed. Even with the advancement of technology, there is still no reliable measurement software. Thus, there is a need for technology that takes an individual's body measurements in an accurate, rapid, and comprehensive way. If the current technological limitations are overcome, industry experts anticipate that mobile smartphone scanners will become a useful and cost-effective way to measure a consumer's body size, to the benefit of both retailers and consumers [2].
In addition, the convenience and affordable price point of mobile scanners in comparison with their larger, more expensive counterparts (e.g., 3D full-body scanners) offer a high feasibility of adopting such technology in the fashion industry [2]. This also suggests that multi-million dollar losses could be avoided by reducing the probability of online returns [8]. Accurate body measurements can help consumers select the correct clothing size and fit, in addition to the precise body shape detection assessed previously [9]. Based on the current issues with taking body measurements, we propose a novel method to obtain body measurements using a smartphone in an accurate, fast, and portable way. We specifically focused on body measurements for pants, which clothing customers have fit issues with the most [10]. Body measurements of three areas, namely the waist, lower hip, and thigh, are used in this study.

2. Materials

Our body measurement data were collected following the IRB protocol (IRB2020-482) at Texas Tech University. Under this protocol, 12 subjects (10 male, 2 female) were recruited. The experimenter captured each subject's frontal and lateral images using our developed smartphone application. After that, we asked the subjects to measure their preferred waistline, lower hip, and thigh circumferences using a measurement tape to obtain a gold standard reference for comparison with the estimation results of our proposed method. All the subjects' data were deidentified following the IRB protocol to protect their privacy. From our assessment, neither the body shape nor any other parameter depended on the gender of the subject, as our method uses only the silhouette of the subject to obtain the body measurements. Therefore, no discrimination between the subjects was made based on gender, and no identifiable parameter was obtained.
In our proposed method, body sizes are obtained from 2D images captured by a smartphone camera. Intra-observer (when two self-measurements were compared) and inter-observer (when a self-measurement was compared to the technician's measurement) measurement errors ranged from 2 mm to 20 mm [11,12] when performing anthropometric measurements using a measurement tape. The error in the waist measurement was obtained following Equation (1) below:
$\mathrm{error}_{\mathrm{waist}} = \dfrac{\mathrm{error}_{\mathrm{height}}}{\mathrm{height\ measurement}} \times \mathrm{waist\ measurement}$, (1)
In the proposed method, a 2 cm error in the height measurement for a person of standard height (5 ft 8 in or 172.72 cm) introduces an error of 1 cm or 0.3937 in (= (2/172.72) × 86.36 cm) for a 34 in (86.36 cm) waist, according to Equation (1). However, this error was much smaller compared with the technician measurement, which has an intra-observer correlation of 0.96 and an inter-observer correlation of 0.93 [13], and could therefore lead to a more reliable result compared with the calculation, estimation, and technical errors.
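As a rough illustration, the minimal sketch below (Python; the function name is ours, the values are the ones quoted in this paragraph) reproduces the error propagation of Equation (1):

```python
# A minimal sketch of the error propagation in Equation (1): an error in the
# user-entered height scales the waist estimate in proportion to waist/height.

def waist_error(height_error_cm: float, height_cm: float, waist_cm: float) -> float:
    """Waist-circumference error caused by a height-entry error (Equation (1))."""
    return (height_error_cm / height_cm) * waist_cm

# Values from the text: a 2 cm height error for a 172.72 cm (5 ft 8 in) person
# with an 86.36 cm (34 in) waist introduces roughly a 1 cm waist error.
print(round(waist_error(2.0, 172.72, 86.36), 2))  # 1.0 cm
```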
The original measurements were taken with a measurement tape while the subjects were wearing clothes. Specifically, the measurement procedure was performed on the clothing over the skin. When the body size was measured with the tape, the error was minimized by including three different readings for the same measurement and taking the mean of them. For the measurement process, the experimenter guided the subjects to acquire measurements from specific locations of the body by following the body’s landmarks. For example, the body areas including the iliac crest, narrowest abdominal part, and preferred waistline are the locations for the waist circumference measurement. Each subject was asked to stand in a standardized posture, and we used two silhouette images (front and side) to obtain the measurements with high accuracy [14].

3. Methods

3.1. Reference-Free Data Acquisition

Smartphone-based body measurement methods have been developed previously [15]. However, these methods did not provide high accuracy [14,16], resulting in errors from 9 mm to 42 mm and precisions of 0.7801 ± 0.0689 with recalls of 0.8952 ± 0.0995, respectively. Moreover, due to the diverse smartphone brands with their own camera specifications and processing abilities in the current market, and due to the complexity of 3D reconstruction, the conventional methods were not accurate [17].
Smartphone-based body measurement research has also been conducted that used a subject's personal credit card as a reference. Specifically, Spector et al. [18] showed an approach for measuring the dimensions of a target object using a user device as a reference object. However, because such reference objects are small, not invariant in size, and not always available, this approach is inconvenient. Moreover, the user is required to have that specific reference object whenever they want to measure their size. Our proposed method instead uses the subject's height as a known parameter and, with the help of image processing and 3D reconstruction methods, delivers an accurate result that is invariant to camera specifications and processing power, resulting in a better estimation of body size measurements. The flowchart of our developed smartphone application is given in Figure 1.

3.2. Measurement Process

The body measurement process requires a reference for converting numbers of pixels into physical units. Due to the unavailability of a universal reference object, we use the individual's height as the reference. Therefore, the user of the application is prompted to enter his or her height before capturing the image. The pixel sizes are the same in both directions, which means that the pixels of the acquired images are squares [19,20]. Hence, the unit ratio calculated on one axis can also be used for the other axis.
The image capture process is shown in Figure 2. A second person is needed to capture the image accurately, following the instructions stated in the application. The smartphone camera uses geometric calibration internally to discard lens distortion [21]. Smartphones such as the Samsung Galaxy Note 10+, Pixel 3a, and Xiaomi Redmi Note 5 Pro have an internal lens correction algorithm [22] in which distortion curves are represented as odd-degree polynomials and corrected toward a linear trend [23]. In this paper, we adopted the Pixel 3a phone, which uses this internal lens correction algorithm [24]. When the user captures the image, it is mandatory to keep the phone level, both horizontally and vertically, relative to the ground and to fit the subject's whole body inside the camera view. In this way, the height of the captured image is used as the reference, and the pixel-to-unit ratio is calculated by following Equation (2):
$\mathrm{Ratio} = \dfrac{H}{l}$, (2)
where H is the height (in inches) and l is the length of the captured image in pixels.
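A minimal sketch of this conversion is given below (Python; the example numbers are hypothetical and not from the study):

```python
# Sketch of the pixel-to-unit conversion in Equation (2): the user-entered height
# divided by the image height in pixels gives inches per pixel, and square pixels
# let the same ratio be applied along both axes.

def pixels_to_inches(num_pixels: int, height_in: float, image_height_px: int) -> float:
    ratio = height_in / image_height_px   # Equation (2): inches per pixel
    return num_pixels * ratio

# Hypothetical example: a 68 in tall subject spanning a 3024-pixel-tall image;
# a body segment 500 pixels wide corresponds to about 11.2 in.
print(round(pixels_to_inches(500, 68.0, 3024), 1))
```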

3.3. Preprocessing the Captured Image

3.3.1. Image Grayscale

The captured image needs to be preprocessed in order to be used for measurement calculation. The image processing is done within the smartphone application without complexity. Therefore, the processing is fast, easy, and in real time.
The image processing technique incorporated in the smartphone application binarizes the image. Hence, the captured image is converted to a black-and-white image using a threshold. Effectively, the smartphone captures the image as an RGB image, where R, G, and B represent the red, green, and blue channels, respectively, as shown in Figure 3. At first, the RGB image is converted into a grayscale image, where each pixel's color channels are combined into a single value that determines its grayness. Specifically, we used luminance-based grayscaling. Luminance matches human brightness perception and uses a weighted combination of the RGB channels. The luminance method is used by standard image processing software; it is implemented by MATLAB's "rgb2gray" function and used in computer vision applications [25]. Equation (3) gives the grayscale value of a specific pixel of the RGB image:
Gr = 0.299 × R + 0.587 × G + 0.114 × B,
where Gr is the grayscale value and R, G, and B are the red, green, and blue channel values of the pixel, respectively.
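A minimal sketch of this conversion (Python with NumPy, applied to a whole image at once) is shown below:

```python
import numpy as np

# Sketch of the luminance-weighted grayscale conversion in Equation (3),
# applied to an entire H x W x 3 RGB array at once.

def rgb_to_gray(rgb: np.ndarray) -> np.ndarray:
    """Convert a uint8 RGB image (H x W x 3) to grayscale using the
    0.299 / 0.587 / 0.114 channel weights from Equation (3)."""
    weights = np.array([0.299, 0.587, 0.114])
    gray = rgb.astype(np.float64) @ weights
    return gray.astype(np.uint8)

# A random image stands in here for the captured photo.
img = np.random.randint(0, 256, size=(4, 4, 3), dtype=np.uint8)
print(rgb_to_gray(img))
```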

3.3.2. Otsu’s Thresholding Method

For binarizing the image, automatic thresholding is used. For this purpose, Otsu's thresholding method is utilized due to its simplicity and its prior application in smartphones [26]. Otsu's method relies on the image histogram to calculate the optimal threshold value [27]: it statistically separates the pixels into two classes to binarize the image, and the threshold value is obtained using moments of the first two orders (i.e., the mean and standard deviation). The normalized histogram h is expressed as:
$h_i = \dfrac{n_i}{N}$, (4)
where $n_i$ is the number of pixels at gray level i, N is the total number of pixels in the image, and L is the number of gray levels in the image. The mean (µ) and standard deviation (σ) are given in Equations (5) and (6), respectively:
$\mu_T = \sum_{i=1}^{L} i \, h_i$, (5)
$\sigma_T^2 = \sum_{i=1}^{L} (i - \mu_T)^2 \, h_i$, (6)
The value of T that maximizes the inter-class variance $\sigma_b^2$ is considered to be the optimal threshold value. Thus, for the binarization of a grayscale image using Otsu's method, the threshold value T is determined as follows:
$T = \underset{1 \le T \le L}{\arg\max}\ \sigma_b^2(T) = \underset{1 \le T \le L}{\arg\max}\ \left\{ \omega_0(T)\,[\mu_0(T) - \mu]^2 + [1 - \omega_0(T)]\,[\mu_1(T) - \mu]^2 \right\}$, (7)
where $\omega_0(T)$ is the proportion of pixels below the threshold, $\mu_0(T)$ and $\mu_1(T)$ are the mean gray levels of the two classes, µ is the mean gray level of the whole image, and $\sigma_b^2$ is the inter-class variance.
The optimal threshold value is then compared with the grayscale value at each pixel to obtain the binary image. If the grayscale value of a pixel is greater than the threshold value, the pixel is set to white; otherwise, it is set to black. Processing the whole image in this way converts the grayscale image to a black-and-white image, with each pixel either black or white. Figure 4 shows the binarization process.
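A minimal sketch of Otsu's method as described above is given below (Python with NumPy; a library implementation such as OpenCV's THRESH_OTSU flag would normally be used instead):

```python
import numpy as np

# Sketch of Otsu's thresholding (Section 3.3.2): choose the threshold T that
# maximizes the inter-class variance of the normalized histogram.

def otsu_threshold(gray: np.ndarray) -> int:
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    h = hist / hist.sum()                       # normalized histogram, Equation (4)
    levels = np.arange(256)
    mu = (levels * h).sum()                     # overall mean, Equation (5)
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0 = h[:t].sum()                        # weight of the darker class
        w1 = 1.0 - w0
        if w0 == 0.0 or w1 == 0.0:
            continue
        mu0 = (levels[:t] * h[:t]).sum() / w0   # class means
        mu1 = (levels[t:] * h[t:]).sum() / w1
        var_b = w0 * (mu0 - mu) ** 2 + w1 * (mu1 - mu) ** 2   # Equation (7)
        if var_b > best_var:
            best_var, best_t = var_b, t
    return best_t

def binarize(gray: np.ndarray) -> np.ndarray:
    """Set pixels above the Otsu threshold to white (255) and the rest to black (0)."""
    return np.where(gray > otsu_threshold(gray), 255, 0).astype(np.uint8)
```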

3.3.3. Background Removal

In our proposed algorithm, the background of the acquired image is removed, and only the image of the subject is extracted by taking another image without the subject. The images are converted to grayscale, and the subject is extracted by subtracting the background image from the image with the subject. The proposed algorithm can measure subjects' body sizes even when the subjects wear clothes lighter than the background. Specifically, the subjects were not asked to wear any specific-colored clothing, and the subject area was extracted using our proposed background removal method. Equation (8) is used in the background removal algorithm:
$I_{\mathrm{silhouette}} = I_{\mathrm{subject}} - I_{\mathrm{background}}$, (8)
where $I_{\mathrm{subject}}$ is the grayscale image with the subject, $I_{\mathrm{background}}$ is the grayscale image of the background, and $I_{\mathrm{silhouette}}$ is the silhouette image of the subject. The extracted image is then processed using our proposed image processing methods to extract the region covered by the subject. Hence, the process is invariant to the color of the clothing that the subject is wearing. Figure 5 shows the proposed background removal process.
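A minimal sketch of the subtraction in Equation (8) is shown below (Python with NumPy); the use of an absolute difference and the threshold value of 30 are assumptions for illustration, not parameters from the paper:

```python
import numpy as np

# Sketch of the background removal in Equation (8): subtract the background-only
# grayscale frame from the frame containing the subject and keep the pixels that
# changed, regardless of whether the clothing is lighter or darker than the background.

def extract_silhouette(gray_subject: np.ndarray, gray_background: np.ndarray,
                       diff_threshold: int = 30) -> np.ndarray:
    diff = np.abs(gray_subject.astype(np.int16) - gray_background.astype(np.int16))
    # Pixels that differ noticeably between the two registered frames are taken
    # to belong to the subject; the rest are treated as background.
    return np.where(diff > diff_threshold, 255, 0).astype(np.uint8)
```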
Before this subtraction, the two images must be registered onto each other. Because image capturing with the smartphone will mostly be done by another person rather than with a stable tripod, the captured images will almost certainly differ by a geometric transformation (e.g., translation, rotation, or an affine transformation). For the image registration, we used speeded up robust features (SURF) [28]. SURF detects feature points (blobs, edges, or corners) in the two images and matches them onto each other. The image registration process includes feature detection, matching, transformation estimation, and image warping and blending [29]. In our proposed method, we used the SURF feature point detection algorithm. The detected features were matched to determine the overlapping regions between the two images. Falsely matched pairs were rejected using the RANSAC algorithm [30,31]. Finally, the background image was warped into a mosaic image and blended with the subject image using the multi-band blending algorithm [32]. In our proposed method, we kept the subject's frontal image fixed, and the subject's lateral image, as well as the background image, was transformed to be registered onto the fixed image. The image registration process is shown in Figure 6.
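A rough sketch of this registration step is given below (Python with OpenCV). The paper uses SURF features with RANSAC and multi-band blending; since SURF is only available in the non-free opencv-contrib build, ORB is used here as a stand-in, and the result is a plain homography warp without blending:

```python
import cv2
import numpy as np

# Sketch of feature-based registration: detect keypoints in both grayscale
# images, match them, estimate a homography with RANSAC (rejecting false
# matches), and warp the moving image onto the fixed one.

def register_background(fixed_gray: np.ndarray, moving_gray: np.ndarray) -> np.ndarray:
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(moving_gray, None)
    kp2, des2 = orb.detectAndCompute(fixed_gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    # RANSAC discards the falsely matched pairs, as described in the text.
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    h, w = fixed_gray.shape
    return cv2.warpPerspective(moving_gray, H, (w, h))
```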

3.4. Obtaining Measurements

To obtain the body measurements of an individual, a unique method was applied. The human body follows known proportions. There is no single way of determining the waistline's location, and it depends on personal preference; however, it is considered to be around the level of the umbilicus [33,34]. In our proposed method, we followed the ISO guideline [35] to determine the waistline's location, as there is no standard landmark for the waistline and it depends on the subject's anatomy and perception [36]. According to the human body ratio, the waist of an individual is approximately at three eighths of their height, and visually aesthetic clothing should have an upper body to lower body ratio of 0.372:0.618 [37]. Similarly, the lower hip region is at half of the height, and the thigh is at five eighths of the height. Different studies have incorporated a similar approach for human detection in images and videos [38], as shown in Figure 7.
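As a minimal illustration (Python; assuming the subject's head and feet touch the top and bottom of the frame as described in Section 3.2), the candidate measurement rows follow directly from these proportions:

```python
# Sketch of the body-proportion landmarks: with the head at the top of the frame
# and the feet at the bottom, the waist, lower hip, and thigh rows fall at 3/8,
# 1/2, and 5/8 of the body height measured from the top of the image.

def landmark_rows(image_height_px: int) -> dict:
    return {
        "waist": int(image_height_px * 3 / 8),
        "lower_hip": int(image_height_px * 1 / 2),
        "thigh": int(image_height_px * 5 / 8),
    }

print(landmark_rows(3024))  # e.g., the waist row is near row 1134 of a 3024-pixel image
```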
There are defined sites for locating the waistline [39] and guidelines for locating the exact waistline position (e.g., ISO [35]) based on body landmarks. However, users with different body shapes or with obesity may have trouble measuring their waistlines. Hence, in this proposed body size measurement method, we introduced a unique way of locating body landmarks based on body proportion, which places the waistline at three eighths of the subject's height. Nonetheless, the waistline location also relies on personal preference; different people of the same height may have different waistline preferences [40]. Studies show that there is a strong correlation between the preferred waistline and the ISO-defined waistline [36].
In our proposed method, we incorporated three parameters to predict users’ preferred waistline heights: (1) height (x-axis values in Figure 8b,c), (2) the waistline based on body proportion (three eighths of the height), and (3) the height of the narrowest abdominal part. The individual preference for the height of the waistline was obtained from nine subjects.
The subjects were asked to locate their preferred waistline and measure its height (red marks in Figure 8b) from the ground using a measurement tape. (1) The subject's height was obtained as user input. Using the developed SmartFit Measurement application, (2) the waistline location from the body proportions and (3) the narrowest abdomen location were obtained from calculation and image processing, respectively. A regression curve was obtained using a neural network to predict the user's preferred waistline height from the three input parameters. A two-layer feedforward neural network with 50 hidden units was trained with a learning rate of 0.00005 and a maximum of 50,000 epochs using a mean squared error loss function to obtain the regression curve. The regression curve resulted in a mean error of 0.8570 inches and a standard deviation of 0.7914 inches, with an R2 value of 0.75021. Using image processing techniques, the waist circumference was then obtained at the preferred waistline. Figure 8a shows the neural network model, and Figure 8b shows the regression curve used to obtain the user's preferred waistline height. A two-degree polynomial fitted regression curve was also obtained for comparison, resulting in an R2 value of 0.8947, as shown in Figure 8c. The regression curve obtained by the neural network showed a bias below the original heights of the preferred waistlines because of the trend of the two input parameters (the proportion-based waistline location and the narrowest abdomen location mentioned above).
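A minimal sketch of this regression step is shown below (Python with scikit-learn as a stand-in for the original training code; the layer size, learning rate, and epoch budget follow the text, while the training data are placeholders rather than the study data):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Sketch of the preferred-waistline regression: a small feedforward network maps
# (height, proportion-based waistline height, narrowest-abdomen height) to the
# subject's preferred waistline height. Placeholder data, all in inches.
X = np.array([[68.0, 42.5, 43.0],
              [70.0, 43.8, 44.2],
              [64.0, 40.0, 40.8]])
y = np.array([41.0, 42.3, 39.2])   # preferred waistline height from the ground

model = MLPRegressor(hidden_layer_sizes=(50,),   # one hidden layer of 50 units plus the output layer
                     learning_rate_init=5e-5,
                     max_iter=50000,
                     random_state=0)
model.fit(X, y)
print(model.predict([[69.0, 43.1, 43.6]]))
```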
For the lower hip and thigh circumference measurements, we relied on the definition of the hip from ISO [35] when taking the original measurement. According to the definition, the lower hip circumference was measured around the buttocks area at the level of the greatest lateral projections horizontally while the subject is in a standing position. The highest thigh position was measured for the thigh circumference. The SmartFit Measurement application uses image processing techniques to obtain the lower hip circumference (by taking the maximum width of the silhouette image of the subject in the lower body region) and the body proportion algorithm to obtain the thigh circumference (at five eighths of the body height from the top of the image).
Using the binarized image, the measurements were then obtained. For the measurements, the extracted region was converted to a silhouette image. At the preferred waistline position of the subject's image, the pixel values were summed across the width. Each white pixel has a value of 255, and each black pixel has a value of 0, so the sum of the pixel values along a row, divided by 255, gives the number of pixels covering the human body on that row. Figure 9 shows the pixel representation of the binary image and the process of body measurement.
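A minimal sketch of this row-wise width measurement (Python with NumPy, combining the pixel count with the ratio from Equation (2)) is given below:

```python
import numpy as np

# Sketch of the width measurement: on the binarized silhouette (subject pixels
# white with value 255), sum one row and divide by 255 to count the pixels the
# body spans, then convert to inches with the ratio from Equation (2).

def body_width_inches(silhouette: np.ndarray, row: int, height_in: float) -> float:
    pixel_count = silhouette[row, :].sum() / 255    # white pixels on that row
    ratio = height_in / silhouette.shape[0]         # inches per pixel, Equation (2)
    return pixel_count * ratio
```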

3.5. 3D Reconstruction of the Measurement Areas

In our proposed body size measurement technique, we focused on three measurements, namely the waist, lower hip, and thigh circumferences. From the frontal and lateral measurements obtained, the waist, lower hip, and thigh areas were calculated, which were considered to be ellipse-shaped [15]. The axis lengths were obtained, and using Equation (9), the approximated values for the measurements were calculated. Figure 10 shows the cross-sectional area and measurement for the waist, lower hip, and thigh regions of a subject. Equation (9) was used to determine the body measurements from the image:
$C = 2\pi \sqrt{\dfrac{a^2 + b^2}{2}}$, (9)
where a and b are the long- and short-axis lengths, respectively, and C is the circumference of the regions, as shown in Figure 10.
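A minimal sketch of Equation (9) is given below (Python; it assumes a and b are the semi-axis lengths, i.e., half of the frontal and lateral widths, and the example widths are hypothetical):

```python
import math

# Sketch of Equation (9): the waist, lower hip, and thigh cross-sections are
# treated as ellipses whose axes come from the frontal and lateral silhouette
# widths, and the circumference is approximated as 2*pi*sqrt((a^2 + b^2) / 2).

def circumference_in(frontal_width_in: float, lateral_width_in: float) -> float:
    a = frontal_width_in / 2.0   # semi-axis from the frontal silhouette
    b = lateral_width_in / 2.0   # semi-axis from the lateral silhouette
    return 2.0 * math.pi * math.sqrt((a ** 2 + b ** 2) / 2.0)

# Hypothetical example: 12 in frontal width and 9 in lateral depth at the waist.
print(round(circumference_in(12.0, 9.0), 1))  # ~33.3 in
```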
The measurements were then incorporated with an existing 3D mannequin to emulate the 3D reconstruction of the user’s body as shown in Figure 11.

3.6. Developed Smartphone Application

A smartphone application, SmartFit Measurement, was developed on the Android platform [41]. The application runs on Android 7 and later versions. Specifically, the Google Pixel 3a Android smartphone was used to obtain the measurements with the developed SmartFit Measurement application. The application was also tested on other Android phones (e.g., the Xiaomi Redmi Note 5 Pro and Samsung Galaxy Note 10+) with the same measurement accuracy. The application is able to run on smartphones with Android 7.0 (Nougat) [42], a minimum of 2 GB of RAM, and a 1.3 GHz quad-core processor. The developed smartphone application performs each of the steps internally, so the image processing and machine learning steps are incorporated. Figure 12 shows each of the steps of using the application and the calculated measurements of the individual.

4. Results

A total of 12 volunteer subjects were included in the study (as mentioned in Section 2). Each subject's body measurements were obtained using the application. In addition, the original measurements were obtained by a manual process (i.e., tape measurement at their preferred waistline, lower hip, and thigh regions). The three measurements for each subject are the waist, lower hip, and thigh circumferences, respectively. The results are given in Table 1.
Data acquisition was performed considering the different kinds of clothing that the subjects wore. For example, subjects wearing loose clothing, faded jeans, camouflage clothing, or light-colored clothing were also included in this study. Figure 13 shows the bar chart of the original and obtained measurements.
Our proposed technique resulted in 95.59% accuracy with a standard deviation of error of 1.898 inches from the obtained data (mentioned in Table 1) using Equations (10) and (11) below, respectively:
$\mathrm{Accuracy} = \left(1 - \dfrac{\left|\mathrm{Original\ measurement} - \mathrm{Obtained\ measurement}\right|}{\mathrm{Original\ measurement}}\right) \times 100\%$ (10)
$\mathrm{Std.\ deviation} = \sqrt{\dfrac{\sum \left(\mathrm{Original\ measurement} - \mathrm{Obtained\ measurement}\right)^2}{\mathrm{Number\ of\ samples}}}$ (11)
A paired t-test gave no evidence of a significant difference between the original and obtained measurements. At a 95% confidence level, the error lies between −0.72 and 0.34 inches, with a margin of error of 0.5346 inches. Discrepancies in the measurements may arise from clothing preference, as some of the subjects felt more comfortable in loose clothing. A total of three subjects wore loose clothing. The accuracy of the measurements for the subjects wearing loose clothing was 93.49%, whereas the subjects wearing fitted clothing resulted in an accuracy of 96.17%. Hence, the accuracy for loose and faded-color clothing was about 2.7 percentage points lower than for fitted clothing. Specifically, the accuracy for the 10 male subjects was 94.90%, while the accuracy for the 2 female subjects was 96.73%.
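A minimal sketch of Equations (10) and (11) and the paired t-test (Python with NumPy and SciPy; the four example values are taken from the first rows of Table 1 purely for illustration) is given below:

```python
import numpy as np
from scipy import stats

# Sketch of Equations (10) and (11) and the paired t-test from Section 4,
# applied to arrays of original (tape) and obtained (app) measurements.
original = np.array([33.0, 37.0, 20.0, 32.0])   # first rows of Table 1, in inches
obtained = np.array([33.0, 38.0, 24.0, 32.5])

accuracy = (1 - np.abs(original - obtained) / original).mean() * 100   # Equation (10), averaged
std_dev = np.sqrt(((original - obtained) ** 2).mean())                 # Equation (11)
t_stat, p_value = stats.ttest_rel(original, obtained)                  # paired t-test

print(f"accuracy {accuracy:.2f}%, std. dev. {std_dev:.2f} in, p = {p_value:.3f}")
```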

5. Discussion

The proposed smartphone application in this study uses a unique method to obtain the measurements which is more convenient than the existing 3D reconstruction-based time of flight (ToF) camera methods, as shown in Table 2.
In this paper, we have explored the usability of smartphones for the online shopping of fashion garment products, especially pants, as the impact of smartphone-based mobile shopping has yet to be fully explored [45,46]. Our proposed smartphone application is expected to run on other mobile devices (e.g., tablets) or PCs in the future. As future work, the effect of posture variability on the accurate extraction of the measurements will be considered. The developed smartphone application, SmartFit Measurement, can be linked to any manufacturer's product database, and with a specific garment suggestion algorithm, suggested clothing can be provided to consumers; this is expected to be evaluated in our future work.

6. Conclusions

In this paper, we have assessed the usability of an easy, convenient, and accurate smartphone-based method for obtaining the body measurements of an individual. This application has the potential to solve the garment fitting issues of the currently available methods and to provide a better alternative in the market by using a unique algorithm to obtain body measurements. Furthermore, it could be an essential way to reduce product returns due to incorrect fitting, which has been a major pain point for the fashion industry. With an accuracy of 95.59%, this solution is expected to replace the existing methods, provide a convenient garment shopping experience for consumers, and increase revenue for the online apparel e-commerce industry. Our future work is to measure additional areas of the body, as well as to connect the body measurement data collected with our method to garment data for accurate garment fit detection.

Author Contributions

K.H.F. designed the analysis, developed the software, wrote the original and revised manuscript, and conducted data analysis and details of the work; H.-J.C. designed the research experiment, verified data, and conducted statistical analysis; F.B. verified the data and analysis; and J.-W.C. conceptualized and designed the research experiment, wrote the original and revised drafts, designed, redesigned, and verified the image data analysis, and guided the direction of the work. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted following the IRB protocol (IRB2020-482) at Texas Tech University.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Reagan, C. That Sweater you Don’t Like Is a Trillion-Dollar Problem for Retailers. These Companies Want to Fix It. January 2019. Available online: https://www.cnbc.com/2019/01/10/growing-online-sales-means-more-returns-and-trash-for-landfills.html (accessed on 26 April 2021).
  2. Xia, S.; Guo, S.; Li, J.; Istook, C. Comparison of body measuring techniques: Whole body scanner, handheld scanner, and tape measure. In Proceedings of the International Textile and Apparel Association Annual Conference, St. Petersburg, FL, USA, 14–18 November 2017. [Google Scholar]
  3. Shpunt, A.; Pesach, B.; Akerman, R. Scanning Projectors and Image Capture Modules for 3D Mapping. U.S. Patent 9,098,931, 4 August 2015. [Google Scholar]
  4. Peng, F.; Sweeney, D.; Delamore, P. Improved fit: Designing 3D body scanning apps for fashion. In Proceedings of the 5th International Conference on Mass Customization and Personalization in Central Europe (MCP-CE 2012), Novi Sad, Serbia, 19–21 September 2012. [Google Scholar]
  5. Chang, H.J.J.J.; Bruess, F.; Chong, J.W.; Foysal, K. Retail Technologies Leading Resurgence for Small Independent Fashion Retailers: A Thematic Analysis Related to the TOE Framework. In Proceedings of the International Textile and Apparel Association Annual Conference, online, 18–20 November 2020. [Google Scholar]
  6. Simmons, K.P.; Istook, C.L. Body measurement techniques. J. Fash. Mark. Manag. Int. J. 2003, 7, 306–332. [Google Scholar] [CrossRef]
  7. Xia, S.; West, A.; Istook, C.; Li, J. Acquiring accurate body measurements on a smartphone from supplied colored garments for online apparel purchasing platforms and e-retailers. In Proceedings of the 3DBODY.TECH 2018—9th International Conference and Exhibition on 3D Body Scanning and Processing Technologies, Lugano, Switzerland, 16–17 October 2018; pp. 126–130. [Google Scholar]
  8. Daanen, H.; Psikuta, A. 3D body scanning. In Automation in Garment Manufacturing; Nayak, R., Padhye, R., Eds.; The Textile Institute Book Series; Elsevier: Amsterdam, The Netherlands, 2018; Chapter 10. [Google Scholar]
  9. Foysal, K.H.; Chang, H.J.; Bruess, F.; Chong, J.W. SmartFit: Smartphone Application for Garment Fit Detection. Electronics 2021, 10, 97. [Google Scholar] [CrossRef]
  10. Khairnar, A.A. Understanding the Reasons for Fit Variation in Manufacturing of Denim Jeans and Ways to Reduce Fit Variation; North Carolina State University: Raleigh, NC, USA, 2019. [Google Scholar]
  11. Kouchi, M.; Mochimaru, M. Errors in landmarking and the evaluation of the accuracy of traditional and 3D anthropometry. Appl. Ergon. 2011, 42, 518–527. [Google Scholar] [CrossRef]
  12. Tipton, P.H.; Aigner, M.J.; Finto, D.; Haislet, J.A.; Pehl, L.; Sanford, P.; Williams, M. Consider the accuracy of height and weight measurements. Nursing 2012, 42, 50–52. [Google Scholar] [CrossRef] [PubMed]
  13. Kushi, L.H.; Kaye, S.A.; Folsom, A.R.; Soler, J.T.; Prineas, R.J. Accuracy and reliability of self-measurement of body girths. Am. J. Epidemiol. 1988, 128, 740–748. [Google Scholar] [CrossRef] [PubMed]
  14. Boisvert, J.; Shu, C.; Wuhrer, S.; Xi, P. Three-dimensional human shape inference from silhouettes: Reconstruction and validation. Mach. Vis. Appl. 2013, 24, 145–157. [Google Scholar] [CrossRef] [Green Version]
  15. Li, C.; Cohen, F. In-home application (App) for 3D virtual garment fitting dressing room. Multimed. Tools Appl. 2021, 80, 5203–5224. [Google Scholar] [CrossRef]
  16. Chen, Y.; Kim, T.-K.; Cipolla, R. Inferring 3D shapes and deformations from single views. In Proceedings of the European Conference on Computer Vision, Crete, Greece, 5–11 September 2010; pp. 300–313. [Google Scholar]
  17. McGhee, D.E.; Ramsay, L.G.; Coltman, C.E.; Gho, S.A.; Steele, J.R. Bra band size measurements derived from three-dimensional scans are not accurate in women with large, ptotic breasts. Ergonomics 2017, 61, 464–472. [Google Scholar] [CrossRef]
  18. Spector, D.; Nefian, A.; Joshi, P.V.; Zak, H. Determining Dimension of Target Object in an Image Using Reference Object. U.S. Patent 9,489,743, 7 October 2016. [Google Scholar]
  19. Pixel-Aspect-Ratio. Available online: https://en.wikipedia.org/wiki/Pixel_aspect_ratio (accessed on 26 April 2021).
  20. Tirunelveli, G.; Gordon, R.; Pistorius, S. Comparison of square-pixel and hexagonal-pixel resolution in image processing. In Proceedings of the IEEE CCECE 2002 Canadian Conference on Electrical and Computer Engineering (Cat. No.02CH37373), Winnipeg, MB, Canada, 12–15 May 2002; pp. 867–872. [Google Scholar]
  21. Kim, J.-S.; Jin, C.-G.; Lee, S.-K.; Lee, S.-G.; Choi, C.-U. Geometric calibration and accuracy evaluation of smartphone camera. J. Korean Soc. Geospat. Inf. Syst. 2011, 19, 115–125. [Google Scholar]
  22. Lens-Correction. Available online: https://www.samsung.com/us/support/troubleshooting/TSG01001426/ (accessed on 26 April 2021).
  23. Aicardi, I.; Dabove, P.; Lingua, A.; Piras, M. Sensors integration for smartphone navigation: Performances and future challenges. ISPRS—Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2014, 40, 9–16. [Google Scholar] [CrossRef] [Green Version]
  24. Shih, Y.; Lai, W.-S.; Liang, C.-K. Distortion-free wide-angle portraits on camera phones. ACM Trans. Graph. 2019, 38, 1–12. [Google Scholar] [CrossRef] [Green Version]
  25. Kanan, C.; Cottrell, G.W. Color-to-grayscale: Does the method matter in image recognition? PLoS ONE 2012, 7, e29740. [Google Scholar] [CrossRef] [Green Version]
  26. Boubezari, R.; Le Minh, H.; Ghassemlooy, Z.; Bouridane, A. Smartphone camera based visible light communication. J. Lightwave Technol. 2016, 34, 4121–4127. [Google Scholar] [CrossRef]
  27. DiMauro, G.; Di Pierro, D.; Maglietta, R.; Reno, V.; Caivano, D.; Gelardi, M. RhinoSmart: A smartphone based system for rhino-cell segmentation. In Proceedings of the 2020 5th International Conference on Smart and Sustainable Technologies (SpliTech), Split, Croatia, 23–26 September 2020; pp. 1–6. [Google Scholar]
  28. Bay, H.; Ess, A.; Tuytelaars, T.; Van Gool, L. Speeded-up robust features (SURF). Comput. Vis. Image Underst. 2008, 110, 346–359. [Google Scholar] [CrossRef]
  29. Patel, M.S.; Patel, N.M.; Holia, M.S. Feature based multi-view image registration using SURF. In Proceedings of the 2015 International Symposium on Advanced Computing and Communication (ISACC), Silchar, India, 14–15 September 2015; pp. 213–218. [Google Scholar]
  30. Yuan, M.; Khan, I.R.; Farbiz, F.; Yao, S.; Niswar, A.; Foo, M.-H. A Mixed Reality Virtual Clothes Try-On System. IEEE Trans. Multimed. 2013, 15, 1958–1968. [Google Scholar] [CrossRef]
  31. Derpanis, K.G. Overview of the RANSAC Algorithm. Image Rochester NY 2010, 4, 2–3. [Google Scholar]
  32. Juan, L.; Oubong, G. SURF applied in panorama image stitching. In Proceedings of the 2010 2nd International Conference on Image Processing Theory, Tools and Applications, Paris, France, 7–10 July 2010; pp. 495–499. [Google Scholar]
  33. Savva, S.; Tornaritis, M.; Savva, M.; Kourides, Y.; Panagi, A.; Silikiotou, N.; Georgiou, C.; Kafatos, A. Waist circumference and waist-to-height ratio are better predictors of cardiovascular disease risk factors in children than body mass index. Int. J. Obes. 2000, 24, 1453–1458. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  34. Bogin, B.; Varela-Silva, M.I. Leg length, body proportion, and health: A review with a note on beauty. Int. J. Environ. Res. Public Health 2010, 7, 1047–1075. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  35. ISO-Waistline. Available online: https://www.iso.org/obp/ui/#iso:std:iso:8559:ed-1:v1:en (accessed on 26 April 2021).
  36. Veitch, D. Where is the human waist? Definitions, manual compared to scanner measurements. Work 2012, 41, 4018–4024. [Google Scholar] [CrossRef] [Green Version]
  37. Arnheim, R. A review of proportion. J. Aesthet. Art Crit. 1955, 14, 44–57. [Google Scholar] [CrossRef]
  38. Lee, K.; Choo, C.Y.; See, H.Q.; Tan, Z.J.; Lee, Y. Human detection using Histogram of oriented gradients and Human body ratio estimation. In Proceedings of the 2010 3rd International Conference on Computer Science and Information Technology, Chengdu, China, 9–11 July 2010; Volume 4, pp. 18–22. [Google Scholar]
  39. Wang, J.; Thornton, J.C.; Bari, S.; Williamson, B.; Gallagher, D.; Heymsfield, S.B.; Horlick, M.; Kotler, D.; Laferrère, B.; Mayer, L.; et al. Comparisons of waist circumferences measured at 4 sites. Am. J. Clin. Nutr. 2003, 77, 379–384. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  40. CAESAR. Available online: https://apps.dtic.mil/dtic/tr/fulltext/u2/a408374.pdf (accessed on 25 April 2021).
  41. Gilski, P.; Stefanski, J. Android os: A review. TEM J. 2015, 4, 116. [Google Scholar]
  42. Android, O. Available online: https://www.android.com/versions/nougat-7-0/ (accessed on 26 April 2021).
  43. Xiaohui, T.; Xiaoyu, P.; Liwen, L.; Qing, X. Automatic human body feature extraction and personal size measurement. J. Vis. Lang. Comput. 2018, 47, 9–18. [Google Scholar] [CrossRef]
  44. Apeagyei, P.R. Application of 3D body scanning technology to human measurement for clothing Fit. Int. J. Digit. Content Technol. Its Appl. 2010, 4, 58–68. [Google Scholar] [CrossRef] [Green Version]
  45. Hubert, M.; Blut, M.; Brock, C.; Backhaus, C.; Eberhardt, T. Acceptance of smartphone-based mobile shopping: Mobile benefits, customer characteristics, perceived risks, and the impact of application context. Psychol. Mark. 2017, 34, 175–194. [Google Scholar] [CrossRef] [Green Version]
  46. Groß, M. Exploring the acceptance of technology for mobile shopping: An empirical investigation among Smartphone users. Int. Rev. Retail Distrib. Consum. Res. 2015, 25, 215–235. [Google Scholar] [CrossRef]
Figure 1. Flowchart of our proposed algorithm.
Figure 2. Image capture process with the smartphone from a (a) frontal and (b) lateral view. The subject is asked to capture the image with their head and toes touching the top and bottom of the camera view, respectively.
Figure 3. RGB representation of the smartphone image. (a) Red, green, and blue planes of the RGB image. (b) Bit representation of the RGB image.
Figure 4. Otsu’s thresholding for image binarization. (a) Grayscale image, (b) image histogram, and (c) binarized image using an optimal threshold of T = 3.
Figure 5. Background removal process. (a) Background image, (b) image with the subject, and (c) the subject’s image with the background removed.
Figure 6. Image registration process. (a) Image registration using SURF feature points. (b) Background registered onto the subject's image.
Figure 7. Human body ratio (The waist, lower hip, and thigh areas are indicated).
Figure 8. (a) Neural network model for estimating the heights of the waistlines of the users’ respective preferences and a regression curve obtained for prediction of the waistline using (b) a neural network and (c) polynomial fitting. Here, the red circles are the waistlines preferred by the individual subjects, and the blue and green curves are the regression curves, which were used to obtain the predicted waistline of the user’s own preference.
Figure 9. The binarized image of a subject. The lower hip measurement is calculated by the area in black.
Figure 10. Approximated area of the measurements. The waist, lower hip and thigh areas are considered to be ellipses.
Figure 11. 3D reconstruction from two 2D images of the user. (a) Frontal view. (b) Lateral view.
Figure 12. Steps for using the SmartFit Measurement application.
Figure 13. Bar chart of the original and obtained measurements.
Table 1. Waist, lower hip, and thigh measurements for 12 subjects.
Subject | Measurement | Original Measurement (in) | Obtained Measurement (in) | Error (%)
1 | Waist | 33.0 | 33.0 | 0.0
  | Lower hip | 37.0 | 38.0 | 2.7
  | Thigh | 20.0 | 24.0 | 20.0
2 | Waist | 32.0 | 32.5 | 1.5
  | Lower hip | 39.0 | 39.7 | 1.9
  | Thigh | 20.0 | 20.9 | 4.9
3 | Waist | 35.0 | 35.0 | 0.0
  | Lower hip | 38.0 | 36.7 | 3.4
  | Thigh | 20.0 | 18.6 | 6.7
4 | Waist | 37.0 | 36.7 | 0.8
  | Lower hip | 38.0 | 33.5 | 11.8
  | Thigh | 20.0 | 22.6 | 13.3
5 | Waist | 38.5 | 39.4 | 2.3
  | Lower hip | 40.0 | 41.7 | 4.2
  | Thigh | 23.0 | 22.7 | 1.3
6 | Waist | 37.0 | 37.3 | 0.8
  | Lower hip | 42.0 | 38.1 | 9.2
  | Thigh | 22.0 | 20.3 | 7.4
7 | Waist | 30.0 | 30.6 | 2.0
  | Lower hip | 38.5 | 38.1 | 0.8
  | Thigh | 21.0 | 22.8 | 8.5
8 | Waist | 45.0 | 42.0 | 6.6
  | Lower hip | 46.0 | 43.2 | 6.0
  | Thigh | 29.0 | 28.3 | 2.1
9 | Waist | 36.5 | 38.1 | 4.5
  | Lower hip | 39.5 | 38.9 | 1.4
  | Thigh | 21.5 | 20.7 | 3.7
10 | Waist | 38.0 | 39.7 | 4.4
  | Lower hip | 39.5 | 39.0 | 1.2
  | Thigh | 20.5 | 21.8 | 6.3
11 | Waist | 36.0 | 39.7 | 10.2
  | Lower hip | 38.0 | 39.0 | 2.6
  | Thigh | 20.0 | 21.8 | 9.0
12 | Waist | 34.0 | 36.9 | 8.5
  | Lower hip | 38.0 | 37.6 | 1.0
  | Thigh | 21.0 | 21.3 | 1.5
Table 2. Comparison of the proposed method with existing methods for body size measurements.
Criterion | Xiaohui et al. [43] | Apeagyei et al. [44] | Proposed Method
Feature | 3D reconstruction-based focal point | 3D reconstruction | Measurement from 2D images
Complexity | Complex | Complex | Simple
Implementation | Virtual try-on machine | Camera array | Smartphone
Accuracy | 95.72% | - | 95.59%