Article

Modeling and Optimizing the System Reliability Using Bounded Geometric Programming Approach

1 Industrial Engineering Department, College of Engineering, King Saud University, P.O. Box 800, Riyadh 11421, Saudi Arabia
2 Department of Management Studies, Indian Institute of Science, Bangalore 560012, India
3 Department of Statistics and Operations Research, Aligarh Muslim University, Aligarh 202002, India
4 School of Science, College of Sciences, Technology, Engineering, Mathematics, RMIT University, GPO Box 2476, Melbourne, VIC 3001, Australia
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Submission received: 23 June 2022 / Revised: 6 July 2022 / Accepted: 11 July 2022 / Published: 13 July 2022
(This article belongs to the Special Issue Advances in Reliability Modeling, Optimization and Applications)

Abstract

The geometric programming problem (GPP) is a useful mathematical programming problem for modeling and optimizing nonlinear optimization problems in various engineering fields. The structural configuration of the GPP is dynamic and flexible enough to model and fit reliability optimization problems efficiently. The motivation of this work is to introduce a bounded solution approach for the GPP that accounts for variation among the right-hand-side parameters. The bounded solution method uses two-level mathematical programming problems and obtains the value of the objective function within a specified interval. A benefit of the bounded solution approach is that no sensitivity analysis of the output results is needed. The proposed approach is demonstrated by applying it to a system reliability optimization problem. A specific interval is determined for the objective values and is found to lie in the optimal range. Based on the findings, concluding remarks are presented.

1. Introduction

Mathematical programming problems take different forms based on the nature of the objective functions and constraints. The geometric programming problem, a typical form of mathematical optimization characterized by objective and constraint functions of a particular form, was introduced by [1]. Later, advanced studies in the domain of the GPP were performed by [2,3]. Several engineering applications [4] have investigated the effectiveness and importance of the GPP. The GP optimization approach often outperforms other existing techniques because it works with the relative magnitudes of the terms of the objective function rather than with the decision variables directly. The GP technique's basic working principle is first to find the optimal value of the objective function and then to proceed to determine the optimal values of the design variables. This characteristic of the GPP is essential and fruitful in circumstances where the decision-makers are interested in first finding the optimal value of the objective function. Moreover, the polynomial structure of the objectives and constraints leads the GPP toward a simpler convex solution space [2,4], so GP optimization techniques can tackle such problems with comparatively little computational effort to obtain the optimum design vectors. One of the GP technique's most important advantages over others is that it reduces complex optimization problems to sets of simple algebraic equations. On the other hand, the GP approach deals only with posynomial algebraic terms, meaning that it handles only objective functions and constraints with posynomial structures, which can be considered a significant drawback.
Many engineering optimization problems deal with the manufacturing and production processes of products, machinery parts, and the raw equipment used in final usable machines and products. They concern the structure, dimensions, quality, and specifications of the raw material parts that must be transformed and converted into usable products. For example, cofferdams, shafts, journal bearings, etc., are raw parts that serve as the building blocks of various products and machines. Hence, mathematical models with the specifications of these raw parts are built up and further used in manufacturing and producing the final products. Sometimes, perfect specifications cannot be achieved due to vagueness or technical errors in the functioning machine, for which the experts/managers allow some marginal variation in the specifications and dimensions of such raw parts; the deviation can then be managed or adjusted to some extent. In system reliability modeling, various parameters can likewise be taken as varying between some specified intervals. This means that the parameters can be treated as uncertain and, using appropriate tools, converted into crisp ones. In the literature, the concepts of fuzzy and random parameters are available, which deal with vagueness and randomness in the parameters. Here, however, we provide the opportunity to define the parameters under continuous variation bounded by upper and lower limits. Instead of taking vague or random parameters, one can assume continuous variation in the parameters' values, which can be tackled with the two-level mathematical programming techniques discussed in this paper. Additionally, sensitivity and post-optimality analyses of the obtained results are waived due to the working procedure of the proposed approach.
Hence, the proposed bounded approach for the GPP can be easily implemented on various non-engineering problems as well when dealing with varying parameters.
The remainder of the paper is organized as follows: In Section 2, some relevant literature is discussed, while Section 3 presents the basic concepts and modeling of standard geometric programming problems along with the proposed bounded solution methods. The computational study is presented with a particular focus on system reliability optimization in Section 4, where the computational complexity is also analyzed and compared with other existing approaches. Finally, conclusions and the future scope of the present work are discussed in Section 5.

2. Literature Review

The GPP is a relatively recent method of solving nonlinear programming problems. It is used to minimize functions in the form of posynomials subject to constraints of the same type. Practical algorithms have been developed for solving geometric programming problems [1,2,3]. Liu [5] proposed the posynomial GPP subject to fuzzy relation inequalities. In 2018, Lu and Liu [6] also studied a class of posynomial GPPs, considering the evaluation of a posynomial GPP subject to fuzzy relational equations with max–min composition. Ahmad and Adhami [7] addressed an interval-based solution approach for solving transportation problems under varying input parameters. Chakraborty et al. [8] discussed the multiobjective GPP with the aid of fuzzy geometry. Garg et al. [9] presented the reliability optimization problem in an intuitionistic environment. Islam and Roy [10] investigated a modified GPP and applied it to many engineering problems. Islam and Roy [11] developed a new multiobjective GP model and used it to solve the transportation problem. Recently, interesting studies on the GPP were presented by [12,13,14,15]. Khorsandi et al. [16] developed a new optimization technique for the GPP. Mahapatra and Roy [17] also solved the reliability of a system using the GPP approach.
Geometric programming research in the field of reliability optimization is, moreover, being performed in the context of both mathematical modeling and real-life applications. Some recent work is also available on system reliability, making a significant contribution to the literature. Negi et al. [18] presented a hybrid optimizer model for system reliability. Roustaee and Kazemi [19] developed a stochastic model for a multi-microgrid constrained reliability system and applied it to clean energy management. Zolfaghari and Mousavi [20] proposed an integrated system reliability model for inbuilt components under uncertainty. Sedaghat and Ardakan [21] developed a novel computational strategy for redundant components in system reliability optimization. Meng et al. [22] treated interval parameters by the sequential moving asymptote method for system reliability based on an integrated coefficient approach. Kugele et al. [23] presented research integrating second-degree difficulty into carbon-emission-controlled, reliable, smart production management and implemented it on a computational dataset. Son et al. [24] used the modeling texture of the GPP in a levelized-cost-of-energy-oriented modular string inverter design and discussed it in the field of PV generation systems. Shen et al. [25] introduced a novel method for energy-efficient ultrareliability using an outage probability bound and the GPP technique. Rajamony et al. [26] designed multi-objective single-phase differential buck inverters considering active power decoupling and applied them to power generation. Singh and Singh [27] suggested a geometric programming approach for optimizing multi-VM migration by allocating transfer and compression.
All the above studies are confined to either fuzzy- or stochastic-based approaches, but it may be possible that input parameters vary within some specified intervals bounded by upper and lower bounds. In this situation, the fuzzy and stochastic approaches may not be applied successfully. Thus, to overcome this issue, we developed a bounded solution method comprising a two-level GPP, in which the values of the objective function are obtained directly. Hence, the present study lays down a new direction for obtaining the optimal solution under varying parameters. The proposed method is applied to system reliability optimization problems and yields a result without affecting the system reliability under variations.

3. Geometric Programming Problem: Basic Concepts

In this section, we discuss some important basic concepts related to geometric programming problems.

3.1. Basic Concepts

Definition 1
(Monomial). The word "monomial" is derived from mono-, meaning only one, and -mial, meaning term. Therefore, a monomial literally means "an expression in algebra having only one term".
Thus, if $x_1, x_2, \ldots, x_n$ represent $n$ non-negative variables, then a real-valued function $F$ of $x$ of the following form
$$F(x) = c\, x_1^{a_1} x_2^{a_2} \cdots x_n^{a_n},$$
where $c > 0$ and $a_i \in \mathbb{R}$, is known as a monomial function.
Illustrative Example 1: If $a$, $b$, and $c$ are non-negative variables, then $6$, $0.84$, $9a^3b^9$, and $17c/a$ are monomials, but $5 + a$, $6a - 8c$, and $7(a - 8a^6b^7)$ are not monomials.
Definition 2
(Polynomial). The word "polynomial" is derived from poly-, meaning many, and -mial, meaning term. Therefore, a polynomial literally means "an expression in algebra having many terms", i.e., a sum of monomials.
Suppose $x_1, x_2, \ldots, x_n$ represent $n$ non-negative variables; then the sum of one or more monomials, i.e., a real-valued function $F$ of $x$ of the form
$$F(x) = \sum_{i=1}^{m} c_i\, x_1^{a_{1i}} x_2^{a_{2i}} \cdots x_n^{a_{ni}},$$
where $a_{ni} \in \mathbb{R}$, is known as a polynomial function or simply a polynomial.
Illustrative Example 2: If $a$, $b$, and $c$ are non-negative variables, then $6$, $0.84$, $9a^3b^9$, $5c/a$, $6a - 8c$, and $7(a - 8a^6b^7)$ are polynomials.
Definition 3
(Posynomial). If the coefficients $c_i > 0$ in the polynomial, then it is called a posynomial. Therefore, the sum of one or more monomials in the following form of a real-valued function $F$ of $x$:
$$F(x) = \sum_{i=1}^{m} c_i\, x_1^{a_{1i}} x_2^{a_{2i}} \cdots x_n^{a_{ni}}, \qquad c_i > 0,$$
where $a_{ni} \in \mathbb{R}$, is called a posynomial function or simply a posynomial.
Illustrative Example 3: If $a$, $b$, and $c$ are non-negative variables, then $6$, $0.84$, $9a^3b^9$, and $17c/a + a^7b^4$ are posynomials, but $5 - a$, $6a - 8c$, and $7(a - 8a^6b^7)$ are not posynomials.
Note 1: The term posynomial is used to suggest a combination of positive and polynomial, that is POSITIVE + POLYNOMIAL = POSYNOMIAL.
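The three definitions above can be made concrete with a few lines of code. The sketch below is illustrative only (the function name `posynomial_value` and the sample data are ours, not the paper's): it evaluates a posynomial $\sum_i c_i \prod_j x_j^{a_{ij}}$ from a coefficient vector and an exponent matrix; a monomial is simply the one-term special case.

```python
# Illustrative sketch (ours, not from the paper): evaluating a posynomial
# sum_i c_i * prod_j x_j ** a[i][j] at a point x with positive entries.

def posynomial_value(c, a, x):
    """c: positive coefficients; a: exponent matrix a[i][j]; x: positive variables."""
    if any(ci <= 0 for ci in c):
        raise ValueError("posynomial coefficients must be positive")
    total = 0.0
    for ci, row in zip(c, a):
        term = ci
        for xj, aij in zip(x, row):
            term *= xj ** aij  # exponents may be any real number
        total += term
    return total

# A monomial is the special case of a single term:
# 6 * x1**2 * x2**-1 at x1 = 2, x2 = 4 gives 6 * 4 / 4 = 6.
print(posynomial_value([6.0], [[2, -1]], [2.0, 4.0]))               # 6.0
# A two-term posynomial: adds the term x2, i.e. 6 + 4 = 10.
print(posynomial_value([6.0, 1.0], [[2, -1], [0, 1]], [2.0, 4.0]))  # 10.0
```

Note that negative exponents are allowed (as in $17c/a$ above); only the coefficients must be positive.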
Definition 4
(Degree of difficulty). The quantity $(N - n - 1)$ appearing in geometric programming is called the degree of difficulty. In the case of a constrained geometric programming problem, $N$ represents the total number of terms in all the posynomials and $n$ represents the number of design variables.
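Since the degree of difficulty is a simple count, it can be computed directly. The helper below is our own illustration (name and example values are not from the paper); it applies the formula $N - n - 1$ to the term counts of the objective and constraints.

```python
# Illustrative helper (ours): degree of difficulty of a constrained GPP,
# N - n - 1, where N is the total number of posynomial terms (objective plus
# all constraints) and n is the number of design variables.

def degree_of_difficulty(terms_per_posynomial, num_design_vars):
    N = sum(terms_per_posynomial)  # total terms across objective and constraints
    return N - num_design_vars - 1

# e.g. an objective with 2 terms and one constraint with 2 terms, in 3 variables:
print(degree_of_difficulty([2, 2], 3))  # 0
```

A degree of difficulty of zero is the pleasant case: the dual variables are then fully determined by the linear normality and orthogonality conditions alone.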
Note 2: The comparison and differences between monomial, polynomial, and posynomial are summarized in Table 1.

3.2. Geometric Programming Problems

Geometric programming problems fall under a class of nonlinear programming problems characterized by objective and constraint functions of a special form. The texture of the GPP is quite different from other mathematical programming problems and depends on the characterization of the decision variables in product form. Thus, the modeling structure of many engineering problems naturally adheres to the form of the GPP when optimizing real-life problems. It was introduced for the solution of algebraic nonlinear programming problems under linear or nonlinear constraints and is also used to solve dynamic optimization problems. Its usefulness can be seen in its numerous applications in integrated circuit design, manufacturing system design, and project management. Therefore, the standard form of the GPP can be represented as follows (1):
$$\begin{aligned} F = \min_{x} \;& \sum_{k=1}^{l_0} c_{0k} \prod_{j=1}^{n} x_j^{\alpha_{0kj}} \\ \text{s.t.} \;& \sum_{k=1}^{l_i} c_{ik} \prod_{j=1}^{n} x_j^{\beta_{ikj}} \le 1, \quad i = 1, 2, \ldots, m, \\ & x_j \ge 0, \quad j = 1, 2, \ldots, n. \end{aligned} \tag{1}$$
where $l_0$ is the number of terms present in the objective function, while the $i$-th inequality constraint includes $l_i$ terms for $i = 1, 2, \ldots, m$. Geometric programming problems enjoy a strong duality theorem; hence, a geometric programming problem with highly nonlinear constraints can be depicted equivalently as one with only linear constraints. Moreover, if the primal problem is in posynomial form, then a global solution of the minimization-type problem can be determined by solving its dual maximization-type problem. The dual problem has the desirable characteristics of being linearly constrained and of having an objective function with attractive properties. This leads to the most promising solution methods for geometric programming problems.
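The strong duality just described can be illustrated on a toy problem of our own (it is not an example from the paper): minimize $4x + 9/x$ for $x > 0$. This GPP has zero degree of difficulty, so normality ($y_1 + y_2 = 1$) and orthogonality ($y_1 - y_2 = 0$) pin down the dual weights $y_1 = y_2 = 1/2$ with no optimization at all, and the dual objective evaluates the optimal cost directly.

```python
# Toy illustration (ours) of GP strong duality on  min_x 4x + 9/x, x > 0.
# Normality y1 + y2 = 1 and orthogonality (+1)y1 + (-1)y2 = 0 give
# y1 = y2 = 0.5, so the dual objective (c1/y1)^y1 * (c2/y2)^y2 is the
# optimal primal cost -- no search over x is needed.

y1, y2 = 0.5, 0.5
c1, c2 = 4.0, 9.0

dual_value = (c1 / y1) ** y1 * (c2 / y2) ** y2
print(round(dual_value, 6))  # ≈ 12.0, the optimal cost

# Cross-check against the primal objective at the minimizer x* = sqrt(9/4) = 1.5:
x_star = 1.5
print(4 * x_star + 9 / x_star)  # 12.0
```

The minimizer itself is then recovered from the weights (each term contributes the fraction $y_k$ of the optimal cost), which is exactly the "objective value first, design variables second" workflow described in the Introduction.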
Assume that we replace the right-hand-side term (RHS) of the constraints in the GPP (1). Then, the modified GPP can be given as follows (2):
$$\begin{aligned} \textbf{Primal:}\quad F = \min_{x} \;& \sum_{k=1}^{l_0} c_{0k} \prod_{j=1}^{n} x_j^{\alpha_{0kj}} = F_0(x) \ (\text{say}) \\ \text{s.t.} \;& \sum_{k=1}^{l_i} c_{ik} \prod_{j=1}^{n} x_j^{\beta_{ikj}} \le B_i, \quad i = 1, 2, \ldots, m, \\ & x_j \ge 0, \quad j = 1, 2, \ldots, n. \end{aligned} \tag{2}$$
where $B_i$, $i = 1, 2, \ldots, m$, are non-negative numbers. If $B_i = 1$ for all $i$, then the modified geometric programming problem (2) reduces to the standard geometric programming problem (1).
Consider the geometric programming problem (2) as the primal problem; its dual can then be presented as the geometric programming problem (4). For this purpose, we formulate an auxiliary geometric programming problem (3) by dividing the constraint coefficients by the RHS value $B_i$, which can be depicted as follows:
$$\begin{aligned} F = \min_{x} \;& \sum_{k=1}^{l_0} c_{0k} \prod_{j=1}^{n} x_j^{\alpha_{0kj}} \\ \text{s.t.} \;& \sum_{k=1}^{l_i} \frac{c_{ik}}{B_i} \prod_{j=1}^{n} x_j^{\beta_{ikj}} \le 1, \quad i = 1, 2, \ldots, m, \\ & x_j \ge 0, \quad j = 1, 2, \ldots, n. \end{aligned} \tag{3}$$
The derivation for the dual formulation of the geometric programming problem (2) can be carried out using the concept of [1,2,3]. Furthermore, the potential complexity in obtaining and solving the dual geometric programming problem (4) can be realized by the research work in [6,16]. Thus, the dual formulation of the geometric programming problem (2) is presented in the geometric programming problem (4).
$$\begin{aligned} \textbf{Dual:}\quad F = \max_{y} \;& \prod_{k=1}^{l_0} \left(\frac{c_{0k}}{y_{0k}}\right)^{y_{0k}} \prod_{i=1}^{m} \prod_{k=1}^{l_i} \left(\frac{c_{ik}\, y_{i0}}{B_i\, y_{ik}}\right)^{y_{ik}} = F(y) \ (\text{say}) \\ \text{s.t.} \;& \sum_{k=1}^{l_0} y_{0k} = 1, \\ & \sum_{k=1}^{l_0} \alpha_{0kj}\, y_{0k} + \sum_{i=1}^{m} \sum_{k=1}^{l_i} \beta_{ikj}\, y_{ik} = 0, \quad j = 1, 2, \ldots, n, \\ & y_{ik} \ge 0, \quad \forall\, i, k, \end{aligned} \tag{4}$$
where $y_{i0} = \sum_{k=1}^{l_i} y_{ik}$ for $i = 1, 2, \ldots, m$.
Theorem 1.
If $y$ is a feasible vector for the dual of the constrained posynomial geometric programming problem (2), then $F_0(x)^n \ge F(y)^n$.
Proof. 
The expression for $F_0(x)$ can be written as
$$F_0(x)^n = \prod_{i=1}^{n} \sum_{k=1}^{T_0} y_{ik}\, \frac{c_{0ik} \prod_{j=1}^{m} x_{ij}^{\alpha_{0ikj}}}{y_{ik}} \tag{5}$$
We can apply the weighted A.M.–G.M. inequality, $\sum_k y_k a_k \ge \prod_k a_k^{y_k}$ for weights with $\sum_k y_k = 1$, to this expression for $F_0(x)$ and obtain, using the normality condition $\sum_{k=1}^{T_0} y_{ik} = 1$,
$$F_0(x)^n \ge \prod_{i=1}^{n} \prod_{k=1}^{T_0} \left(\frac{c_{0ik}}{y_{ik}}\right)^{y_{ik}} \prod_{j=1}^{m} x_{ij}^{\sum_{k=1}^{T_0} \alpha_{0ikj}\, y_{ik}} \tag{6}$$
Again, the $r$-th constraint posynomial $F_r(x)$ can be written as
$$F_r(x) = \sum_{k=T_{r-1}+1}^{T_r} y_{ik}\, \frac{c_{rik} \prod_{j=1}^{m} x_{ij}^{\alpha_{rikj}}}{y_{ik}} \tag{7}$$
Applying the weighted A.M.–G.M. inequality in (7) with the weights $y_{ik}/\lambda_{ir}$, where $\lambda_{ir} = \sum_{k=T_{r-1}+1}^{T_r} y_{ik}$, we have
$$F_r(x)^{\lambda_{ir}} \ge \lambda_{ir}^{\lambda_{ir}} \prod_{k=T_{r-1}+1}^{T_r} \left(\frac{c_{rik}}{y_{ik}}\right)^{y_{ik}} \prod_{j=1}^{m} x_{ij}^{\sum_{k=T_{r-1}+1}^{T_r} \alpha_{rikj}\, y_{ik}}$$
for $r = 1, 2, \ldots, l$. Since $F_r(x) \le 1$ $(r = 1, 2, \ldots, l)$, we have $F_r(x)^{\lambda_{ir}} \le 1$, and therefore
$$1 \ge \prod_{i=1}^{n} \lambda_{ir}^{\lambda_{ir}} \prod_{k=T_{r-1}+1}^{T_r} \left(\frac{c_{rik}}{y_{ik}}\right)^{y_{ik}} \prod_{j=1}^{m} x_{ij}^{\sum_{k=T_{r-1}+1}^{T_r} \alpha_{rikj}\, y_{ik}} \tag{8}$$
Multiplying (6) by (8) over $r = 1, 2, \ldots, l$, we have
$$F_0(x)^n \ge \prod_{i=1}^{n} \prod_{k=1}^{T_0} \left(\frac{c_{0ik}}{y_{ik}}\right)^{y_{ik}} \prod_{r=1}^{l} \lambda_{ir}^{\lambda_{ir}} \prod_{k=T_{r-1}+1}^{T_r} \left(\frac{c_{rik}}{y_{ik}}\right)^{y_{ik}} \prod_{j=1}^{m} x_{ij}^{\sum_{k=1}^{T_0} \alpha_{0ikj}\, y_{ik} + \sum_{r=1}^{l} \sum_{k=T_{r-1}+1}^{T_r} \alpha_{rikj}\, y_{ik}} \tag{9}$$
Using the orthogonality conditions, every exponent of $x_{ij}$ in (9) vanishes, and the inequality becomes
$$F_0(x)^n \ge \prod_{i=1}^{n} \prod_{k=1}^{T_0} \left(\frac{c_{0ik}}{y_{ik}}\right)^{y_{ik}} \prod_{r=1}^{l} \lambda_{ir}^{\lambda_{ir}} \prod_{k=T_{r-1}+1}^{T_r} \left(\frac{c_{rik}}{y_{ik}}\right)^{y_{ik}} \tag{10}$$
i.e., $F_0(x)^n \ge F(y)^n$. This completes the proof. □
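The weighted A.M.–G.M. inequality that drives the proof, $\sum_k y_k a_k \ge \prod_k a_k^{y_k}$ for positive $a_k$ and weights $y_k$ summing to one, is easy to spot-check numerically. The short script below is our own sanity check, not part of the paper's development; it verifies the inequality on random positive data.

```python
# Numerical spot-check (ours) of the weighted AM-GM inequality used above:
# for positive a_k and weights y_k with sum(y_k) = 1,
#     sum_k y_k * a_k  >=  prod_k a_k ** y_k,
# with equality exactly when all a_k coincide.
import random

random.seed(7)
for _ in range(1000):
    n = random.randint(2, 6)
    a = [random.uniform(0.1, 10.0) for _ in range(n)]
    w = [random.uniform(0.1, 1.0) for _ in range(n)]
    s = sum(w)
    y = [wi / s for wi in w]          # normalized weights: sum(y) == 1
    arith = sum(yi * ai for yi, ai in zip(y, a))
    geom = 1.0
    for ai, yi in zip(a, y):
        geom *= ai ** yi
    assert arith >= geom - 1e-12      # inequality holds up to rounding

print("weighted AM-GM verified on 1000 random instances")
```

This is precisely the step that turns each posynomial sum into the product appearing in the dual objective.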
Theorem 2.
Suppose that the constrained posynomial geometric programming problem (2) is super-consistent and that $x^*$ is a solution of the primal GP. Then, the corresponding dual problem DP (4) is consistent and has a solution $y^*$ that satisfies
$$F_0(x^*)^n = F(y^*)^n$$
and
$$y_{ik}^* = \begin{cases} \dfrac{u_{ik}(x^*)}{F_0(x^*)}, & i = 1, 2, \ldots, n; \ k = 1, 2, \ldots, T_0, \\[2mm] \lambda_{ir}(y^*)\, u_{ik}(x^*), & i = 1, 2, \ldots, n; \ k = T_{r-1}+1, \ldots, T_r; \ r = 1, 2, \ldots, l. \end{cases}$$
Proof. 
Since GP is super-consistent, so is the associated convex program CGP. Furthermore, since GP has a solution $x^* = (x_{i1}^*, x_{i2}^*, \ldots, x_{ij}^*)$, the associated CGP has a solution $p^* = (p_{i1}^*, p_{i2}^*, \ldots, p_{ij}^*)$ given by $p_{ij}^* = \ln x_{ij}^*$.
According to the Karush–Kuhn–Tucker (K-K-T) conditions, there is a vector $\lambda^* = (\lambda_{i1}^*, \ldots, \lambda_{il}^*)$ such that
$$\lambda_{ir}^* \ge 0$$
$$\lambda_{ir}^* \left( h_{ir}(p^*) - 1 \right) = 0 \tag{11}$$
$$\frac{\partial h_{i0}(p^*)}{\partial p_{ij}} + \sum_{r=1}^{l} \lambda_{ir}^* \frac{\partial h_{ir}(p^*)}{\partial p_{ij}} = 0 \tag{12}$$
Because $x_{ij} = e^{p_{ij}}$ for $i = 1, 2, \ldots, n$, $j = 1, 2, \ldots, m$, it follows that for $r = 1, 2, \ldots, l$
$$\frac{\partial h_{ir}(p)}{\partial p_{ij}} = \frac{\partial h_{ir}(p)}{\partial x_{ij}} \cdot \frac{\partial x_{ij}}{\partial p_{ij}} = \frac{\partial h_{ir}(p)}{\partial x_{ij}}\, e^{p_{ij}}$$
Therefore, the condition (12) is equivalent to
$$\frac{\partial h_{i0}(p^*)}{\partial x_{ij}} + \sum_{r=1}^{l} \lambda_{ir}^* \frac{\partial h_{ir}(p^*)}{\partial x_{ij}} = 0 \tag{13}$$
since $e^{p_{ij}} > 0$ and $x_{ij} > 0$. Hence (13) is equivalent to
$$x_{ij}^* \frac{\partial h_{i0}(p^*)}{\partial x_{ij}} + \sum_{r=1}^{l} \lambda_{ir}^*\, x_{ij}^* \frac{\partial h_{ir}(p^*)}{\partial x_{ij}} = 0 \tag{14}$$
Now, the terms of $F_{ir}(p)$ are of the form
$$u_{ik}(p) = c_{rik} \prod_{j=1}^{m} x_{ij}^{\alpha_{rikj}}$$
It is clear that
$$x_{ij}^* \frac{\partial h_{ir}(p^*)}{\partial x_{ij}} = \sum_{k=T_{r-1}+1}^{T_r} \alpha_{rikj}\, u_{ik}(p^*), \quad (i = 1, 2, \ldots, n; \ j = 1, 2, \ldots, m; \ r = 1, 2, \ldots, l)$$
Therefore, (14) implies
$$\sum_{k=1}^{T_0} \alpha_{0ikj}\, u_{ik}(p^*) + \sum_{r=1}^{l} \sum_{k=T_{r-1}+1}^{T_r} \lambda_{ir}^*\, \alpha_{rikj}\, u_{ik}(p^*) = 0, \quad (i = 1, 2, \ldots, n; \ j = 1, 2, \ldots, m)$$
If we divide the last equation by
$$F_{i0}(p^*) = \sum_{k=1}^{T_0} u_{ik}(p^*)$$
we obtain
$$\sum_{k=1}^{T_0} \alpha_{0ikj}\, \frac{u_{ik}(p^*)}{F_{i0}(p^*)} + \sum_{r=1}^{l} \sum_{k=T_{r-1}+1}^{T_r} \lambda_{ir}^*\, \alpha_{rikj}\, \frac{u_{ik}(p^*)}{F_{i0}(p^*)} = 0$$
Define the vector $y_{ik}^*$ by
$$y_{ik}^* = \begin{cases} \dfrac{u_{ik}(p^*)}{F_{i0}(p^*)}, & i = 1, \ldots, n; \ k = 1, \ldots, T_0, \\[2mm] \lambda_{ir}^*\, \dfrac{u_{ik}(p^*)}{F_{i0}(p^*)}, & i = 1, \ldots, n; \ k = T_{r-1}+1, \ldots, T_r; \ r = 1, \ldots, l. \end{cases}$$
Note that $y_{ik}^* > 0$ for $i = 1, \ldots, n$ and $k = 1, \ldots, T_0$, and for each $r \ge 1$, either $y_{ik}^* > 0$ for all $k$ with $T_{r-1}+1 \le k \le T_r$ or $y_{ik}^* = 0$ for all such $k$, according to whether the corresponding Karush–Kuhn–Tucker multiplier $\lambda_{ir}^*$ $(i = 1, \ldots, n; \ r = 1, \ldots, l)$ is positive or zero.
Furthermore, observe that the vector $y^*$ satisfies all of the $m$ exponent (orthogonality) constraint equations in DP, as well as the normality constraint
$$\sum_{k=1}^{T_0} y_{ik}^* = \sum_{k=1}^{T_0} \frac{u_{ik}(p^*)}{F_{i0}(p^*)} = \frac{F_{i0}(p^*)}{F_{i0}(p^*)} = 1$$
Therefore, $y^* = (y_{i1}^*, \ldots, y_{iT_0}^*, \ldots)$ is a feasible vector for DP. Hence DP is consistent.
The Karush–Kuhn–Tucker multipliers $\lambda_{ir}^*$ are related to the corresponding $\lambda_{ir}(y^*)$ of DP as follows:
$$\lambda_{ir}(y^*) = \sum_{k=T_{r-1}+1}^{T_r} y_{ik}^* = \sum_{k=T_{r-1}+1}^{T_r} \lambda_{ir}^*\, \frac{u_{ik}(p^*)}{F_{i0}(p^*)} = \lambda_{ir}^*\, \frac{F_{ir}(p^*)}{F_{i0}(p^*)}, \quad (i = 1, \ldots, n; \ r = 1, \ldots, l)$$
The Karush–Kuhn–Tucker condition (11) becomes
$$\lambda_{ir}^* \left( F_{ir}(p^*) - 1 \right) = 0 \tag{17}$$
Therefore, we obtain
$$\lambda_{ir}^*\, F_{ir}(p^*) = \lambda_{ir}^*$$
Therefore, for $r = 1, 2, \ldots, l$ and $k = T_{r-1}+1, \ldots, T_r$, we see that
$$y_{ik}^* = \lambda_{ir}^*\, \frac{u_{ik}(p^*)}{F_{i0}(p^*)} = \lambda_{ir}^*\, \frac{F_{ir}(p^*)}{F_{i0}(p^*)} \cdot \frac{u_{ik}(p^*)}{F_{ir}(p^*)} = \lambda_{ir}(y^*)\, \frac{u_{ik}(p^*)}{F_{ir}(p^*)}$$
The fact that $y^*$ is feasible for DP and $x^*$ is feasible for GP implies that
$$F_0(x^*)^n = F(y^*)^n$$
because of the primal–dual inequality. Moreover, the values of $y_{ik}^*$ $(i = 1, \ldots, n; \ r = 1, \ldots, l; \ k = 1, \ldots, T_r)$ are precisely those that force equality in the arithmetic–geometric mean inequalities that were used to obtain the duality inequality. Finally, Equation (17) shows that either $F_{ir}(p^*) = 1$ or $\lambda_{ir}^* = 0$ $(i = 1, \ldots, n; \ r = 1, \ldots, l)$. This means that the values of $y_{ik}^*$ actually force equality in the primal–dual inequality. This completes the proof. □

3.3. Geometric Programming Problem under Varying Parameters

In reality, optimization problems may contain uncertainty among the parameters that cannot be ignored. Due to the existence of uncertainty among parameters in the real world, many researchers have investigated the problem of decision-making in a fuzzy environment and management science. Different real-life problems inherently involve uncertainty in the parameters’ values. In this case, the decision-makers are not able to provide fixed/exact values of the respective parameters. However, depending on some previous experience or knowledge, the decision-makers may furnish some estimated/most likely values of the parameters that lead to vagueness or ambiguousness. The inconsistent, inappropriate, inaccurate, indeterminate knowledge and lack of information result in vague and ambiguous situations. Thus, the parameters are not precise in such cases. Briefly, one can differentiate between stochastic and fuzzy techniques for tackling the uncertain parameters. Uncertainty arises due to randomness, which can be tackled by using stochastic techniques, whereas the fuzzy approaches can be applied when uncertainty arises due to vagueness.
Various interactive and effective algorithms have been investigated for solving the GPP when the RHS of each constraint is known exactly. However, many applications of geometric programming are engineering design problems in which some of the deterministic parameters in the RHS are defined in an estimated interval of actual values. There are also many cases when the RHS may not be depicted in a precise manner. For example, in the machining economics model, the tool life may fluctuate due to different machining operations and conditions. In the proposed GPP, the uncertainty present in the data varies between some specified intervals, which differs from both types of the above-discussed uncertainties. The mathematical model of the GPP under varying parameters can be represented as follows (18):
$$\begin{aligned} \textbf{Proposed Model:}\quad F = \min_{x} \;& \sum_{k=1}^{l_0} c_{0k} \prod_{j=1}^{n} x_j^{\alpha_{0kj}} \\ \text{s.t.} \;& \sum_{k=1}^{l_i} c_{ik} \prod_{j=1}^{n} x_j^{\beta_{ikj}} \le \tilde{B}_i, \quad i = 1, 2, \ldots, m, \\ & x_j \ge 0, \quad j = 1, 2, \ldots, n. \end{aligned} \tag{18}$$
where $\tilde{B}_i \in [\underline{B}_i, \overline{B}_i]$, $i = 1, 2, \ldots, m$. The geometric programming problem (18) represents the proposed geometric programming model under varying parameters $\tilde{B}_i$ that are allowed to vary between specified bounds, i.e., the lower ($\underline{B}_i$) and upper ($\overline{B}_i$) bounds, respectively.

3.4. Proposed Bounded Solution Method for Geometric Programming Problem

Intuitively, when the input values vary within some specified intervals, the output obtained while solving the problem varies as well. Hence, the value of the objective function can be determined in a specified interval according to the varying parameters. In this paper, we develop a bounded solution scheme to obtain the lower and upper bounds of geometric programming problems under varying parameters. The GPP (18) inherently involves variation among the RHS parameters. The following consideration is taken into account while proposing the bounded solution method.
Suppose that $S = \{\tilde{B}_i \mid \underline{B}_i \le \tilde{B}_i \le \overline{B}_i, \ i = 1, 2, \ldots, m\}$ is the set of varying parameters defined between the fixed intervals. Now, for each $\tilde{B}_i \in S$, we define $\tilde{F}(\tilde{B}_i)$ as the objective function value of the geometric programming problem (18) under the given set of constraints. Assume that $\underline{F}$ and $\overline{F}$ are the minimum and maximum values of $\tilde{F}(\tilde{B}_i)$ on $S$, respectively. Therefore, mathematically, they can be expressed as follows:
$$\underline{F} = \min \{ \tilde{F}(\tilde{B}_i) \mid \tilde{B}_i \in S \} \tag{19}$$
$$\overline{F} = \max \{ \tilde{F}(\tilde{B}_i) \mid \tilde{B}_i \in S \} \tag{20}$$
With the aid of Equations (19) and (20), we can elicit the corresponding pair of two-level mathematical programming problems as follows:
$$\begin{aligned} \underline{F} = \min_{\tilde{B}_i \in S} \ \min_{x} \;& \sum_{k=1}^{l_0} c_{0k} \prod_{j=1}^{n} x_j^{\alpha_{0kj}} \\ \text{s.t.} \;& \sum_{k=1}^{l_i} c_{ik} \prod_{j=1}^{n} x_j^{\beta_{ikj}} \le \tilde{B}_i, \quad i = 1, 2, \ldots, m, \\ & x_j \ge 0, \quad j = 1, 2, \ldots, n. \end{aligned} \tag{21}$$
and
$$\begin{aligned} \overline{F} = \max_{\tilde{B}_i \in S} \ \min_{x} \;& \sum_{k=1}^{l_0} c_{0k} \prod_{j=1}^{n} x_j^{\alpha_{0kj}} \\ \text{s.t.} \;& \sum_{k=1}^{l_i} c_{ik} \prod_{j=1}^{n} x_j^{\beta_{ikj}} \le \tilde{B}_i, \quad i = 1, 2, \ldots, m, \\ & x_j \ge 0, \quad j = 1, 2, \ldots, n. \end{aligned} \tag{22}$$
The above problems (21) and (22) represent the two-level geometric programming problems under varying parameters. Since Problem (21) reveals the minimum of the best possible values on S, it would be justifiable to insert the constraints of the outer level into the inner level to simplify the two-level mathematical programming problems into the single-level mathematical programming problem, which can be presented as follows (23):
$$\begin{aligned} \underline{F} = \min_{x} \;& \sum_{k=1}^{l_0} c_{0k} \prod_{j=1}^{n} x_j^{\alpha_{0kj}} \\ \text{s.t.} \;& \sum_{k=1}^{l_i} \frac{c_{ik}}{\tilde{B}_i} \prod_{j=1}^{n} x_j^{\beta_{ikj}} \le 1, \quad i = 1, 2, \ldots, m, \\ & x_j \ge 0, \quad j = 1, 2, \ldots, n, \\ & \underline{B}_i \le \tilde{B}_i \le \overline{B}_i, \quad i = 1, 2, \ldots, m. \end{aligned} \tag{23}$$
However, in Problem (23), the value of x j is not known. Thus, it is necessary to obtain the dual of Problem (23), which can be stated as follows (24):
$$\begin{aligned} \textbf{Model A:}\quad \underline{F} = \max_{y} \;& \prod_{k=1}^{l_0} \left(\frac{c_{0k}}{y_{0k}}\right)^{y_{0k}} \prod_{i=1}^{m} \prod_{k=1}^{l_i} \left(\frac{c_{ik}\, y_{i0}}{\overline{B}_i\, y_{ik}}\right)^{y_{ik}} \\ \text{s.t.} \;& \sum_{k=1}^{l_0} y_{0k} = 1, \\ & \sum_{k=1}^{l_0} \alpha_{0kj}\, y_{0k} + \sum_{i=1}^{m} \sum_{k=1}^{l_i} \beta_{ikj}\, y_{ik} = 0, \quad j = 1, 2, \ldots, n, \\ & y_{ik} \ge 0, \quad \forall\, i, k. \end{aligned} \tag{24}$$
Finally, Model A is a nonlinear programming problem and can be solved by using some optimizing software.
Problem (22) gives the maximum among the best possible objective values over all decision variables. In order to find the upper bound of the geometric programming problem, the dual of the inner problem of Problem (22) must be obtained, using the fact that in geometric programming the primal problem and its dual have the same optimal objective value. By the strong duality theory of geometric programming, the dual of the inner problem of (22) is transformed into a maximization-type problem so as to match the maximization type of the outer problem of (22). Hence, Problem (22) can be re-expressed as follows:
$$\begin{aligned} \overline{F} = \max_{\tilde{B}_i \in S} \ \max_{y} \;& \prod_{k=1}^{l_0} \left(\frac{c_{0k}}{y_{0k}}\right)^{y_{0k}} \prod_{i=1}^{m} \prod_{k=1}^{l_i} \left(\frac{c_{ik}\, y_{i0}}{\tilde{B}_i\, y_{ik}}\right)^{y_{ik}} \\ \text{s.t.} \;& \sum_{k=1}^{l_0} y_{0k} = 1, \\ & \sum_{k=1}^{l_0} \alpha_{0kj}\, y_{0k} + \sum_{i=1}^{m} \sum_{k=1}^{l_i} \beta_{ikj}\, y_{ik} = 0, \quad j = 1, 2, \ldots, n, \\ & y_{ik} \ge 0, \quad \forall\, i, k, \\ & \tilde{B}_i \in [\underline{B}_i, \overline{B}_i], \quad i = 1, 2, \ldots, m. \end{aligned} \tag{25}$$
Since Problem (25) represents the maximum of the best possible values on $S$, it is justifiable to insert the constraints of the outer level into the inner level to simplify the two-level mathematical programming problem into the single-level mathematical programming problem (26), which can be stated as follows:
$$\begin{aligned} \textbf{Model B:}\quad \overline{F} = \max_{y} \;& \prod_{k=1}^{l_0} \left(\frac{c_{0k}}{y_{0k}}\right)^{y_{0k}} \prod_{i=1}^{m} \prod_{k=1}^{l_i} \left(\frac{c_{ik}\, y_{i0}}{\underline{B}_i\, y_{ik}}\right)^{y_{ik}} \\ \text{s.t.} \;& \sum_{k=1}^{l_0} y_{0k} = 1, \\ & \sum_{k=1}^{l_0} \alpha_{0kj}\, y_{0k} + \sum_{i=1}^{m} \sum_{k=1}^{l_i} \beta_{ikj}\, y_{ik} = 0, \quad j = 1, 2, \ldots, n, \\ & y_{ik} \ge 0, \quad \forall\, i, k. \end{aligned} \tag{26}$$
Model B is a nonlinear constrained programming problem and can be solved using several efficient methods. Thus, Model A and Model B provide the lower and upper bounds of the geometric programming problem under varying parameters and calculate the objective value directly, without violating the optimal range in which the objective values should lie. A comprehensive study of the relationship between the globally optimal cost and the optimal dual value can be found in [4].
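The mechanics of the bounded scheme can be illustrated on a toy problem of our own (it is not one of the paper's examples): minimize $1/(x_1 x_2)$ subject to $x_1 + x_2 \le \tilde{B}$ with $\tilde{B} \in [2, 4]$. The inner minimum is $F(\tilde{B}) = 4/\tilde{B}^2$, which decreases in $\tilde{B}$, so the lower bound $\underline{F}$ comes from the largest RHS (the loosest constraint, as Model A uses $\overline{B}_i$) and the upper bound $\overline{F}$ from the smallest RHS (as Model B uses $\underline{B}_i$).

```python
# Toy illustration (ours) of the bounded solution scheme.
# Inner problem: minimize 1/(x1*x2) subject to x1 + x2 <= B, x1, x2 > 0.
# By AM-GM the inner optimum is x1 = x2 = B/2, giving F(B) = 4 / B**2.

def inner_optimum(B, grid=20_000):
    """Brute-force the inner minimum of 1/(x1*x2) with x1 + x2 <= B."""
    best = float("inf")
    for i in range(1, grid):
        x1 = B * i / grid
        x2 = B - x1            # the budget constraint is active at the optimum
        best = min(best, 1.0 / (x1 * x2))
    return best

B_lo, B_hi = 2.0, 4.0
F_lower = inner_optimum(B_hi)   # loosest constraint -> 4 / 16 = 0.25
F_upper = inner_optimum(B_lo)   # tightest constraint -> 4 / 4  = 1.0
print(F_lower, F_upper)

# Any intermediate RHS value yields an objective inside [F_lower, F_upper]:
assert F_lower <= inner_optimum(3.0) <= F_upper
```

The last assertion is the whole point of the method: once $\underline{F}$ and $\overline{F}$ are known, every realization of the varying RHS produces an objective value inside that interval, so no separate sensitivity analysis is needed.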

4. Computational Study

The proposed bounded solution method for the geometric programming problem under varying parameters was implemented in different real-life applications. The following two examples were adopted from engineering problems, and the method was also applied to the system reliability optimization problem. All the numerical illustrations were coded in AMPL and solved using the solver CONOPT through the NEOS server (version 5.0), an online facility for solving optimization problems provided by the Wisconsin Institutes for Discovery at the University of Wisconsin in Madison [28].
Example 1
([4]). The cofferdam is an engineering design optimization problem. A cofferdam is a temporary structure enclosing a submerged area so that a permanent structure can be built on an allocated site. The cofferdam operates in a random environment owing to transitions in the surrounding water levels. The architect designs a dam of height $x_1$, length $x_2$, breadth $x_3$, and total required perimeter $x_4$ and intends to estimate the most promising total cost for making decisions. The RHS parameters can be of any simple dimensions, such as area, volume, etc., which are not entirely certain. They are no longer crisp or deterministic, but the allowable lower and upper bounds on each area/volume are determined in a closed interval. Thus, the use of varying parameters is quite worthwhile, and the decision under such variation will be helpful in determining the range of optimal outcomes. Figure 1 depicts an illustrative example of a cofferdam.
Thus, the equivalent mathematical programming problem with varying parameters is given as follows (27):
$$\begin{aligned} F = \min_{x} \;& 2\, x_1^{0.9} x_2^{1.5} x_3^{1} x_4^{1.6} + 4\, x_1^{1} x_2^{1} x_3^{0.1} x_4^{1} \\ \text{s.t.} \;& 2\, x_1^{2} x_2^{1} x_3^{2} x_4 + 1.6\, x_1 x_3 x_4^{2} \le \tilde{B}_1, \\ & 1.9\, x_1^{2} x_2^{1.4} x_3 x_4 + 3.1\, x_1^{2.2} x_4 \le \tilde{B}_2, \\ & x_1, x_2, x_3, x_4 \ge 0. \end{aligned} \tag{27}$$
where $\tilde{B}_1 \in (3, 3.2)$ and $\tilde{B}_2 \in (2, 2.4)$ are the varying parameters. Since all the parameters are crisp except the RHS, Problems (23) and (26) can be utilized to obtain the lower and upper bounds of the objective value in Problem (27). According to Problems (23) and (26), the formulations of the lower and upper bounds for Problem (27) can be presented as follows:
Model A F ̲ = Minimize y 2 y 01 y 01 4 y 02 y 02 2 y 10 B ˜ 1 y 11 y 11 1.6 y 10 B ˜ 1 y 12 y 12 1.9 y 20 B ˜ 2 y 21 y 21 3.1 y 20 B ˜ 2 y 22 y 22 s . x . y 01 + y 02 = 1 , y 01 + y 02 2 y 11 + y 12 + 2 y 21 + 2.2 y 22 = 0 , y 01 y 02 y 11 + 1.4 y 21 = 0 , y 01 + y 02 + 2 y 11 + y 12 + y 21 = 0 , y 01 y 02 + y 11 + 2 y 12 + y 21 + y 22 = 0 , y 01 , y 02 , y 11 , y 21 , y 12 , y 22 0 3 B ˜ 1 3.2 , 2 B ˜ 2 2.4 , i = 1 , 2 , , m .
and
Model B F ¯ = Maximize y 2 y 01 y 01 4 y 02 y 02 2 y 10 B ˜ 1 y 11 y 11 1.6 y 10 B ˜ 1 y 12 y 12 1.9 y 20 B ˜ 2 y 21 y 21 3.1 y 20 B ˜ 2 y 22 y 22 s . x . y 01 + y 02 = 1 , y 01 + y 02 2 y 11 + y 12 + 2 y 21 + 2.2 y 22 = 0 , y 01 y 02 y 11 + 1.4 y 21 = 0 , y 01 + y 02 + 2 y 11 + y 12 + y 21 = 0 , y 01 y 02 + y 11 + 2 y 12 + y 21 + y 22 = 0 , y 01 , y 02 , y 11 , y 21 , y 12 , y 22 0 3 B ˜ 1 3.2 , 2 B ˜ 2 2.4 , i = 1 , 2 , , m .
Thus, Problems (28) and (29) are the required lower and upper bound models for the geometric programming problem (27). Upon solving the problem at zero degree of difficulty, the upper and lower bounds on the objective function are obtained as F ¯ = 8.5429 and F ̲ = 4.9271 , respectively, whereas the objective value at B ˜ 1 = 3.2 and B ˜ 2 = 2.4 is F ˜ = 5.6212 . The obtained objective value therefore lies within the range defined by the lower and upper bounds, which shows that it is justified to evaluate the objective at the maximum RHS under variations.
Example 2
([4]). This illustration concerns the design of a journal bearing. The design of the journal bearing is an inverse problem, in which the eccentricity ratio and attitude angle are obtained for a given load and speed. The engineers may lack experience in modeling this new type of journal bearing. The volume of steel, the thickness of the intermediate layer and nickel barrier, and the dimensions of the plated overlay of the journal bearing are assumed to be unknown. Thus, the values of these parameters are specified within closed intervals, i.e., between lower and upper bounds. Hence, the solution outcomes will also vary, with the optimal objective guaranteed to lie in the corresponding interval. Figure 2 represents the structure of the journal bearing used in this example. In short, some parameters of the model are only approximately known and are estimated by the engineers. Suppose that x 1 is the radial clearance, x 2 the fluid force, x 3 the diameter, x 4 the rotation speed, and x 5 the length-to-diameter ratio.
The following mathematical programming formulation can depict the design problem as a geometric programming problem (30):
F = Minimize x 0.5 x 1 2 x 2 x 4 x 5 + 1.1 x 1 1 x 2 1 x 3 1 s . x . 8.4 x 1 x 2 1 x 3 1 x 4 1 x 5 B ˜ 1 0.5 x 2 x 3 + x 1 x 4 1 x 5 1 + 1.6 x 3 x 4 B ˜ 2 x 1 , x 2 , x 3 , x 4 , x 5 0 .
where B ˜ 1 ∈ [ 4 , 4.2 ] and B ˜ 2 ∈ [ 0 , 1 ] are the varying parameters. Since all the parameters are crisp except the RHS, Problems (23) and (26) can be utilized to obtain the upper and lower bounds of the objective value in Problem (30). According to Problems (23) and (26), the formulations of the upper and lower bounds for Problem (30) can be presented as follows (31):
Model A F ̲ = Minimize y 0.5 y 01 y 01 1.1 y 02 y 02 8.4 B ˜ 1 y 11 8.4 B ˜ 2 y 11 0.5 y 20 y 21 y 21 y 20 y 22 y 22 1.6 y 20 y 23 y 23 s . x . y 01 + y 02 = 1 , 2 y 01 y 02 + y 11 + y 22 = 0 , y 01 y 02 y 11 + y 21 = 0 , y 02 y 11 + y 21 + y 23 = 0 , y 01 y 11 y 22 + y 23 = 0 , y 01 + y 11 y 22 = 0 , y 01 , y 02 , y 11 , y 21 , y 22 , y 23 0 4 B ˜ 1 4.2 , 0 B ˜ 2 1 , i = 1 , 2 , , m .
where y 21 + y 22 + y 23 = y 20 and the upper bound can be stated as follows (32):
Model B F ¯ = Maximize y 0.5 y 01 y 01 1.1 y 02 y 02 8.4 B ˜ 1 y 11 8.4 B ˜ 2 y 11 0.5 y 20 y 21 y 21 y 20 y 22 y 22 1.6 y 20 y 23 y 23 s . x . y 01 + y 02 = 1 , 2 y 01 y 02 + y 11 + y 22 = 0 , y 01 y 02 y 11 + y 21 = 0 , y 02 y 11 + y 21 + y 23 = 0 , y 01 y 11 y 22 + y 23 = 0 , y 01 + y 11 y 22 = 0 , y 01 , y 02 , y 11 , y 21 , y 22 , y 23 0 4 B ˜ 1 4.2 , 0 B ˜ 2 1 , i = 1 , 2 , , m .
where y 21 + y 22 + y 23 = y 20 .
The above Problems (31) and (32) provide the required lower and upper bound models for Problem (30). Both are concave programming problems with linear constraints. Upon solving the problem at zero degree of difficulty, the upper and lower bounds on the objective function are obtained as F ¯ = 4.314 and F ̲ = 3.045 , respectively, whereas the objective value at B ˜ 1 = 4.2 and B ˜ 2 = 1 is F ˜ = 3.561 . The obtained objective value therefore lies within the range of the upper and lower bounds, which shows that it is justified to evaluate the objective at the maximum RHS under variations.
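The containment claims of both examples can be checked mechanically from the values reported above (lower bound, objective at the maximal RHS, upper bound):

```python
# Reported results for Examples 1 and 2, as stated in the text:
# (lower bound, objective at the maximal RHS, upper bound).
reported = {
    "cofferdam (Example 1)":       (4.9271, 5.6212, 8.5429),
    "journal bearing (Example 2)": (3.045, 3.561, 4.314),
}
for name, (f_low, f_mid, f_high) in reported.items():
    assert f_low <= f_mid <= f_high, name
    print(f"{name}: {f_low} <= {f_mid} <= {f_high}")
```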

4.1. Application to System Reliability Optimization

Assume a system having n components connected in series. Suppose r i ( i = 1 , 2 , , n ) represents the reliability of the i-th component of the system, R s ( r 1 , r 2 , , r n ) is the reliability of the whole series system, and C s ( r 1 , r 2 , , r n ) is the total cost of the n components associated with the system reliability. The cost of the associated system is rarely known exactly; allowing the cost to vary makes it easier to keep the framework functioning smoothly. The lower and upper bounds obtained on the cost objective capture the variation in the total system cost and help with allocating the budget for maintenance or renovation. In the same manner, minimizing the system cost under varying reliability of the whole system is a worthwhile task: the system cost should be minimized without degrading the system reliability, so as to ensure the long-term performance of the components. Thus, we considered the system reliability to vary within a specified interval, bounded above and below. This situation is quite common owing to uncertainty in the failure of the components. In real-life scenarios, minimizing the total system cost while maintaining the system reliability is a prominent way of modeling reliability optimization problems (see [17,29,30]). Therefore, the mathematical model for the minimization of system cost under varying system reliability takes the form of a geometric programming problem and can be represented as follows (33):
F = Minimize C s ( r 1 , r 2 , , r n ) = ∑ i = 1 n C i r i α i
s.t. r 1 × r 2 × ⋯ × r n = ∏ i = 1 n r i ≥ R ˜ s ,
R ̲ s ≤ R ˜ s ≤ R ¯ s ,
0 ≤ r i ≤ 1 , i = 1 , 2 , , n .
where α i is the acceptable tolerance linked with the i-th component. We considered three components connected in series; the relevant data are summarized in Table 2.
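With the Table 2 data, the ingredients of Problem (33) can be evaluated directly. The sketch below assumes the cost model C_s = Σ C_i r_i^α_i exactly as printed (the sign convention on α_i is our reading of the flattened formula, not independently verified); the component reliabilities r are illustrative values, not outputs of the optimization:

```python
import math

# Table 2 data: component cost coefficients and tolerances.
C     = [150, 210, 270]
alpha = [20, 15, 10]

def series_reliability(r):
    """Reliability of a series system: product of component reliabilities."""
    return math.prod(r)

def system_cost(r):
    """Total cost C_s = sum_i C_i * r_i**alpha_i, as printed in (33)."""
    return sum(c * ri ** a for c, ri, a in zip(C, r, alpha))

r = [0.97, 0.96, 0.95]                  # illustrative component reliabilities
print(round(series_reliability(r), 4))  # product of the three reliabilities
print(round(system_cost(r), 2))
```

For these illustrative reliabilities, the series reliability falls inside the interval [0.6358, 0.9776] used in the model.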
Since all the parameters are crisp except the system reliability, Problems (23) and (26) can be utilized to obtain the upper and lower bounds of the objective value in Problem (33). According to Problems (23) and (26), the formulations of the upper and lower bounds for Problem (33) can be presented as follows (34):
Model A F ̲ = Minimize y 150 y 01 y 01 210 y 02 y 02 270 y 03 y 03 1 R ˜ s y 11 y 11 1 R ˜ s y 12 y 12 1 R ˜ s y 13 y 13 s . x . y 01 + y 02 + y 03 = 1 , 20 y 01 + 15 y 02 + 10 y 03 + y 11 + y 12 + y 13 = 0 , y 01 y 02 y 03 + y 11 + y 12 + y 13 = 0 , y 01 , y 02 , y 03 , y 11 , y 12 , y 13 0 0.6358 R ˜ s 0.9776 , i = 1 , 2 , 3 .
whereas the upper bound can be stated as follows (35):
Model B F ¯ = Maximize y 150 y 01 y 01 210 y 02 y 02 270 y 03 y 03 1 R ˜ s y 11 y 11 1 R ˜ s y 12 y 12 1 R ˜ s y 13 y 13 s . x . y 01 + y 02 + y 03 = 1 , 20 y 01 + 15 y 02 + 10 y 03 + y 11 + y 12 + y 13 = 0 , y 01 y 02 y 03 + y 11 + y 12 + y 13 = 0 , y 01 , y 02 , y 03 , y 11 , y 12 , y 13 0 0.6358 R ˜ s 0.9776 , i = 1 , 2 , 3 .
The above Problems (34) and (35) provide the required lower and upper bound models for Problem (33). Upon solving the problem at zero degree of difficulty, the upper and lower bounds on the system cost are obtained as C s ¯ = 521.95 and C s ̲ = 216.35 , respectively, whereas the system cost at R ˜ s = 0.88 is C s ˜ = 351.29 . The obtained system cost therefore lies within the range of its upper and lower bounds, which shows that it is justified to minimize the system cost at the maximum system reliability under variations.
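The two-level construction can also be imitated numerically. The sketch below assumes the primal cost model of Problem (33) as printed (C_s = Σ C_i r_i^α_i with the product constraint active at the optimum) and solves the inner problem by bisection on the KKT multiplier. Because the sign conventions here are our reconstruction of the flattened formulas rather than the paper's dual models (34) and (35), the interval it produces need not coincide with the reported bounds; what it does show is the same monotone behavior, i.e., the optimal cost grows with R̃_s, so the bounds are attained at the endpoints of the reliability interval:

```python
import math

# Table 2 data and the varying reliability interval.
C, alpha = [150, 210, 270], [20, 15, 10]
R_lo, R_hi = 0.6358, 0.9776

def inner_cost(R):
    """Inner problem: min sum_i C_i*r_i**alpha_i  s.t.  prod_i r_i = R.
    KKT gives C_i*alpha_i*r_i**alpha_i = lam, i.e.
    r_i = (lam/(C_i*alpha_i))**(1/alpha_i); bisect on lam so the
    product of the r_i equals R."""
    def prod_r(lam):
        return math.prod((lam / (c * a)) ** (1.0 / a) for c, a in zip(C, alpha))
    lo, hi = 1e-9, 1e9
    for _ in range(200):
        mid = (lo + hi) / 2
        if prod_r(mid) < R:   # the product grows with lam
            lo = mid
        else:
            hi = mid
    lam = (lo + hi) / 2
    r = [(lam / (c * a)) ** (1.0 / a) for c, a in zip(C, alpha)]
    return sum(c * ri ** a for c, ri, a in zip(C, r, alpha))

cost_lo = inner_cost(R_lo)    # outer level: reliability at its lower limit
cost_hi = inner_cost(R_hi)    # outer level: reliability at its upper limit
cost_mid = inner_cost(0.88)
print(round(cost_lo, 2), round(cost_mid, 2), round(cost_hi, 2))
```

In this sketch, the optimal cost is monotone in R̃_s, so sweeping the endpoints of the reliability interval yields the cost range directly, which is exactly the role played by Models A and B above.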

4.2. Analyses of Computational Complexity and Discussions

The proposed bounded solution method captures the behavior of varying parameters and provides an interval-based solution for the objective function. Uncertain parameters may appear in several forms, such as randomness, fuzziness, or other aspects of uncertainty. Uncertainty arising from vagueness can be dealt with using fuzzy approaches, whereas stochastic techniques apply when the uncertainty involves randomness among the parameters. Contrary to other uncertain optimization approaches, the developed approach incurs comparatively less computational effort (e.g., no mathematical machinery is needed to derive a crisp or deterministic version of fuzzy or random parameters), since no deterministic equivalent of the problem has to be obtained for such varying geometric programming problems. The appeal of the proposed method is that sensitivity (post-optimality) analysis does not need to be performed, because the continuous variation of the parametric values directly produces the range of optimal objective values from the interval parameters. Thus, the proposed solution approach is an efficient decision-making tool when dealing with uncertain parameters that are neither fuzzy nor stochastic.
The conventional geometric programming problem with constant parameters is thus generalized to interval parameters. The central idea is to determine the lower and upper bounds of the objective range by applying a two-level mathematical programming technique to geometric programming problems. With the aid of the strong duality theorem, the two-level geometric programming problems are converted into a pair of one-level geometric programming problems for the computational study. When all the varying parameters degenerate to constants, the two-level geometric programming problem reduces to the conventional geometric programming problem. In general, an interval geometric programming problem may be infeasible for some values within the specified range of the varying parameters. The proposed method avoids this complication: it obtains the lower and upper bounds of the feasible solutions directly and does not examine the range of values that results in infeasibility. Furthermore, formulating two-level geometric programming problems allows the lower and upper bounds of the objective values to be determined. However, the resulting mathematical programs are nonlinear, which may make large-scale problems difficult to solve. A comparative study is presented in Table 3.
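The strong-duality step can be illustrated on a hypothetical zero-degree-of-difficulty GP (not one of the paper's models): minimize x1 + x2 subject to B·x1⁻¹·x2⁻¹ ≤ 1. Normality and orthogonality force all dual weights to 1/2, so the dual objective can be evaluated in closed form and matches the primal optimum 2√B:

```python
import math

def primal_opt(B):
    # Optimum of min x1 + x2 s.t. x1*x2 >= B, attained at x1 = x2 = sqrt(B).
    return 2.0 * math.sqrt(B)

def dual_value(B):
    # Dual weights fixed by normality (y01 + y02 = 1) and orthogonality
    # (y01 - y11 = 0, y02 - y11 = 0): y01 = y02 = y11 = 1/2.
    y01 = y02 = y11 = 0.5
    y10 = y11  # sum of the weights in the single constraint group
    return (1 / y01) ** y01 * (1 / y02) ** y02 * (B * y10 / y11) ** y11

for B in (1.0, 2.0, 4.0):
    assert math.isclose(primal_opt(B), dual_value(B))
print("primal and dual optima agree for every RHS tested")
```

Because the dual value equals the primal optimum at every fixed RHS, maximizing/minimizing the dual over the RHS interval yields valid bounds on the primal objective, which is the mechanism exploited by the two-level formulation.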
The presented work can be described as empirical case research that fills several gaps [8,9,17,19,29], such as instantaneous variation among parameters, two-level mathematical programming, duality theory in the GPP, and automatic post-optimality analysis. In system reliability modeling, various parameters can be taken as varying within specified intervals. For example, the cofferdam, shaft, journal bearing, etc., are raw parts that serve as building blocks of various products and machines, so their parameters can be treated as uncertain and, using suitable tools, converted into crisp values. The literature offers fuzzy and random parameters to deal with vagueness and randomness. Here, instead of taking vague or random parameters, we define the parameters under continuous variation bounded by lower and upper limits; such variation can be tackled with the two-level mathematical programming techniques discussed in this paper. Additionally, sensitivity and post-optimality analyses of the obtained solutions are waived by the working procedure of the proposed approach.
In the future, a solution method that allows all the parameters in the geometric programming formulation to vary is needed to ensure solvability. Values near the lower and upper bounds typically have a low probability of occurrence. If the distributions of the varying data are known, the distribution of the objective function can be derived in a stochastic environment, which is more realistic, and scenarios can be generated for subsequent decision-making. This lays down another direction for future research: deriving the distribution of the objective function from the distributions of the varying parameters.
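This closing suggestion can be sketched on the same hypothetical toy GP used earlier (min x1 + x2 s.t. x1·x2 ≥ B, with optimal value 2√B): if B is random, Monte Carlo sampling yields an empirical distribution of the optimal objective, rather than just an interval:

```python
import math, random, statistics

random.seed(0)
# B ~ Uniform(1, 4); the optimal value of the toy GP is F(B) = 2*sqrt(B).
samples = [2.0 * math.sqrt(random.uniform(1.0, 4.0)) for _ in range(10_000)]

mean = statistics.fmean(samples)
qs = statistics.quantiles(samples, n=20)   # 5%-step cut points
q05, q95 = qs[0], qs[-1]
print(round(mean, 3), round(q05, 3), round(q95, 3))  # all inside [2, 4]
```

The empirical mean and quantiles all fall inside the deterministic interval [2, 4], but the distribution conveys how likely the extreme values actually are, which the interval alone cannot.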

5. Conclusions

The geometric programming problem is an integral part of mathematical programming and has real-life applications in many engineering problems, such as gravel-box design, bar–truss structure design, and system reliability optimization. The concept of varying parameters in the objective function is discussed because uncertainty critically affects the formulation of engineering problems. The present work introduces an interval-based solution approach for finding the upper and lower limits of the objective function under varying parameters. The outer- and inner-level geometric programming problems are transformed into single-level mathematical programming problems. The outcomes are summarized in the numerical illustrations and are observed to lie in the predicted intervals. The system reliability optimization problem also provides evidence of the successful implementation of the discussed approach and its dynamic solution results. The minimum system cost is obtained at the utmost system reliability and falls within the lower and upper bounds of the system cost.
Conventional sensitivity analysis is no longer required owing to the flexible nature of the proposed solution method. The approach accommodates abrupt fluctuations of the parameters within given intervals, for which bounds on the objective function are obtained directly. It also simplifies computation compared with methods based on fuzzy, stochastic, or other uncertain parameter forms, whose solution procedures are comparatively more complex. The developed approach may be extended in future research to stochastic programming, bi-level or multilevel programming, and various engineering problems with real-life applications.

Author Contributions

Conceptualization, A.E.S., F.A. and S.A.; methodology, M.A. and S.A.; formal analysis, M.A. and F.A.; writing—original draft preparation, M.A., S.A. and A.E.S.; writing—review and editing, S.A., A.E.S. and F.A.; project administration, S.A. and I.A.; funding acquisition, S.A. All authors have read and agreed to the published version of the manuscript.

Funding

The authors extend their appreciation to King Saud University for funding this work through the Researchers Supporting Project (RSP-2021/387), King Saud University, Riyadh, Saudi Arabia.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors are very thankful to the Anonymous Reviewers and Editors for their insightful comments, which made the manuscript clearer and more readable. The authors extend their appreciation to King Saud University for funding this work through the Researchers Supporting Project (RSP-2021/387), King Saud University, Riyadh, Saudi Arabia.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Duffin, R.; Peterson, E.L. Duality theory for geometric programming. SIAM J. Appl. Math. 1966, 14, 1307–1349. [Google Scholar] [CrossRef]
  2. Duffin, R.J. Linearizing geometric programs. SIAM Rev. 1970, 12, 211–227. [Google Scholar] [CrossRef] [Green Version]
  3. Duffin, R.J.; Peterson, E.L. Reversed geometric programs treated by harmonic means. Indiana Univ. Math. J. 1972, 22, 531–550. [Google Scholar] [CrossRef]
  4. Rao, S.S. Engineering Optimization: Theory and Practice; John Wiley & Sons: Hoboken, NJ, USA, 2019. [Google Scholar]
  5. Liu, S.T. Fuzzy measures for profit maximization with fuzzy parameters. J. Comput. Appl. Math. 2011, 236, 1333–1342. [Google Scholar] [CrossRef] [Green Version]
  6. Lu, T.; Liu, S.T. Fuzzy nonlinear programming approach to the evaluation of manufacturing processes. Eng. Appl. Artif. Intell. 2018, 72, 183–189. [Google Scholar] [CrossRef]
  7. Ahmad, F.; Adhami, A.Y. Total cost measures with probabilistic cost function under varying supply and demand in transportation problem. Opsearch 2019, 56, 583–602. [Google Scholar] [CrossRef]
  8. Chakraborty, D.; Chatterjee, A.; Aishwaryaprajna. Multi-objective Fuzzy Geometric Programming Problem Using Fuzzy Geometry. In Trends in Mathematics and Computational Intelligence; Springer: Berlin/Heidelberg, Germany, 2019; pp. 123–129. [Google Scholar]
  9. Garg, H.; Rani, M.; Sharma, S.; Vishwakarma, Y. Intuitionistic fuzzy optimization technique for solving multi-objective reliability optimization problems in interval environment. Expert Syst. Appl. 2014, 41, 3157–3167. [Google Scholar] [CrossRef]
  10. Islam, S.; Roy, T.K. Modified geometric programming problem and its applications. J. Appl. Math. Comput. 2005, 17, 121–144. [Google Scholar] [CrossRef]
  11. Islam, S.; Roy, T.K. A new fuzzy multi-objective programming: Entropy based geometric programming and its application of transportation problems. Eur. J. Oper. Res. 2006, 173, 387–404. [Google Scholar] [CrossRef]
  12. Islam, S.; Mandal, W.A. Preliminary Concepts of Geometric Programming (GP) Model. In Fuzzy Geometric Programming Techniques and Applications; Springer: Berlin/Heidelberg, Germany, 2019; pp. 1–25. [Google Scholar]
  13. Islam, S.; Mandal, W.A. Geometric Programming Problem Under Uncertainty. In Fuzzy Geometric Programming Techniques and Applications; Springer: Berlin/Heidelberg, Germany, 2019; pp. 287–330. [Google Scholar]
  14. Islam, S.; Mandal, W.A. Fuzzy Unconstrained Geometric Programming Problem. In Fuzzy Geometric Programming Techniques and Applications; Springer: Berlin/Heidelberg, Germany, 2019; pp. 133–153. [Google Scholar]
  15. Islam, S.; Mandal, W.A. Intuitionistic and Neutrosophic Geometric Programming Problem. In Fuzzy Geometric Programming Techniques and Applications; Springer: Berlin/Heidelberg, Germany, 2019; pp. 331–355. [Google Scholar]
  16. Khorsandi, A.; Cao, B.Y.; Nasseri, H. A New Method to Optimize the Satisfaction Level of the Decision Maker in Fuzzy Geometric Programming Problems. Mathematics 2019, 7, 464. [Google Scholar] [CrossRef] [Green Version]
  17. Mahapatra, G.; Roy, T.K. Fuzzy multi-objective mathematical programming on reliability optimization model. Appl. Math. Comput. 2006, 174, 643–659. [Google Scholar] [CrossRef]
  18. Negi, G.; Kumar, A.; Pant, S.; Ram, M. Optimization of complex system reliability using hybrid grey wolf optimizer. Decis. Mak. Appl. Manag. Eng. 2021, 4, 241–256. [Google Scholar] [CrossRef]
  19. Roustaee, M.; Kazemi, A. Multi-objective stochastic operation of multi-microgrids constrained to system reliability and clean energy based on energy management system. Electr. Power Syst. Res. 2021, 194, 106970. [Google Scholar] [CrossRef]
  20. Zolfaghari, S.; Mousavi, S.M. A novel mathematical programming model for multi-mode project portfolio selection and scheduling with flexible resources and due dates under interval-valued fuzzy random uncertainty. Expert Syst. Appl. 2021, 182, 115207. [Google Scholar] [CrossRef]
  21. Sedaghat, N.; Ardakan, M.A. G-mixed: A new strategy for redundant components in reliability optimization problems. Reliab. Eng. Syst. Saf. 2021, 216, 107924. [Google Scholar] [CrossRef]
  22. Meng, Z.; Ren, S.; Wang, X.; Zhou, H. System reliability-based design optimization with interval parameters by sequential moving asymptote method. Struct. Multidiscip. Optim. 2021, 63, 1767–1788. [Google Scholar] [CrossRef]
  23. Kugele, A.S.H.; Ahmed, W.; Sarkar, B. Geometric programming solution of second degree difficulty for carbon ejection controlled reliable smart production system. RAIRO Oper. Res. 2022, 56, 1013–1029. [Google Scholar] [CrossRef]
  24. Son, Y.; Mukherjee, S.; Mallik, R.; Majmunović, B.; Dutta, S.; Johnson, B.; Maksimović, D.; Seo, G.S. Levelized Cost of Energy-Oriented Modular String Inverter Design Optimization for PV Generation System Using Geometric Programming. IEEE Access 2022, 10, 27561–27578. [Google Scholar] [CrossRef]
  25. Shen, K.; Yu, W.; Chen, X.; Khosravirad, S.R. Energy Efficient HARQ for Ultrareliability via a Novel Outage Probability Bound and Geometric Programming. IEEE Trans. Wirel. Commun. 2022. [Google Scholar] [CrossRef]
  26. Rajamony, R.; Wang, S.; Navaratne, R.; Ming, W. Multi-objective design of single-phase differential buck inverters with active power decoupling. IEEE Open J. Power Electron. 2022, 3, 105–114. [Google Scholar] [CrossRef]
  27. Singh, G.; Singh, A.K. Optimizing multi-VM migration by allocating transfer and compression rate using geometric programming. Simul. Model. Pract. Theory 2021, 106, 102201. [Google Scholar] [CrossRef]
  28. Server, N. State-of-the-Art Solvers for Numerical Optimization. 2016. Available online: https://neos-server.org/neos/ (accessed on 22 June 2022).
  29. Kundu, T.; Islam, S. Neutrosophic goal geometric programming problem and its application to multi-objective reliability optimization model. Int. J. Fuzzy Syst. 2018, 20, 1986–1994. [Google Scholar] [CrossRef]
  30. Ahmad, F.; Adhami, A.Y. Spherical Fuzzy Linear Programming Problem. In Decision Making with Spherical Fuzzy Sets; Springer: Berlin/Heidelberg, Germany, 2021; pp. 455–472. [Google Scholar]
Figure 1. Illustrative figure of a cofferdam.
Figure 2. Illustrative figure of a journal bearing.
Table 1. Comparison between monomial, polynomial, and posynomial.

| | Monomial | Polynomial | Posynomial |
|---|---|---|---|
| (1) Terms | Deals with a single term | Has one or more terms | Has one or more terms |
| (2) Sum | Sum of monomials is not a monomial | Sum of polynomials is a polynomial | Sum of posynomials is a posynomial |
| (3) Subtraction | Subtraction of monomials is not a monomial | Subtraction of polynomials is a polynomial | Subtraction of posynomials is not a posynomial |
| (4) Multiplication | Multiplication of monomials is a monomial | Multiplication of polynomials is a polynomial | Multiplication of posynomials is a posynomial |
| (5) Division by a monomial | Division of a monomial by another monomial is a monomial | Division of a polynomial by a monomial is a polynomial | Division of a posynomial by a monomial is a posynomial |
| (6) Mathematical expression | F ( x ) = c x 1 a 1 x 2 a 2 ⋯ x n a n , c > 0 | F ( x ) = ∑ i = 1 m c i x 1 a 1 i x 2 a 2 i ⋯ x n a n i | F ( x ) = ∑ i = 1 m c i x 1 a 1 i x 2 a 2 i ⋯ x n a n i , c i > 0 |
| (7) Examples | 0.84, 9 a 3 b 9 , 17 c / a | 0.43, 9 a 3 b 9 , 5 c / a , 6 a 8 c | 0.59, 9 a 3 b 9 , 17 c / a + a 7 b 4 |
Table 2. Input data for the system reliability optimization problem.

| C 1 | C 2 | C 3 | α 1 | α 2 | α 3 | R ̲ s | R ˜ s | R ¯ s |
|---|---|---|---|---|---|---|---|---|
| 150 | 210 | 270 | 20 | 15 | 10 | 0.6358 | 0.6358 ≤ R ˜ s ≤ 0.9776 | 0.9776 |
Table 3. Comparison between the proposed method and traditional methods.

| Proposed Method | Traditional Methods |
|---|---|
| (1) Deals with varying parameters | No scope for dealing with such parameters |
| (2) No need to consider fuzzy or random parameters when dealing with uncertainty | May require fuzzy or random parameters when dealing with uncertainty |
| (3) No need to obtain a crisp or deterministic version of the uncertain model | Requires a crisp or deterministic version of the uncertain model |
| (4) Sensitivity or post-optimal analysis is not required for the obtained solutions | Sensitivity or post-optimal analysis can be performed separately |
| (5) Less computational complexity in terms of mathematical calculations | Comparatively more computational complexity in terms of mathematical calculations |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

