1. Introduction
A chemical equation is a symbolic representation of a chemical reaction, expressed in terms of atoms, elements, compounds or ions. Such expressions are balanced by determining the reactant and product coefficients, possibly together with the reactant and product molar masses [1]. In fact, balancing the equations that represent the stoichiometry of a reacting system is essentially a mathematical problem, since it can be reduced to the problem of solving homogeneous linear systems.
Balancing chemical equations is an important application of generalized inverses. In particular, the reflexive g-inverse of a matrix has been successfully used in solving the general problem of balancing chemical equations (see [2,3]). Continuing in the same direction, Krishnamurthy [4] gave a mathematical method for balancing chemical equations founded on a generalized matrix inverse. The method used in [4] is based on the exact computation of reflexive generalized inverses by means of elementary matrix transformations and finite-field residue arithmetic, as described in [5].
It is well known that symbolic data processing, including both rational arithmetic and multiple-modulus residue arithmetic, is time-consuming, both in implementation and in execution. On the other hand, finite-field exact arithmetic is inapplicable to chemical reactions that include atoms with fractional and/or integer oxidation numbers. This hard class of chemical reactions is investigated in [6].
Additionally, the problem of balancing chemical equations can be readily resolved by computer algebra software, as discussed in [7]. The approach used in [7] is based on Gaussian elimination (the Gauss–Jordan algorithm), while the approach exploited in [2] is based on the singular value decomposition (SVD). On the other hand, it is widely known that both Gaussian elimination and the SVD require a large number of numerical operations [8]. Furthermore, small pivots appearing in Gaussian elimination can lead to large multipliers [9], which can sometimes cause the divergence of numerical algorithms. Two methods of balancing chemical equations, introduced in [10], are based on integer linear programming and integer nonlinear programming models, respectively. Notice that the linear Diophantine matrix method was proposed in [11]. That method is applicable in cases when the reaction matrices lead to infinitely many stoichiometrically-independent solutions.
In the present paper, we consider the possibility of applying a new higher-order iterative method for computing the Moore–Penrose inverse to the problem of balancing chemical equations. In general, our current research represents the first attempt to apply iterative methods in balancing chemical reactions.
A rapid numerical algorithm for computing generalized matrix inverses with a prescribed range and null space is developed in order to implement this global idea. The method is based on an appropriate modification of the hyperpower iterative method. Furthermore, some techniques for the acceleration of the method in the initial phase of its convergence are discussed. We also show the applicability of the proposed iterative schemes in balancing chemical equations. Before a more detailed discussion, we briefly review some of the important background incorporated in our work.
The outer inverse with prescribed range $T$ and null space $S$ of a matrix $A\in\mathbb{C}_r^{m\times n}$, denoted by $A_{T,S}^{(2)}$, satisfies the second Penrose matrix equation $XAX=X$ and two additional properties: $\mathcal{R}(X)=T$ and $\mathcal{N}(X)=S$. The significance of these inverses is reflected primarily in the fact that the most important generalized inverses are particular cases of outer inverses with a prescribed range and null space. For example, the Moore–Penrose inverse $A^{\dagger}$, the weighted Moore–Penrose inverse $A_{M,N}^{\dagger}$, the Drazin inverse $A^{D}$ and the group inverse $A^{\#}$ can be derived by means of appropriate choices of the subspaces $T$ and $S$ in what follows:
wherein $A^{\sharp}=N^{-1}A^{*}M$ denotes the weighted conjugate transpose of $A$ and $\mathrm{ind}(A)$ denotes the index of a square matrix $A$ (see, e.g., [12]).
Although there are many approaches to calculating these inverses by means of direct methods, an alternative and very important approach is to use iterative methods. Among many such matrix iterative methods, the hyperpower iterative family has been introduced and investigated (see, for example, [13,14,15]). The hyperpower iteration of order $p$ is defined by the following scheme (see, for example, [13]):

$$X_{k+1}=X_{k}\left(I+R_{k}+R_{k}^{2}+\cdots+R_{k}^{p-1}\right),\qquad R_{k}=I-AX_{k}. \qquad (2)$$
The iteration Equation (2) requires $p$ matrix-matrix multiplications (from now on denoted by mmm) to achieve the $p$th order of convergence. The choice $p=2$ yields the Schulz matrix iteration, originated in [16],

$$X_{k+1}=X_{k}\left(2I-AX_{k}\right), \qquad (3)$$
with second-order convergence. Further, the choice $p=3$ gives the cubically-convergent method of Chebyshev [17], defined as follows:

$$X_{k+1}=X_{k}\left(3I-AX_{k}\left(3I-AX_{k}\right)\right). \qquad (4)$$
For more details about the background of iterative methods for computing generalized inverses, please refer to [
18].
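To make these two classical members of the family concrete, the Schulz (second-order) and Chebyshev (third-order) steps can be sketched in pure Python for a small square matrix. The helper functions, the test matrix and the starting guess $X_0=A^{T}/\|A\|_F^2$ are illustrative assumptions, not taken from the paper.

```python
def matmul(A, B):
    # Dense matrix product for small list-of-lists matrices.
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def lin_comb(alpha, A, beta, B):
    # Entrywise alpha*A + beta*B.
    return [[alpha * a + beta * b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def identity(n):
    return [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]

def schulz_step(X, A):
    # Equation (3): X_{k+1} = X_k (2I - A X_k), second order, 2 mmm per step.
    n = len(A)
    return matmul(X, lin_comb(2.0, identity(n), -1.0, matmul(A, X)))

def chebyshev_step(X, A):
    # Equation (4): X_{k+1} = X_k (3I - A X_k (3I - A X_k)), third order, 3 mmm.
    n = len(A)
    AX = matmul(A, X)
    inner = lin_comb(3.0, identity(n), -1.0, AX)
    return matmul(X, lin_comb(3.0, identity(n), -1.0, matmul(AX, inner)))

# Small invertible test matrix; X0 = A^T / ||A||_F^2 guarantees convergence here.
A = [[4.0, 1.0], [2.0, 3.0]]
norm_sq = sum(x * x for row in A for x in row)  # ||A||_F^2 = 30
X = [[row[i] / norm_sq for row in A] for i in range(2)]  # transpose of A, scaled
for _ in range(15):
    X = schulz_step(X, A)
# After the loop, A.X is numerically the identity matrix.
```

Both steps use only matrix products and additions, which is exactly why the mmm count is the natural cost measure for this family.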
The main motivation of the paper [19] was the observation that the inverse of the reaction matrix cannot always be obtained. For this purpose, the author used an approach based on row-reduced echelon forms of both the reaction matrix and its transpose. Since the Moore–Penrose inverse always exists, replacing the ordinary inverse by the corresponding pseudoinverse resolves the drawback that the inverse of the reaction matrix does not always exist. Furthermore, two successive transformations into the corresponding row-reduced echelon forms are time-consuming and badly-conditioned numerical processes, again based on Gaussian elimination. Our intention is to avoid the above-mentioned drawbacks that appear in previously-used approaches.
Here, we develop an application of the Schulz-type methods. The motivation is based on the following advantages arising from the use of these methods. Firstly, they are directly applicable to sparse matrices possessing sparse inverses. Secondly, Schulz-type methods are useful for providing approximate inverse preconditioners. Thirdly, such schemes are parallelizable, while Gaussian elimination with partial pivoting is not well suited to parallelism.
It is worth mentioning that an application of iterative methods in finding exact solutions that involve integers or rational entries requires an additional transformation of the solution and utilization of tools for symbolic data processing. To this end, we used the programming package Mathematica.
The rest of this paper is organized as follows. A new formulation of a very high order method is presented in Section 2. The method is fast and economical at the same time, which is confirmed by the fact that it attains a very high rate of convergence using a relatively small number of mmm. Acceleration of the convergence via scaling of the initial iterates is discussed, and some novel approaches in this direction are given in the same section. An application of iterative methods in balancing chemical equations is considered in Section 3. A comparison of the numerical results obtained by the introduced method against those of several similar methods is presented. Some numerical experiments concerning the application of the new iterations in balancing chemical equations are presented in Section 4. Finally, some concluding remarks are drawn in the last section.
2. An Efficient Method and Its Acceleration
A Schulz-type method of high order $p=31$ with two improvements is derived and chosen as one of the options for balancing chemical equations. The first improvement is based on a proper factorization, which reduces the number of mmm required in each cycle. The second improvement is based on a proper acceleration of the initial iterations.
Toward this goal, we consider Equation (
2) in the case
$p=31$ as follows:
In its original form, the hyperpower iteration Equation (5) is of order 31 and requires 31 mmm. It is necessary to remark that the effectiveness of a computational iterative (fixed-point-type) method can be estimated by the real number (called the computational efficiency index) $EI=p^{\frac{1}{\theta}}$, wherein $\theta$ and $p$ stand for the whole computational cost and the rate of convergence per cycle, respectively. Here, the most important burden and cost per cycle is the number of matrix-matrix products.
Clearly, in Equation (5), the proportion between the order of convergence and the required number of mmm is not suitable, since its efficiency index,

$$EI=31^{\frac{1}{31}}\approx 1.1171, \qquad (6)$$

is relatively small. This shows that Equation (5) is not a useful iterative method in this form. To improve the applicability of Equation (5), i.e., to derive a fast matrix iteration with a reduced number of mmm and thus an efficient method, we proceed as in the following subsection.
2.1. An Efficient Method
We rewrite Equation (
5) as:
Subsequently, Equation (
7) results in the following formulation of Equation (
5):
Now, we can deduce our final fast matrix iteration by simplifying Equation (8) further:
where only nine mmm are required. Therefore, the efficiency index of the proposed fast iterative method becomes:

$$EI=31^{\frac{1}{9}}\approx 1.4645. \qquad (10)$$

The efficiency index $1.4645$ of Equation (9) is higher than the efficiency index $1.4142$ of Equation (3), higher than the efficiency index $1.4422$ of Equation (4), and finally higher than the efficiency index $1.4592$ of the 30th-order method proposed recently by Sharifi et al. [20].
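These efficiency indices are easy to reproduce from the definition $EI=p^{1/\theta}$. In the snippet below, the cost $\theta=9$ mmm attributed to the 30th-order method of Sharifi et al. is inferred from the quoted index $1.4592=30^{1/9}$ and should be treated as an assumption.

```python
def efficiency_index(p, theta):
    # EI = p**(1/theta): p = order of convergence, theta = mmm per cycle.
    return p ** (1.0 / theta)

methods = [
    ("Schulz, Eq. (3)",     2,  2),   # order 2, 2 mmm
    ("Chebyshev, Eq. (4)",  3,  3),   # order 3, 3 mmm
    ("Sharifi et al. [20]", 30, 9),   # order 30, 9 mmm (assumed cost)
    ("Proposed, Eq. (9)",   31, 9),   # order 31, 9 mmm
]
for name, p, theta in methods:
    print(name, round(efficiency_index(p, theta), 4))
```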
At this point, it would be useful to provide the following theorem regarding the convergence behavior of Equation (
9).
Theorem 1. Let $A\in\mathbb{C}_r^{m\times n}$ be a given matrix of rank $r$ and $G\in\mathbb{C}_s^{n\times m}$ be a given matrix of rank $0<s\le r$ such that $\mathrm{rank}(GA)=\mathrm{rank}(G)$. Then, the sequence $\{X_k\}_{k=0}^{\infty}$ generated by the iterative method Equation (9) converges to $A_{R(G),N(G)}^{(2)}$ with 31st order if the initial approximation $X_0=\alpha G$ satisfies Equation (11).

Proof. The proof of this theorem is similar to the ones in [21]. Hence, we skip it and include only the following error bound:

wherein $E_{k}=A_{R(G),N(G)}^{(2)}-X_{k}$. ☐
The derived iterative method is very fast and effective, in contrast to the existing iterative Schulz-type methods of the same type. However, as was pointed out by Soleimani et al. [22], such iterative methods are slow at the beginning of the iterative process, and the real convergence rate cannot be observed there. An idea to remedy this disadvantage is to apply a multiple root-finding algorithm to the matrix equation $F(X)=X^{-1}-A=0$ and thus to accelerate the hyperpower method in its initial iterations. Such a discussion of a scaled version of the hyperpower method is the main aim of the next subsection.
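The root-finding viewpoint can be made explicit: applying Newton's method to $F(X)=X^{-1}-A$ recovers exactly the Schulz iteration of Equation (3), which explains why these schemes are natural targets for root-finding-type accelerations.

```latex
% Newton's method for F(X) = X^{-1} - A = 0.
% The Frechet derivative of X -> X^{-1} is F'(X)[H] = -X^{-1} H X^{-1},
% so the Newton correction H_k solves F(X_k) + F'(X_k)[H_k] = 0:
X_k^{-1} - A - X_k^{-1} H_k X_k^{-1} = 0
\quad\Longrightarrow\quad
H_k = X_k - X_k A X_k .
% Hence
X_{k+1} = X_k + H_k = 2X_k - X_k A X_k = X_k\,(2I - A X_k),
% which is the Schulz iteration.
```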
2.2. Accelerating the Initial Phase
Another useful contribution of the present paper is developed here. The iterative scheme:
was applied in [22] to reach the convergence phase of the main (modified Householder) method much more rapidly, i.e., to accelerate the beginning of the process. In the second iteration phase, it is sufficient to apply the introduced fast and efficient modified Householder method, which then reaches its own full speed of convergence [22].
In the same vein, the iterative expression Equation (13) can be rewritten in the following equivalent, but more practical form:

One can now observe that Equation (14) is the particular case ($p=2$) of the following new scheme:

Therefore, Equation (15) can be considered as an important acceleration of the Schulz-type method Equation (2) in the initial phase, before the theoretical convergence rate becomes practically observable. We note that such accelerations are useful for large matrices, for which the iterative methods require too many iterations to converge. In particular, following Equation (15), one can immediately notice that the acceleration in the initial phase of iteration Equation (9) is of the form:
Remark 1. The particular choice $\beta=1$ in Equation (15) reduces these iterations to the usual hyperpower family of iterative methods possessing the order $p\ge 2$. The choice $p=2$ in Equation (15) leads to the scaled Schulz matrix iteration considered recently in [23], and the choice $p=2$, $\beta=1$ produces the original Schulz matrix iteration, originated in [16].

Finally, a hybrid algorithm may now be written by incorporating Equations (9) and (16) as follows.
Algorithm 1 The new hybrid method for computing generalized inverses.
1: The input is a given $X_0\in\mathbb{C}^{n\times m}$.
2: Use Equation (16) until $\|X_{l+1}-X_l\|<\delta$ (for an inner loop counter $l$, or until $\epsilon<\delta$).
3: Set $X_0=X_l$.
4: For $k=0,1,\dots$ until convergence ($\|X_{k+1}-X_k\|<\epsilon$), use Equation (9) to converge with high order.
5: End for.
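Since Equations (16) and (9) themselves are not reproduced in this excerpt, the two-phase structure of Algorithm 1 can only be sketched with stand-in update rules: a cheap Schulz step for the accelerated initial phase and a Chebyshev step for the high-order phase. Everything below (the stand-in updates, the tolerances and the test matrix) is an illustrative assumption.

```python
def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def step(X, A, order):
    # Stand-in updates: order 2 = Schulz, order 3 = Chebyshev.
    n = len(A)
    I = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    AX = matmul(A, X)
    if order == 2:
        T = [[2.0 * I[i][j] - AX[i][j] for j in range(n)] for i in range(n)]
    else:
        inner = [[3.0 * I[i][j] - AX[i][j] for j in range(n)] for i in range(n)]
        AXi = matmul(AX, inner)
        T = [[3.0 * I[i][j] - AXi[i][j] for j in range(n)] for i in range(n)]
    return matmul(X, T)

def diff(X, Y):
    return max(abs(x - y) for rx, ry in zip(X, Y) for x, y in zip(rx, ry))

def hybrid(A, X, delta=1e-2, eps=1e-12, max_iter=200):
    # Phase 1: low-order steps (stand-in for Equation (16)) until the
    # successive difference drops below the coarse tolerance delta.
    for _ in range(max_iter):
        Xn = step(X, A, 2)
        done = diff(Xn, X) < delta
        X = Xn
        if done:
            break
    # Phase 2: high-order steps (stand-in for Equation (9)) until eps.
    for _ in range(max_iter):
        Xn = step(X, A, 3)
        if diff(Xn, X) < eps:
            return Xn
        X = Xn
    return X

A = [[4.0, 1.0], [2.0, 3.0]]
X0 = [[4.0 / 30.0, 2.0 / 30.0], [1.0 / 30.0, 3.0 / 30.0]]  # A^T / ||A||_F^2
X = hybrid(A, X0)
```

The design point is that the cheap phase only has to bring the iterate into the region of fast convergence; the expensive high-order steps are then spent where they pay off.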
Instead of the hybrid Algorithm 1, based on the usage of Equation (16) in the initial phase and Equation (9) in the final stage, our third result is to define a single iterative method, derived by applying a variable acceleration parameter $\beta=1+\beta_k$, $0\le\beta_k\le 1$. This approach yields scaled hyperpower iterations of the general form:

where the initial approximation $X_0=\alpha G$ is chosen according to Equation (11).

Furthermore, it is possible to propose various choices of $\beta_k$ in a manner that guarantees $1+\beta_k\to 1$. For example:
3. Balancing Chemical Equations Using Iterations
In accordance with the motivation given in the first section, in this section we investigate the applicability of some iterations from the hyperpower family in balancing chemical equations. It is shown that these iterative methods can be applied successfully without limitations.
It is assumed that a chemical system is modeled by a single reaction of the general form (see, for example, [2]):

In Equation (20), $x_j$, $j=1,\dots,r$ (resp. $x_j$, $j=r+1,\dots,r+s$) are unknown rational coefficients of the reactants (resp. the products), $\Psi^{i},\Omega^{i}$, $i=1,\dots,m$ are chemical elements in the reactants and products, respectively, and $a_{ij}$, $i=1,\dots,m$, $j=1,\dots,r$ and $b_{ij}$, $i=1,\dots,m$, $j=r+1,\dots,r+s$ are the numbers of atoms $\Psi^{i}$ and $\Omega^{i}$, respectively, in the $j$th molecule.
3.1. Balancing Chemical Equations Using Iterative Methods
The coefficients $x_i$ are integers, rational or real numbers, which should be determined on the basis of three basic principles: (1) the law of conservation of mass; (2) the law of conservation of atoms; (3) the time-independence of Equation (20), an assumption usually valid for stable/non-sensitive reactions. Let there be $m$ distinct atoms involved in the chemical reaction Equation (20) and $n=r+s$ distinct reactants and products. It is necessary to form an $m\times n$ matrix $A$, called the reaction matrix, whose columns represent the reactants and products and whose rows represent the distinct atoms in the chemical reaction. More precisely, the $(i,j)$th element of $A$, denoted by $a_{i,j}$, represents the number of atoms of type $i$ in each compound/element (reactant or product). An arbitrary element $a_{i,j}$ is positive or negative according to whether it corresponds to a reactant or a product. Hence, the problem of balancing a chemical equation can be formulated as the homogeneous matrix equation:

$$Ax=0, \qquad (21)$$

with respect to the unknown vector $x\in\mathbb{R}^{n}$, where $A\in\mathbb{R}^{m\times n}$ denotes the reaction matrix and $0$ denotes the null column vector of order $m$. In this way, an arbitrary chemical reaction can be formulated as a matrix equation.
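As a minimal illustration of this construction (the reaction is a textbook example, not one from this paper), consider methane combustion, CH4 + 2 O2 → CO2 + 2 H2O. With rows for the atoms C, H, O and product columns taken with negative sign, the balanced coefficient vector lies in the null space of the reaction matrix:

```python
# Columns: CH4, O2, CO2, H2O (reactants positive, products negative).
A = [
    [1, 0, -1,  0],   # carbon
    [4, 0,  0, -2],   # hydrogen
    [0, 2, -2, -1],   # oxygen
]
x = [1, 2, 1, 2]  # balanced coefficients

# A balanced equation means A x = 0 (conservation of every atom).
residual = [sum(a_ij * x_j for a_ij, x_j in zip(row, x)) for row in A]
print(residual)  # [0, 0, 0]
```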
We would like to use the symbolic and numerical possibilities of the Mathematica computer algebra system in conjunction with the abovedefined iterative method(s) for computing generalized inverses to automatize the chemical reactions balancing process.
The general solution of the balancing problem in the matrix form Equation (21) is given by:

$$x=\left(I-A^{\dagger}A\right)c, \qquad (22)$$

where $c$ is an arbitrarily-selected $n$-dimensional vector. Let us assume that the approximation of $A^{\dagger}$ generated by an arbitrary iterative method is given by $X:=X_{k+1}$.
If the iterative method for computing $X$ is performed in floating point arithmetic, it is necessary to perform a transition from the solution whose coordinates are real numbers to an exact (integer and/or rational) solution. Thus, the iterative approach to balancing chemical equations assumes three general algorithmic steps, as described in Algorithm 2.
Algorithm 2 General algorithm for balancing chemical equations by an iterative solver.
1: Apply (for example) Algorithm 1 and compute the approximation $X:=X_{k+1}$ of $A^{\dagger}$.
2: Compute the vector $s$ using Equation (22).
3: Transform the real numbers included in $s$ into an exact solution.
A clear observation about Algorithm 2 is the following:
- Steps 1 and 2 require the usage of real arithmetic (with very high precision);
- Step 3 requires the usage of symbolic processing and exact arithmetic capabilities to deal with rational numbers.
As a result, in order to apply iterative methods to the problem of balancing chemical equations, it is necessary to use software that meets two diametrically-opposite criteria: the ability to carry out numerical calculations (with very high precision) and the ability to apply exact arithmetic and symbolic calculations. The programming language Mathematica possesses both of these properties. More details about this programming language can be found in [24].
The following (sample) Mathematica code can be used to determine the exact solution using the real values contained in the vector $s$ (defined in Equation (22)).

Id = IdentityMatrix[n]; s = (Id - X.A).ConstantArray[1, n];
s = Rationalize[s, 10^(-300)]; c = s*LCM @@ Denominator /@ s;
(* Multiply s by the Least Common Multiple of denominators in s *)
s = c/Min @@ Numerator /@ c
(* Divide c by the Minimum of numerators in c *)
The standard Mathematica function Rationalize[x, dx] yields the rational number with the smallest denominator that lies within the given tolerance dx of x. Sometimes, to avoid the influence of round-off errors and possible errors caused by the usage of the function Rationalize, it is necessary to perform the iterative steps with very high precision.
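The same post-processing can be sketched with Python's fractions module: limit_denominator plays the role of Rationalize, denominators are cleared with their least common multiple, and, mirroring the Mathematica snippet, the result is divided by its smallest entry (dividing by the gcd of all entries would be the fully general choice). The sample input approximates the vector {2/11, 16/11, 10/11, 2/11, 8/11, 10/11} that appears in Example 4.

```python
from fractions import Fraction
from math import gcd

def to_integer_coefficients(s_float, max_den=10**6):
    # Step 1: rationalize each float (analogue of Rationalize[s, dx]).
    s = [Fraction(v).limit_denominator(max_den) for v in s_float]
    # Step 2: clear denominators by multiplying with their least common multiple.
    L = 1
    for f in s:
        L = L * f.denominator // gcd(L, f.denominator)
    c = [int(f * L) for f in s]
    # Step 3: divide by the smallest entry, as in the Mathematica code
    # (using the gcd of all entries instead would avoid non-integer quotients).
    m = min(c)
    return [v // m for v in c]

coeffs = to_integer_coefficients([2 / 11, 16 / 11, 10 / 11, 2 / 11, 8 / 11, 10 / 11])
print(coeffs)  # [1, 8, 5, 1, 4, 5]
```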
An improvement of the vector $s$ can be attained as follows. It is possible to propose an amplification of the vector $s=\left(I-A^{\dagger}A\right)c$, where $c$ is an $n$-dimensional column vector. The improvement can be obtained using $s=\left(I-A^{\dagger}A\right)\left(\left(I-A^{\dagger}A\right)c\right)$. In the practical implementation, it is necessary to replace the expression s = (Id - X.A).ConstantArray[1, n] by s = (Id - X.A).((Id - X.A).ConstantArray[1, n]). This replacement can be explained by the fact that $A\left(I-A^{\dagger}A\right)s$ is closer to the zero vector than $As$.
3.2. Balancing Chemical Equations in Symbolic Form
As was explained in [6], balancing chemical reactions that possess atoms with fractional oxidation numbers and non-unique coefficients is an extremely hard problem in chemistry. The case when the system Equation (21) is not uniquely determined can be resolved using the Mathematica function Reduce. If a chemical reaction includes $n$ reaction molecules and $m$ reaction elements, then the reaction matrix $A$ is of the order $m\times n$. In the case $n>m$, the reaction has $\binom{n}{m}$ general solutions. All of them can be found by applying the following expression:

where the zero vector on the right-hand side is of length $m$ and $\{xk1, xk2, ..., xkm\}$ is the list of dependent variables.
4. Experimental Results
Let us denote the iteration Equation (3) by NM (Newton's Method), Equation (4) by CM (Chebyshev's Method) and Equation (9) by PM (Proposed Method). Here, we apply the different methods in the Mathematica 10 environment to compute some generalized inverses and to show the superiority of our scheme(s). We also denote the hybrid algorithm given in [22] by HAL (Householder Algorithm), while our Algorithm 1 is denoted by APM (Accelerated Proposed Method). Throughout the paper, the computer characteristics are Microsoft Windows XP, Intel(R) Pentium(R) 4 CPU, 3.20 GHz, with 4 GB of RAM, unless stated otherwise (as at the end of Example 1).
4.1. Numerical Experiments on Randomly-Generated Matrices
Example 1. [22] In this numerical experiment, we compute the Moore–Penrose inverse of a dense, randomly-generated $m\times n=800\times 810$ matrix, which is defined as follows: The numerical results corresponding to the number of iterations and the CPU time are illustrated in Table 1, wherein IT denotes the number of iterative steps. It shows that APM with $m=2$, five inner loops and $p=2$ is superior to the other existing methods. We employed HAL with $m=2$ and eight inner loops. Note that the initial matrix is chosen as $X_0=\frac{2}{\|A\|_F^2}A^{*}$, while the stopping criterion is defined by $\frac{\|X_{k+1}-X_k\|_\infty}{\|X_{k+1}\|_\infty}<10^{-300}$. Here, $\|\cdot\|_F$ stands for the Frobenius norm (Hilbert–Schmidt norm), which for an $m\times n$ matrix $A$ is defined as:

$$\|A\|_F=\sqrt{\operatorname{trace}\left(A^{*}A\right)}=\sqrt{\sum_{i=1}^{\min\{m,n\}}\sigma_i^{2}},$$

where $A^{*}$ denotes the conjugate transpose of $A$ and $\sigma_i$ are the singular values of $A$.
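The equality of the entrywise and trace forms of the Frobenius norm is easy to verify numerically; the small matrix below is an arbitrary example.

```python
from math import sqrt

def frobenius_entrywise(A):
    # ||A||_F = sqrt(sum of squared entries).
    return sqrt(sum(x * x for row in A for x in row))

def frobenius_trace(A):
    # ||A||_F = sqrt(trace(A^T A)) for a real matrix (A^* reduces to A^T).
    diag = [sum(a * a for a in col) for col in zip(*A)]  # diagonal of A^T A
    return sqrt(sum(diag))

A = [[1.0, 2.0, 0.0],
     [0.0, 3.0, 4.0]]
print(frobenius_entrywise(A), frobenius_trace(A))  # both equal sqrt(30)
```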
Table 1. The results corresponding to the Moore–Penrose inverse of a randomly-generated matrix. IT, number of iterative steps.
Methods   NM           CM           HAL          APM
IT        28           18           14           9
Time      15.7656250   15.0000000   14.0156250   13.4687500
It is important to emphasize that the computational time directly depends on the computer and software specifications. To clarify this, we execute the implemented algorithms/methods from Example 1 on a more recent computer, whose characteristics are Windows 7 Ultimate, Intel(R) Core(TM) i5-4400 CPU, 3.10 GHz, with 8 GB of RAM and a 64-bit operating system. The results corresponding to this hardware/software configuration are given in Table 2 in terms of the elapsed CPU time. Furthermore, we reran Example 1 for an $m\times n=1010\times 1000$ matrix, which was randomly generated by the code:

to show that our schemes can also be applied to matrices satisfying $m\ge n$. The results generated by these values are arranged in Table 3, where $m=3$ inner loops are considered for APM.
Table 2. The results corresponding to the Moore–Penrose inverse of randomly-generated matrices with a better-equipped computer.
Methods   NM         CM         HAL        APM
Time      1.620093   1.570090   1.363078   1.279073
Table 3. The results corresponding to the Moore–Penrose inverse of the randomly-generated matrix $A_{1010\times 1000}$.
Methods   NM         CM         HAL        APM
IT        28         18         14         8
Time      2.714405   2.605205   2.371204   2.293204
This numerical example illustrates the theoretical results presented in Section 2. It can be observed from the results included in Table 1, Table 2 and Table 3 that, like the existing methods, the presented method shows stable behavior along with fast convergence. Additionally, according to the results contained in Table 1, Table 2 and Table 3, it is clear that the number of iterations required by the APM method during numerical approximation of the Moore–Penrose inverse is smaller than the number required by the classical methods. This observation is in accordance with the fact that the efficiency index is the largest in the case of the APM method. In general, APM is superior among all of the well-known hyperpower iterative schemes, which agrees with the efficiency analysis discussed before.

In fact, it can be observed that increasing the efficiency index by a proper factorization of the hyperpower method is a strategy that gives promising results in terms of both the number of iterations and the computational time on different computers.
Here, it is also worth noting that Schulz-type solvers are the best choice for sparse matrices possessing sparse inverses: in such cases, the usual SVD technique in software such as Mathematica or MATLAB ruins the sparsity pattern and requires much more time. Hence, the iterative methods and the SVD-type (direct) schemes are both competitive, each with its own field of application.
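A minimal illustration of this sparsity claim, under the simplifying assumption of a diagonal test matrix (whose inverse is again diagonal): the Schulz iterates introduce no fill-in, so every intermediate approximation keeps the sparsity pattern of the final inverse. A banded matrix with a dense inverse would, of course, not enjoy this.

```python
def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

n = 5
A = [[float(i + 1) if i == j else 0.0 for j in range(n)] for i in range(n)]  # diag(1..5)
norm_sq = sum(x * x for row in A for x in row)  # ||A||_F^2 = 55
X = [[A[j][i] / norm_sq for j in range(n)] for i in range(n)]  # X0 = A^T / ||A||_F^2

for _ in range(12):
    AX = matmul(A, X)
    two_I_minus_AX = [[(2.0 if i == j else 0.0) - AX[i][j] for j in range(n)]
                      for i in range(n)]
    X = matmul(X, two_I_minus_AX)  # Schulz step: never creates fill-in here

off_diagonal = [X[i][j] for i in range(n) for j in range(n) if i != j]
diagonal = [X[i][i] for i in range(n)]
print(all(v == 0.0 for v in off_diagonal))  # True: sparsity pattern preserved
```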
4.2. Numerical Experiments in Balancing Chemical Equations
In this subsection, we present some clear examples indicating the applicability of our approach to balancing chemical equations. We also apply the following initial matrix: $X_0=\frac{1}{\sigma_1^2}A^{*}$.
Example 2. Consider a specific skeletal chemical equation from [10]:

where the left-hand side of the arrow consists of compounds/elements called reactants, while the right-hand side comprises compounds/elements called products. Hence, Equation (24) is formulated as the homogeneous equation $Ax=0$, wherein $0$ denotes the null column vector and:

The comparison of the numerical results derived in Example 2 is given in Table 4, using 300 precision digits, which is large enough to minimize round-off errors, as well as to clearly observe the computed asymptotic error constants in the convergence phase. Although in all practical problems the machine precision (double precision) is enough (just as in Example 1), here our focus is to find very accurate coefficients for the chemical equation, since a very strict tolerance, such as $\|X_k-A^{\dagger}\|_\infty\le 10^{-150}$, must be employed. The final exact coefficients are $(x_1,x_2,x_3,x_4,x_5)^{\mathrm{T}}=(2,4,1,3,1)^{\mathrm{T}}$. Thus,

Experimental results clearly show that PM is the most efficient method for this purpose. In addition, we remark that since we use iterative methods in floating point arithmetic to obtain the coefficients, we must use the command Round[] in the last lines of our Mathematica code, so as to obtain the coefficients in exact arithmetic.
Table 4. The results corresponding to balancing Equation (24).
Methods                     NM                          CM                          PM
IT                          14                          9                           3
$\|X_{k+1}-A^{\dagger}\|$   $2.59961\times 10^{-161}$   $1.1697\times 10^{-193}$    $9.09312\times 10^{-293}$
In order to support the improvement described in Section 3, it is worth mentioning that (using Mathematica notation):

and A.(Id - X.A).((Id - X.A).ConstantArray[1, n]) = $\{0.\times 10^{-295},\,0.\times 10^{-295},\,0.\times 10^{-295},\,0.\times 10^{-295}\}$.

Example 3. Now, we solve the following skeletal chemical equation from [10]:

Equation (27) is formulated as a homogeneous system of linear equations with the following coefficient matrix: The results of this experiment, generated using ordinary double precision arithmetic and the stopping criterion $\|X_k-A^{\dagger}\|_\infty\le 10^{-10}$, are illustrated in Figure 1. Note that the final coefficients obtained in exact arithmetic are equal to $(x_1,\dots,x_{20})^{\mathrm{T}}=(2,3,3,6,6,6,10,12,15,20,88,2,3,3,6,6,6,10,15,79)^{\mathrm{T}}$.
The results once again show that PM is the best iterative process.
Figure 1.
Convergence history for different methods used in Example 3.
Example 4. Consider the following example from [19]: The reaction matrix $A$ can be derived by taking into account both the law of conservation of atoms and the law of electrical neutrality, and it is equal to (see [19]): As in the previous examples, let us denote by $X$ the result derived by the iterative method Equation (9). The rational approximation of s = (Id - X.A).((Id - X.A).ConstantArray[1, n]) is equal to $\left\{\frac{2}{11},\frac{16}{11},\frac{10}{11},\frac{2}{11},\frac{8}{11},\frac{10}{11}\right\}$, and the exact solution coincides with the result given in [19]: $\{1,8,5,1,4,5\}$.

Example 5. In this example, it is shown that our iterative method is capable of producing the solution in the case when real coefficients are used and the reaction is not unique within relative proportions. Let us consider Reaction 1 with one arbitrary element from [6]: The reaction matrix of the homogeneous system Equation (21) is given by $A=\{\{1,0,0.987,2,3\},\{0,2,1,3,4\}\}$. The iterative method Equation (9) converges quickly, since the list of consecutive errors $\|X_k-A^{\dagger}\|_\infty$ is given by $\{0.163381,\,2.220446049250313\times 10^{-16},\,2.7755575615628914\times 10^{-16}\}$. The approximate solution s = (Id - X.A).((Id - X.A).ConstantArray[1, n]) is equal to:

and its fractional approximation is:

Example 6. All possible solutions of the problem considered in Example 5 with respect to $x1$ and $x2$ can be generated using the Mathematica function Reduce:
Reduce[A.{{x1}, {x2}, {x3}, {x4}, {x5}} == {0, 0}, {x1, x2}]

All solutions in the symbolic form are given as follows (using Mathematica notation):

wherein $x3, x4, x5$ are arbitrary real quantities. All possible solutions with respect to $x1$ and $x3$ can be generated using:

Reduce[A.{{x1}, {x2}, {x3}, {x4}, {x5}} == {0, 0}, {x1, x3}]

All possible $\binom{5}{2}=10$ cases can be solved in the same way.
Example 7. As the last experiment, and to show that the proposed iteration can preserve the sparsity pattern of the inverse when the inverse is sparse in nature, the $4000\times 4000$ matrix A = ExampleData["Matrix", "Bai/tols4000"] has been taken from the Matrix Market database, with the stopping criterion $\frac{\|X_{k+1}-X_k\|_\infty}{\|X_{k+1}\|_\infty}\le 10^{-10}$. The new scheme Equation (9) converges in twelve iterations. The matrix plots of the approximate inverses for this case are shown in Figure 2.
Figure 2.
The sparsity pattern for the approximate inverses: ${X}_{1}$ (top left); ${X}_{2}$ (top right); ${X}_{11}$ (bottom left); and ${X}_{12}$ (bottom right).