Article

Adjustment of Planned Surveying and Geodetic Networks Using Second-Order Nonlinear Programming Methods

Department of Engineering Geodesy, Saint-Petersburg Mining University, 199106 Saint-Petersburg, Russia
*
Author to whom correspondence should be addressed.
Submission received: 8 November 2021 / Revised: 29 November 2021 / Accepted: 1 December 2021 / Published: 3 December 2021
(This article belongs to the Section Computational Engineering)

Abstract

Due to the huge amount of redundant data, the problem arises of finding a single integral solution that satisfies numerous possible accuracy requirements. Mathematical processing of such measurements by traditional geodetic methods can take significant time and still not provide the required accuracy. This article discusses the application of nonlinear programming methods in the computational processing of geodetic data. Thanks to the development of computer technology, a modern surveyor can solve newly emerging production problems using nonlinear programming methods, carrying out preliminary computational experiments that allow the effectiveness of a particular method for a specific problem to be evaluated. The efficiency and performance of various nonlinear programming methods are compared in the course of the equalization of a trilateration network on a plane. An algorithm of a modified second-order Newton's method is proposed, based on the use of the matrix of second partial derivatives together with the Powell and the Davis–Sven–Kempy (DSK) methods in the computational process. The new method simplifies the computational process and allows the user not to calculate the preliminary values of the determined parameters with high accuracy, since it expands the region of convergence of the problem solution.

1. Introduction

Over the past thirty years, surveying and geodetic equipment has made a great leap forward. Such a rapid development of technology allowed surveyors to receive and process an enormous amount of data about objects. The use of devices, such as tacheometers, laser trackers and scanning laser systems, as well as satellites in surveying and geodetic practice, made it possible to increase the speed and accuracy of the data obtained. The use of modern surveying and geodetic methods in the construction of buildings is especially important; this is noted in works [1,2], as well as when determining deformations [3].
An important element in the solution of any surveying and geodetic problem is the office processing of measurement results (rejection of gross errors, equalization, assessment of the accuracy of the obtained solutions). Redundancy of measurements increases the accuracy and plausibility of the obtained solutions; however, as the amount of information grows, so does the complexity of data processing, as noted in articles [4,5]. There is a need to use high-performance computers in order to solve the problem in special software; this thesis is confirmed in the works of L.A. Goldobina [6], N.S. Kopylova [7], V.F. Kovyazin [8], A.M. Rybkina [9,10] et al. [11,12]. The development of computer technology makes it possible to automate the solution of many engineering problems, as well as to carry out computational experiments by modeling; in the works of authors such as P.A. Demenkov [13,14], N.V. Vasilyeva [15], E.V. Katuntsov [16] and A.A. Kochneva [17] et al. [18,19,20], special software products were used to solve engineering problems. At the same time, the redundancy of measurements allows the surveyor to choose the optimal solution, taking into account the limiting criteria. However, in a situation where the data array is huge and the power of the computer does not allow quick processing and obtaining of the result, the solution process can be optimized by various methods. In this regard, the topic of optimizing solutions for various industrial surveying and geodetic problems is very relevant; this idea finds its confirmation in articles [21,22,23].
In the mathematical aspect, optimization should be understood as a sequence of actions, the implementation of which contributes to obtaining a solution or clarifying an existing one. Optimization methods have been used for a long time in geodesy and surveying; the famous geodesist and mathematician K. Gauss is the author of many papers on this topic. There are many groups of optimization methods that can be applied in geodesy and surveying. It should be noted that the problems associated with solving nonlinear equations differ from linear problems in that there is no single, standard solution method. Depending on the restrictive conditions and the objective function type, a different set of solutions can be obtained, the best of which shall be chosen. Therefore, the study of the possibility of using various methods to solve problems of a certain group is the best way to choose an optimization method for solving a specific problem. The article discusses methods of nonlinear programming. For a number of reasons, these methods are best suited for their implementation in the surveying and geodetic computational process, namely:
(1)
nonlinear programming methods allow the nonlinear and linear conditions that limit the objective function to be taken into account;
(2)
these methods allow the solving of large systems of equations using algorithms that are most suitable for implementation on modern computers;
(3)
using some nonlinear programming methods (such as second-order Newton’s method) makes it possible to solve nonlinear equations without linearizing the original parametric equations;
(4)
using nonlinear programming methods, it is possible to obtain a solution not only using the objective function of the least squares method, which is a classical method in geodesy and surveying, but also in other ways in accordance with the selected criterion function.
The third point is especially important, since there are many problems in geodesy and surveying where the desired parameters are determined by solving nonlinear systems of equations, for example, calculating coordinate transformation parameters (transformation keys), equalizing surveying and geodetic networks and building terrain models. The fourth point makes it possible to experiment with and choose other optimization criteria, different from the least squares method.
From all of the above, it can be concluded that it is advisable not only to apply nonlinear programming methods in surveying and geodetic computations, but also to improve their algorithms for geodesy and surveying in the future. Among the methods of nonlinear programming, two main groups can be distinguished—these are methods based on the use of derivatives of various orders and methods that calculate the extremum point without using derivatives (direct search methods). In this work, the methods of the first group are considered in detail, since their use provides a number of advantages:
(1)
a large number of previously developed methods that have clearly formulated algorithms that are easy to implement with a computer;
(2)
the ability to use several methods at once at different stages of solving one problem, in order to obtain the best result.
It should be said that the methods of this group have serious downsides. The main one is the preliminary preparation of the problem for the solution: it is necessary to calculate derivatives of different orders at each iteration, and for this an algorithm is drawn up in advance, according to which the derivatives will be calculated for a specific objective function. This takes a particularly long time if the function is not specified analytically, since the derivatives then have to be calculated by a numerical method. It is also necessary to take into account that the objective function shall be continuous, otherwise the problem will have no solution. These downsides are reflected by G.G. Shevchenko in her works [24,25], where she proposes to use direct search methods (which do not use derivatives in the iterative process) when solving surveying and geodetic optimization problems.
The article analyzes the possibility of applying the second-order Newton’s method, when solving surveying and geodetic optimization problems; in particular, when equalizing the surveying and geodetic network of trilateration on a plane. Today, design and equalization of surveying and geodetic constructions using new methods is a very relevant topic; this is confirmed in works [24,25,26]. Newton’s method was chosen because it has the following upsides:
(1)
the method has a quadratic convergence rate of the iterative process, in contrast to first-order methods (gradient methods), which have a linear convergence rate;
(2)
for any quadratic objective function with a positive definite matrix of second partial derivatives (Hessian matrix), the method gives an exact solution in one iteration;
(3)
low sensitivity to the choice of preliminary values of the determined parameters, in comparison with gradient methods.
The second-order Newton’s method was used to equalize the planned trilateration network.

2. Materials and Methods

2.1. Mathematical Justification for Solving the Task

The second-order Newton’s method is included in the group of nonlinear programming methods of the second order [26,27]. More generally, the second-order Newton’s method is an iterative method that applies a quadratic approximation to the original nonlinear objective function at each iteration. To evaluate the convergence of the method, a necessary condition is the threefold differentiability of the studied function. The existence of the second derivative at the extremum point provides a high rate of convergence of the method, in comparison with the first-order methods [28,29]. The method was studied in detail in the work of N.N. Eliseeva [30] and in the works [31,32,33], and was also applied by the authors of the article in [34]. However, the possibility of using the method in surveying and geodetic practice, when solving production problems, has almost not been studied. There were a number of objective reasons for this, which will be discussed below.
To derive the main formula of the second-order Newton’s method, it is necessary to expand the original objective function in a Taylor series (1):
$$f(x) \approx f(x^*) + f'(x^*)(x - x^*) + \tfrac{1}{2} f''(x^*)(x - x^*)^2,$$
where $f'(x^*)$ is the first-order derivative of the function $f(x)$ at the point $x^*$, $x^*$ is the minimum point of the function, and $f''(x^*)$ is the matrix of second derivatives of the objective function $f(x)$ at the point $x^*$.
The second-order Newton's method is based on the quadratic approximation of the function; therefore, the first three terms of the Taylor series are taken into account to derive the iterative formula [27,35]. Having obtained an approximation $x_k$, it is possible to calculate the next approximation $x_{k+1}$ to the extremum point. Replacing $x^*$ in Formula (1) with $x_k$ and $x$ with $x_{k+1}$, and denoting $\Delta x_k = x_{k+1} - x_k$, one gets Formula (2):
$$f(x_{k+1}) \approx f(x_k) + f'(x_k)\,\Delta x_k + \tfrac{1}{2} f''(x_k)\,\Delta x_k^2.$$
To determine the extremum in the direction $\Delta x_k$, it is necessary to differentiate the function $f(x_{k+1})$ with respect to each of the components of $\Delta x_k$ and equate the resulting expression to zero (3):
$$f'(x_k) + f''(x_k)\,\Delta x_k = 0.$$
Expressing the variable $x_{k+1}$ from Formula (3), the main formula of the second-order Newton's method is obtained, according to which the iterative process (4) is constructed:
$$x_{k+1} = x_k - \frac{f'(x_k)}{f''(x_k)},$$
where $f'(x_k)$ is the first derivative of the function $f(x)$ at the point $x_k$ in approximation $k$, and $f''(x_k)$ is the second derivative of the function $f(x)$ at the point $x_k$ in approximation $k$.
Formula (4) describes an iterative process for a function of one variable. By writing expression (4) in matrix form (5), one can obtain an iterative formula of the method for the multidimensional case (functions of several variables):
$$X_{k+1} = X_k - H_k^{-1}\,\nabla f_k,$$
where $\nabla f_k$ is the column vector of first derivatives (the gradient) of the objective function in approximation $k$; $H_k$ is the matrix of second partial derivatives (the Hessian matrix) of the objective function, of dimension $n \times n$, in approximation $k$; $X_k$ is the column vector of the determined parameters in approximation $k$; and $X_{k+1}$ is the column vector of the determined parameters in approximation $k+1$ [33].
A distinctive feature of the classical second-order Newton’s method is that it is not necessary to determine the iteration step in the iterative process. The rate of convergence, as well as the direction of the search, depends on the Hessian matrix (6):
$$H(x_1, \ldots, x_n) = \begin{pmatrix} \frac{\partial^2 f}{\partial x_1 \partial x_1} & \frac{\partial^2 f}{\partial x_1 \partial x_2} & \cdots & \frac{\partial^2 f}{\partial x_1 \partial x_n} \\ \frac{\partial^2 f}{\partial x_2 \partial x_1} & \frac{\partial^2 f}{\partial x_2 \partial x_2} & \cdots & \frac{\partial^2 f}{\partial x_2 \partial x_n} \\ \vdots & \vdots & \ddots & \vdots \\ \frac{\partial^2 f}{\partial x_n \partial x_1} & \frac{\partial^2 f}{\partial x_n \partial x_2} & \cdots & \frac{\partial^2 f}{\partial x_n \partial x_n} \end{pmatrix}, \qquad f = f(x_1, \ldots, x_n).$$
The main downside of the second-order Newton’s method is the calculation of the Hessian matrix [27]; therefore, this method is little used in practice, since the calculation of the Hessian matrix at each iteration is a rather complicated computational process. However, the introduction of personal computers made it possible to automate the process of calculating the Hessian matrix. The calculation of partial derivatives can be implemented by a numerical method using one of the programming languages. Due to this, the problem of calculating partial derivatives for any objective function can be fully automated.
At each iteration, it is necessary to determine the sign of the Hessian matrix. The matrix of second partial derivatives at each iteration shall be positive definite, $H(f) > 0$; only if this condition is met will the search direction lead to a decrease in the objective function $f(x)$. At iterations where the Hessian matrix is negative definite, $H(f) < 0$, the search direction must be changed. In this work, it is proposed to use gradient methods to determine the direction of decrease of the objective function at those iterations where the matrix of second derivatives is negative definite [36].
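To make this scheme concrete, the following minimal sketch performs one iteration of Formula (5) with the positive-definiteness check and the gradient fallback described above. It is written in Python with NumPy, whereas the authors worked in MathCAD15; the names newton_step, grad, hess and the fallback step length gamma are illustrative assumptions, not part of the original software.

```python
import numpy as np

def newton_step(x, grad, hess, gamma=1.0):
    """One iteration of Formula (5): x_{k+1} = x_k - H_k^{-1} * grad f_k.

    grad(x) and hess(x) are user-supplied callables returning the gradient
    vector and the Hessian matrix of the objective function at x.
    If the Hessian is not positive definite, a steepest-descent step of
    length gamma is taken instead, as proposed in the text above.
    """
    g = grad(x)
    H = hess(x)
    try:
        # A successful Cholesky factorization proves H > 0
        # (the same conclusion as the Sylvester criterion mentioned later).
        np.linalg.cholesky(H)
        # Solve H * dx = g rather than inverting H explicitly.
        return x - np.linalg.solve(H, g)
    except np.linalg.LinAlgError:
        # Hessian not positive definite: move along the antigradient.
        return x - gamma * g / np.linalg.norm(g)
```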
The main advantages of the second-order Newton’s method:
(1)
if the objective function f(x) is quadratic and the preliminary values of the determined parameters are close to the true ones, one iteration is required to find its minimum;
(2)
the use of the second partial derivatives in the iterative process increases the convergence rate and also the accuracy of the results;
(3)
this method is less sensitive to the choice of the initial value of the parameter than the first-order methods.
If the objective function f ( x ) is not quadratic, then k iterations are required to reach the extremum point until the condition for stopping the iterative process is satisfied [37].
The second-order Newton's method has a number of disadvantages that must be taken into account when implementing it. When the first and second-order derivatives are calculated numerically (by finite differences), the accuracy and speed of the method decrease. This is due not only to approximate calculations, but also to the inaccurate approximation of the original objective function. This aspect is especially perceptible in the region around the minimum point, since the first-order derivatives become rather small quantities. When the objective function is not quadratic, the iterative process can loop. As mentioned above, it is necessary to check the positive definiteness of the Hessian matrix at each iteration, since this is the main condition for the convergence of the method; the sign of the Hessian matrix is checked by the Sylvester criterion. Further difficulties are the complexity of setting the initial parameters when the function is poorly defined (lack of initial data) and the need to calculate the second partial derivatives of the function being minimized. As stated above, the second-order Newton's method was not used in geodesy and surveying due to the complexity of its execution (at that time, complete automation of the computational process was impossible).
The convergence rate of Newton’s method in the vicinity of a strictly local minimum point is very high (quadratic). The method will not work if the Hessian matrix is degenerate (the determinant of the matrix is zero), and this method may also diverge [38].
The high rate of convergence of Newton's method can be explained by the fact that the quadratic trinomial (2), constructed using information about both the first and the second derivatives of the objective function, approximates a convex, twice differentiable nonlinear function with high accuracy in a sufficiently small neighborhood of the minimum point.
The process of finding the optimal solution using nonlinear programming methods has an iterative nature, which means that, with an increase in the number of iterations, the probability of arriving at the correct solution increases. An important element of the correct operation of all iterative methods is the criterion (rule) for stopping the computational process. It is this criterion that sets the accuracy (from the point of view of mathematics, not geodesy) of achieving a solution, as well as the effectiveness of the method and the amount of computation.
The following stopping criteria are most common in optimization theory:
  • By the absolute value of the difference between the subsequent and the previous values of the determined parameter (7):
    $$|x_{k+1} - x_k| \le \varepsilon.$$
  • By the absolute value of the difference between the values of the objective function at the current and the previous iteration (8):
    $$|f(x_{k+1}) - f(x_k)| \le \varepsilon.$$
  • By the absolute value of the derivative of the objective function at the current iteration (9):
    $$\left| \frac{\partial f(x)}{\partial x} \right| \le \varepsilon.$$
Using only one of the criteria can lead to a “false” decision; therefore, it is recommended to take several stopping criteria into account in the software algorithm. In all three criteria, the values are compared with a known number $\varepsilon$. The user sets this number based on practical experience in solving problems, or after calculating it using formulas.
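A combined check of criteria (7)–(9), as recommended above, can be sketched as follows; the function name and argument layout are illustrative assumptions of this sketch.

```python
import numpy as np

def should_stop(x_new, x_old, f_new, f_old, g_new, eps=1e-3):
    """True only when criteria (7), (8) and (9) are all satisfied:
    the parameter change, the change of the objective function value and
    the gradient norm are each below the user-chosen tolerance eps."""
    return (np.linalg.norm(np.asarray(x_new) - np.asarray(x_old)) < eps
            and abs(f_new - f_old) < eps
            and np.linalg.norm(g_new) < eps)
```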
In this work, the authors analyze the data obtained using methods of the first and second-order, which in the iterative process use derivatives of various orders. Therefore, the authors consider it necessary to note in the work the methods allowing the calculation of derivatives of various orders.
One way to calculate derivatives is numerical differentiation. It is used when a function is given in tabular form or when direct differentiation is difficult, for example, in the case of a complex analytical form of the function; the derivative is then approximated from function values. To calculate the first-order derivative, Formula (10) can be used:
$$\frac{\partial f(x_1, \ldots, x_n)}{\partial x_i} = \frac{f(x_1, \ldots, x_i + h, \ldots, x_n) - f(x_1, \ldots, x_i - h, \ldots, x_n)}{2h},$$
where $\partial f(x_1, \ldots, x_n)/\partial x_i$ is the first derivative of the objective function $f(x_1, \ldots, x_n)$ with respect to the parameter $x_i$, and $h$ is a small increment of the corresponding argument of the objective function.
The increment value $h$ affects the accuracy of the resulting derivative and the amount of computation. If a very small $h$ is selected, the round-off error of computer calculations can become comparable to, or greater than, $h$. The central difference scheme (10) reduces the error in calculating the derivative and, according to [33], is the best way to calculate the first-order derivative.
By analogy with the difference scheme for the first derivative, one can obtain a formula for calculating the second-order derivative of the objective function. It has the form (11):
$$\frac{\partial^2 f(x_1, \ldots, x_n)}{\partial x_i^2} = \frac{f(x_1, \ldots, x_i + h, \ldots, x_n) - 2 f(x_1, \ldots, x_n) + f(x_1, \ldots, x_i - h, \ldots, x_n)}{h^2},$$
where $\partial^2 f(x_1, \ldots, x_n)/\partial x_i^2$ is the second-order derivative of the objective function $f(x_1, \ldots, x_n)$ with respect to the parameter $x_i$.
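Formulas (10) and (11) translate directly into the numerical-differentiation routines sketched below (Python/NumPy). Note one assumption: the off-diagonal entries of the Hessian use the analogous mixed central-difference scheme, which the paper does not spell out.

```python
import numpy as np

def num_gradient(f, x, h=1e-5):
    """First derivatives by the central difference scheme, Formula (10)."""
    x = np.asarray(x, dtype=float)
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2.0 * h)
    return g

def num_hessian(f, x, h=1e-4):
    """Second derivatives by finite differences; the diagonal follows
    Formula (11), the off-diagonal terms use a mixed central difference."""
    x = np.asarray(x, dtype=float)
    n = x.size
    H = np.zeros((n, n))
    f0 = f(x)
    for i in range(n):
        ei = np.zeros(n)
        ei[i] = h
        H[i, i] = (f(x + ei) - 2.0 * f0 + f(x - ei)) / h**2
        for j in range(i + 1, n):
            ej = np.zeros(n)
            ej[j] = h
            H[i, j] = H[j, i] = (f(x + ei + ej) - f(x + ei - ej)
                                 - f(x - ei + ej) + f(x - ei - ej)) / (4.0 * h**2)
    return H
```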
In the article, the second-order Newton's method is compared with the conjugate gradient method (a first-order method). This method was chosen because it is the most common in geodetic practice and was used by such well-known surveyors as A.V. Zubov [39,40], V.A. Kougia [37,41], B.T. Mazurov [42], S.G. Shnitko [43] and other Russian [1,44,45] and foreign [46,47] specialists. Its main advantage is that the algorithm does not use second derivatives.
The conjugate gradient method is a continuation of the development of the steepest descent method; it combines two concepts: the gradient of the objective function and the conjugate directions of vectors. The main iterative formula of the method is written as Formula (12):
$$x_{k+1} = x_k - \lambda_k P_k,$$
where $P_k$ is the vector of the conjugate search direction and $\lambda_k$ is the length of the movement step at each iteration.
At the zero iteration, the vector $P_k$ is taken equal to the gradient of the objective function, $P_0 = \nabla f(x_1, \ldots, x_n)$. In subsequent iterations, the vector $P_k$ is calculated using Formula (13):
$$P_k = \nabla f(x_1, \ldots, x_n)_k + \beta_k P_{k-1},$$
where $\beta_k$ is the weighting factor that is used to determine the conjugate directions.
The weighting factor $\beta_k$ can be determined using the Fletcher–Reeves Formula (14):
$$\beta_k = \frac{\left| \nabla f(x_1, \ldots, x_n)_k \right|^2}{\left| \nabla f(x_1, \ldots, x_n)_{k-1} \right|^2}.$$
According to the presented formulas, the new conjugate direction is obtained by adding the antigradient at the turning point to the previous direction of movement multiplied by the coefficient $\beta_k$. Thus, the conjugate gradient method builds the search direction towards the optimal value using information about the search obtained at the previous stages of the descent.
It is worth noting that works [27,36] point out the benefit of restarting the algorithmic procedure every $n + 1$ steps ($n$ is the number of parameters to be determined). A restart of the computational procedure is necessary in order to “erase” the last search direction and start the algorithm anew in the direction of steepest descent.
As noted above, the value of step λ k affects the performance of the method. The step size in the conjugate gradient method is selected from the condition of the minimum objective function in the direction of motion, that is, as a result of solving the problem of one-dimensional optimization in the direction of the antigradient.
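The Fletcher–Reeves variant described by Formulas (12)–(14), including the restart every n + 1 steps and the one-dimensional step search, can be sketched as follows. The use of scipy.optimize.minimize_scalar for the line search is an assumption of this sketch; the paper does not name a particular one-dimensional routine.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def conjugate_gradient(f, grad, x0, eps=1e-3, max_iter=10000):
    """Fletcher-Reeves conjugate gradient method, Formulas (12)-(14),
    with a restart of the search direction every n + 1 iterations."""
    x = np.asarray(x0, dtype=float)
    n = x.size
    g = grad(x)
    P = g.copy()                                   # P_0 = grad f(x_0)
    for k in range(max_iter):
        # step length lambda_k from one-dimensional minimization along -P
        lam = minimize_scalar(lambda t: f(x - t * P)).x
        x_new = x - lam * P                        # Formula (12)
        g_new = grad(x_new)
        if np.linalg.norm(x_new - x) < eps:        # stopping criterion (7)
            return x_new
        beta = (g_new @ g_new) / (g @ g)           # Fletcher-Reeves, Formula (14)
        if (k + 1) % (n + 1) == 0:
            P = g_new                              # restart: "erase" the old direction
        else:
            P = g_new + beta * P                   # Formula (13)
        x, g = x_new, g_new
    return x
```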

2.2. Geodetic Data for Solving the Task

As mentioned above, the second-order Newton’s method was applied to equalize the trilateration network; the network configuration is shown in Figure 1.
The purpose of solving the problem is to calculate the plane coordinates of points 1–5. To determine the coordinates of the points, an objective function was compiled that minimizes the weighted sum of the squares of the differences between the calculated and measured distances, Formula (15):
$$f(z) = \sum_{i=1}^{n} \left[ p_i \left( S_{c_i} - S_{m_i} \right)^2 \right],$$
where $n$ is the number of measured distances between points, $p_i$ are the weights of the measured sides, $S_{c_i}$ is an element of the vector of calculated distances, $S_{m_i}$ is an element of the vector of measured distances, and $z$ is the objective function argument (the vector of unknown coordinates).
The elements of the vector $S_c$ are calculated by Formula (16):
$$S_c = \sqrt{(X_E - X_S)^2 + (Y_E - Y_S)^2},$$
where $X_E, Y_E$ are the coordinates of the end point of the side and $X_S, Y_S$ are the coordinates of the starting point of the side.
Traditionally, surveying and geodetic networks are adjusted using a parametric method. The essence of this method is:
(1)
drawing up parametric observation (relation) equations;
(2)
linearization of these equations by expanding into a Taylor series taking into account only first-order derivatives;
(3)
solution of the obtained systems of equations based on the least squares method.
As can be seen, the traditional adjustment method does not allow the use of objective functions other than that of the least squares method. With nonlinear programming methods, such a possibility appears. Therefore, the network equalization (Figure 1) was also performed using an objective function that minimizes the sum of the moduli of the distance corrections. This objective function is expressed by Formula (17):
$$f(z) = \sum_{i=1}^{n} \left[ p_i \left| S_{c_i} - S_{m_i} \right| \right].$$
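As a sketch of how objective functions (15)–(17) can be assembled in code: the packing of the unknown coordinates of points 1–5 into a single argument vector z, and the helper names below, are assumptions of this illustration, not something fixed in the paper.

```python
import numpy as np

# Fixed (starting) points A, B, C from Table 1 (X, Y in metres).
FIXED = {"A": (645.112, 426.229), "B": (1028.568, 857.277), "C": (740.339, 1333.496)}

def build_objective(lines, s_meas, weights=None, use_modules=False):
    """Return f(z) by Formula (15) (use_modules=False) or Formula (17)
    (use_modules=True) for the trilateration network.

    lines  -- list of (start, end) point names, e.g. [("C", "2"), ("B", "2"), ...]
    s_meas -- measured lengths of the corresponding sides (Table 2)
    z      -- packs the unknown coordinates of points 1-5 as
              [X1, Y1, X2, Y2, ..., X5, Y5] (a convention of this sketch).
    """
    p = np.ones(len(lines)) if weights is None else np.asarray(weights, dtype=float)
    s_meas = np.asarray(s_meas, dtype=float)

    def coords(name, z):
        if name in FIXED:
            return np.asarray(FIXED[name])
        i = int(name) - 1                      # determined points are named "1".."5"
        return z[2 * i: 2 * i + 2]

    def f(z):
        z = np.asarray(z, dtype=float)
        s_calc = np.array([np.linalg.norm(coords(e, z) - coords(s, z))  # Formula (16)
                           for s, e in lines])
        r = s_calc - s_meas
        return float(np.sum(p * np.abs(r)) if use_modules else np.sum(p * r**2))
    return f
```

For the network in Figure 1, lines would list the fourteen sides of Table 2 and s_meas their measured lengths; the same builder covers both the least squares criterion (15) and the least modules criterion (17) via the use_modules flag.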
The coordinates of the starting points are presented in Table 1.
The lengths of lines are presented in Table 2. Due to the uniformity of the measurements, the weights p i of all measured sides were taken as equal to one.

3. Results

Applying the second-order Newton's method to find the minimum of the objective function (15), the search process was carried out in the MathCAD15 environment. A part of the program is shown in Figure 2.
The preliminary values of the coordinates of the points, as well as the data obtained during the iterative process when using two methods, are presented in Table 3.
The values of the preliminary coordinates were taken specifically far from the minimum point of the objective function (15). This was done to test the main advantage of the second-order Newton’s method, namely, the small dependence of the convergence of the method on the preliminary values of the determined parameters, in comparison with gradient methods. The preliminary values of the parameters are presented in Table 4. Table 4 also presents the data obtained during the study.
The use of nonlinear programming methods makes it possible to perform equalization, not only using the objective function of the least squares method (15) but also using the objective function of the least modulus method (17). The data obtained using the new objective function are presented in Table 5.

4. Discussion

As can be seen from Table 3, the coordinates of the determined points of the network were obtained in three approximations using the second-order Newton's method. The criterion for stopping the search process was the value $\varepsilon$; when solving the problem, $\varepsilon$ was taken equal to 0.001 m. Figure 3 shows a simplified visualization of the iterative process performed using the two methods. After analyzing Table 3 and Figure 3, we can conclude that Newton's method is more efficient than the gradient method.
When using Newton’s method, the iterative process is not built according to a linear law, as in the gradient method. The use of second-order partial derivatives allows us to talk about a quadratic approximation of the objective function, which in turn reduces the number of iterations. From Figure 3, it can be seen that, when approaching the minimum point of function (15), the size of the iteration step in the gradient method decreases, which in turn increases the number of calculations.
According to the data presented in Table 3, the values of the coordinates of the points calculated by the two methods differ. This can be explained by the fact that, in the course of linearization according to Newton's method, the second partial derivatives of the function are used, which describe the curvature of the function and allow the smallest value along the curve to be found. For the gradient method to work, only the values of the first derivatives are required (geometrically, a tangent line), so the minimum is sought along a straight-line direction rather than along the curve itself.
As mentioned above, the main advantage of Newton's method is its lesser dependence on the choice of preliminary values of the sought parameters, in comparison with the gradient methods. The coordinates of the network points were calculated using preliminary values that were deliberately set far from the minimum point. Table 4 shows the data obtained during the iterative process. The number of iterations increased for both methods, but Newton's method required an order of magnitude fewer iterations than the gradient method. The large discrepancy (more than 100 m) between the preliminary values and the obtained coordinates still affected the accuracy of the latter, since the value of the objective function increased.
The use of nonlinear programming methods made it possible to calculate the coordinates of points not only using the objective function (15), but also using the objective function of the method of least modules (17). The data obtained during the iterative process are presented in Table 5. It is worth noting that, despite the preliminary values being close to the minimum point, the number of iterations increased for both methods compared to the cases where objective function (15) was used. The objective function values also increased. Analyzing the data presented in Table 5, we can say that the gradient method most likely found a local minimum, since the value of the objective function is rather large.
Today it is necessary to develop an algorithm whose use allows the user to obtain a correct answer with high accuracy, in a short time and without a strong dependence on the preliminary values of the determined parameters. The second-order Newton's method has such resources: due to the use of the second partial derivatives of the objective function, the problem is solved faster and with a smaller number of approximations than with first-order methods.
However, in the course of the computational experiment, it was found that this method does not give the correct solution for all preliminary values of the parameters; sometimes the method simply does not work. This is primarily due to the fact that the Hessian matrix indicates the direction of decrease of the function only if it is positive definite. Therefore, the user needs to prepare the problem for the solution, namely, to calculate preliminary values that do not make the Hessian matrix negative definite. If this condition is not met, the method may diverge and lose its main advantage, the speed of the solution. Using only direct search methods expands the range of admissible preliminary values, since these methods place no restrictions on the sign of the derivatives (derivatives are not used in the iterative process); however, more conditions have to be set for calculating different values of the objective function, which complicates the search process and increases the search time.
The authors of the article propose the creation of a software algorithm based on the second-order Newton's method and on direct search methods, in particular the Powell method and the Davis–Sven–Kempy (DSK) method. The use of this software algorithm enhances the positive aspects of the second-order Newton's method and reduces its dependence on the preliminary values of the determined parameters. It is convenient for the user to have an algorithm in which the number of iterations does not depend strongly on the preliminary values of the parameters being determined. The main reason for combining the second-order Newton's method with direct search methods is to increase the potential of the method in terms of the speed of the optimization process. A combination of direct search methods, namely the DSK method and the Powell method, was used to create the modified second-order Newton's method.
The essence of the algorithm based on a combination of the DSK method and the Powell method is as follows [48]:
  • The objective function $F(x_1, x_2, \ldots, x_n)$, depending on the parameters $x_1, x_2, \ldots, x_n$ to be determined, is set.
  • The preliminary value of the parameter $x_1$ and the increment step $\Delta x_1$ are set.
  • The increment $\Delta x_1$ is added to and subtracted from the first parameter $x_1$ only; the rest of the parameters are also given preliminary values but remain unchanged.
  • The values of the objective function are calculated with the changed parameter, $F(x_1 - \Delta x_1, x_2, \ldots, x_n)$ and $F(x_1 + \Delta x_1, x_2, \ldots, x_n)$.
  • The new value of the determined parameter is calculated by Formula (18):
    $$x_1^* = x_1 + \frac{\Delta x_1 \left( F(x_1 - \Delta x_1, x_2, \ldots, x_n) - F(x_1 + \Delta x_1, x_2, \ldots, x_n) \right)}{2 \left( F(x_1 - \Delta x_1, x_2, \ldots, x_n) - 2 F(x_1, x_2, \ldots, x_n) + F(x_1 + \Delta x_1, x_2, \ldots, x_n) \right)}.$$
  • The next parameter $x_2$ is then varied and the new value of the function is calculated, with the value $x_1^*$ substituted into the objective function instead of the parameter $x_1$.
In general, this method may require a sufficiently large number of iterations to find the optimal solution. However, its main advantage is that its solution area is much larger in comparison with the classical second-order Newton’s method.
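A compact sketch of this coordinate-wise quadratic interpolation is given below. It assumes the reconstruction of Formula (18) with the term 2F(x_1, x_2, …, x_n) in the denominator; the function and variable names are illustrative.

```python
import numpy as np

def dsk_powell_refine(F, x, dx, sweeps=2):
    """Coordinate-wise quadratic interpolation, Formula (18): each parameter
    in turn is replaced by the vertex of the parabola through
    F(x_i - dx_i), F(x_i), F(x_i + dx_i); the other parameters stay fixed."""
    x = np.asarray(x, dtype=float).copy()
    dx = np.asarray(dx, dtype=float)
    for _ in range(sweeps):
        for i in range(x.size):
            xp, xm = x.copy(), x.copy()
            xp[i] += dx[i]
            xm[i] -= dx[i]
            f0, fp, fm = F(x), F(xp), F(xm)
            denom = 2.0 * (fm - 2.0 * f0 + fp)
            if abs(denom) > 1e-12:        # skip the update if the parabola degenerates
                x[i] = x[i] + dx[i] * (fm - fp) / denom
    return x
```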
The authors of the article have developed the following algorithm to minimize the main disadvantages of the Newton’s method algorithm:
  • Step 1: The user creates an objective function $F(x_1, x_2, \ldots, x_n)$ and chooses the criterion under which its minimum will be sought (the method of least squares or the method of least modules); the least squares method is recommended for geodetic tasks;
  • Step 2: Any preliminary values of the parameters to be determined are set (it is recommended to set either values known to be close to the true ones or to accept all parameters as equal to zero);
  • Step 3: Using the methods of quadratic approximation, namely the Powell–DSK method, the preliminary values are refined in two approximations according to Formula (18);
  • Step 4: The Hessian matrix is created and its positive definiteness is checked; if the condition is met, the refined preliminary values are used in the next step. If the Hessian matrix is not positive definite, step 3 is performed again;
  • Step 5: The refined preliminary values are used in the second-order Newton's method; the matrix of the first derivatives and the matrix of the second derivatives are formed;
  • Step 6: An iterative process is performed according to Formula (5) until the stopping criterion, chosen by the user, is met (e.g., Formula (7));
  • Step 7: The accuracy of the obtained parameter values is evaluated; an inverse weight matrix is used to estimate the accuracy of the obtained parameters (a sketch of these steps is given after this list).
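Putting Steps 1–7 together, a minimal sketch of the proposed modified method could look as follows. It reuses the num_gradient, num_hessian and dsk_powell_refine sketches given earlier and is not the authors' actual MathCAD implementation; the accuracy assessment of Step 7 is sketched separately after Formula (19).

```python
import numpy as np

def modified_newton(F, x0, dx, eps=1e-3, max_iter=100):
    """Steps 1-7 in outline: refine the preliminary values by the Powell-DSK
    scheme until the numerical Hessian becomes positive definite (Steps 3-4),
    then iterate by Formula (5) until criterion (7) is met (Steps 5-6).
    Relies on num_gradient, num_hessian and dsk_powell_refine defined above."""
    x = np.asarray(x0, dtype=float)
    for _ in range(50):                         # guard against endless refinement
        x = dsk_powell_refine(F, x, dx, sweeps=2)
        try:
            np.linalg.cholesky(num_hessian(F, x))
            break                               # Hessian is positive definite
        except np.linalg.LinAlgError:
            continue
    for _ in range(max_iter):
        g = num_gradient(F, x)
        H = num_hessian(F, x)
        x_new = x - np.linalg.solve(H, g)       # Formula (5)
        if np.linalg.norm(x_new - x) < eps:     # stopping criterion (7)
            return x_new
        x = x_new
    return x
```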
When performing the equalization of surveying and geodetic measurements using nonlinear programming methods, difficulties arise in performing the accuracy assessment, since the iterative process of the first and second-order methods does not require compiling a matrix of normal equations of the unknowns; therefore, the inverse weight matrix $Q$ cannot be found in the usual way. The assessment of the accuracy of the data obtained with nonlinear programming methods was given attention in the works of G.V. Makarov [49], V.I. Mitskevich [35,50,51,52] and Ch.N. Zheltko [41]. V.I. Mitskevich notes that a fragment of the inverse weight matrix can be obtained by the generalized method of G.V. Makarov in [49,53].
The procedure for evaluating the accuracy of the coordinates of the points of a mine surveying and geodetic network obtained by nonlinear programming methods is given in [54,55,56,57,58,59]. The accuracy estimation algorithm for nonlinear programming methods is described in detail in [60]. V.I. Mitskevich asserts in his works [50,52,60] that, when optimizing with the objective function of the least squares method according to the second-order Newton's method, the inverse weight matrix can be composed from the Hessian matrix according to Formula (19):
$$Q = 2 H^{-1} = 2 \begin{pmatrix} \frac{\partial^2 f}{\partial x_1 \partial x_1} & \frac{\partial^2 f}{\partial x_1 \partial x_2} & \cdots & \frac{\partial^2 f}{\partial x_1 \partial x_n} \\ \frac{\partial^2 f}{\partial x_2 \partial x_1} & \frac{\partial^2 f}{\partial x_2 \partial x_2} & \cdots & \frac{\partial^2 f}{\partial x_2 \partial x_n} \\ \vdots & \vdots & \ddots & \vdots \\ \frac{\partial^2 f}{\partial x_n \partial x_1} & \frac{\partial^2 f}{\partial x_n \partial x_2} & \cdots & \frac{\partial^2 f}{\partial x_n \partial x_n} \end{pmatrix}^{-1}, \qquad f = f(x_1, \ldots, x_n).$$
In the article, to assess the accuracy of the obtained values of the parameters determined by the second-order Newton’s method, the inverse weight matrix will be compiled according to Formula (19).
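A short sketch of this accuracy assessment is given below. It assumes the usual geodetic convention for the unit-weight error, mu = sqrt(f(z)/(n − k)) with n measured distances and k determined parameters, which the paper does not state explicitly, and it reuses the num_hessian routine sketched earlier.

```python
import numpy as np

def accuracy_estimate(F, x_hat, n_meas):
    """Inverse weight matrix by Formula (19), Q = 2 * H^{-1}, evaluated at the
    adjusted parameters x_hat, plus RMS errors of the individual coordinates."""
    x_hat = np.asarray(x_hat, dtype=float)
    H = num_hessian(F, x_hat)                        # numerical Hessian (sketched above)
    Q = 2.0 * np.linalg.inv(H)                       # Formula (19)
    mu = np.sqrt(F(x_hat) / (n_meas - x_hat.size))   # unit-weight RMS error (assumed convention)
    rms = mu * np.sqrt(np.diag(Q))                   # RMS errors of the determined coordinates
    return Q, mu, rms
```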
The modified Newton's method was applied with preliminary values of the coordinates of the determined points for which the classical second-order Newton's method does not work, since it diverges. The data obtained during the iterative process are presented in Table 6. Table 6 also presents data that make it possible to assess the accuracy of the obtained coordinate values, namely, the root-mean-square errors of the coordinates of the determined points and the root-mean-square error of the unit of weight.
The modified second-order Newton method should also be compared with the Broyden–Fletcher–Goldfarb–Shanno (BFGS) method. It should be noted that, in contrast to the classical second-order Newton method, the Hessian matrix is not calculated directly in quasi-Newton methods, that is, there is no need to find second-order partial derivatives; instead, the Hessian is approximated from the previous iterations. One of the most effective quasi-Newton methods is the BFGS method; the advantage of this algorithm is the simplicity of its software implementation. However, the main disadvantage of this method is the increase in the number of iterations needed to find the minimum of the objective function. This disadvantage is mitigated by the performance of modern computers: when solving simple optimization problems, the increase in the number of approximations is not noticeable in time for the user. The data obtained during the iterative process are presented in Table 7.

5. Conclusions

It should be noted that the methods that use derivatives of higher orders in the search process have a wide range of applications in geodesy and surveying. To test the possibility of implementing the second-order Newton's method in surveying and geodetic production, a trilateration network was equalized. In the course of solving this problem, the main advantages of the method were confirmed, namely, a high convergence rate (compared to methods using first-order derivatives) and the possibility of using rough preliminary values of the parameters in the iterative process. On the other hand, the main disadvantage of the method was also confirmed: a rather complex computational process (formation of the Hessian matrix and control of its sign). To reduce the influence of this disadvantage, a software algorithm was created based on the second-order Newton's method and on direct search methods, in particular the Powell method and the Davis–Sven–Kempy (DSK) method. The use of this software algorithm enhances the positive aspects of the second-order Newton's method and reduces its dependence on the preliminary values of the determined parameters; it is convenient for the user to have an algorithm in which the number of iterations does not depend strongly on the preliminary values of the parameters being determined. The main reason for combining the second-order Newton's method with direct search methods is to increase the potential of the method in terms of the speed of the optimization process. The prospect of further research is to expand the range of problems to be solved by the modified second-order Newton's method and to study its efficiency and productivity in new conditions.

Author Contributions

Conceptualization, investigation, methodology, and software, D.B. Formal analysis, writing—review and editing, M.M. All authors have read and agreed to the published version of the manuscript.

Funding

The research was carried out at the expense of a subsidy for the implementation of the state task in the field of scientific activity for 2021 № FSRW-2020-0014.

Data Availability Statement

Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Abu, D.I. Mathematical Processing and Analysis of the Accuracy of Ground Spatial Geodetic Networks by Methods of Nonlinear Programming and Linear Algebra. Ph.D. Thesis, Polotsk State University, Novopolotsk, Belarus, 1998; 142p. [Google Scholar]
  2. Nikonov, A.; Kosarev, N.; Solnyshkova, O.; Makarikhina, I. Geodetic base for the construction of ground-based facilities in a tropical climate. In Proceedings of the E3S Web of Conferences; EDP Sciences: Les Ulis, France, 2019; Volume 91, p. 7019. [Google Scholar] [CrossRef]
  3. Liu, B.; Wei, Y.; Zhi, S.; Zhao, W.; Lin, J. Optimization of location of robotic total station in 3D deformation monitoring of multiple points. In Information Technology in Geo-Engineering; Springer Series in Geomechanics and Geoengineering; Springer: Cham, Switzerland, 2018; pp. 730–737. [Google Scholar] [CrossRef]
  4. Liu, G.H. Recovering 3D shape and motion from image sequences using affine approximation. In Proceedings of the 2009 Second International Conference on Information and Computing Science, Manchester, UK, 21–22 May 2009; IEEE: Piscataway, NJ, USA, 2009; Volume 2, pp. 349–352. [Google Scholar] [CrossRef]
  5. Suzuki, T. Position and attitude estimation by multiple GNSS receivers for 3D mapping. In Proceedings of the 29th International Technical Meeting of the Satellite Division of The Institute of Navigation (ION GNSS + 2016), Portland, OR, USA, 12–16 September 2016; Volume 2, pp. 1455–1464. [Google Scholar]
  6. Vasilieva, N.V.; Boykov, A.V.; Erokhina, O.O.; Trifonov, A.Y. Automated digitization of radial charts. J. Min. Inst. 2021, 247, 82–87. [Google Scholar] [CrossRef]
  7. Kopylova, N.S.; Starikov, I.P. Methods of displaying geospatial information using cartographic web technologies for the arctic region and the continental shelf. Geod. Kartogr. 2021, 971, 15–22. [Google Scholar] [CrossRef]
  8. Kovyazin, V.F.; Lepikhina, O.Y.; Zimin, V.P. Prediction of Cadastral Value of Land in a Single-Industry Town by the Regression Model; Bulletin of the Tomsk Polytechnic University; Kovyazin, V.F., Ed.; Tomsk Polytechnic University, Geo Assets Engineering: Tomsk Oblast, Russia, 2017; Volume 328, pp. 6–13. [Google Scholar]
  9. Rybkina, A.M.; Demidova, P.M.; Kiselev, V.A. Analysis of the application of deterministic interpolation methods for land cadastral valuation of low-rise residential development of localities. Int. J. Appl. Eng. Res. 2017, 12, 10834–10840. [Google Scholar]
  10. Rybkina, A.M.; Demidova, P.M.; Kiselev, V. A working-out of the geostatistical model of mass cadastral valuation of urban lands evidence from the city Vsevolozhsk. Int. J. Appl. Eng. Res. 2016, 11, 11631–11638. [Google Scholar]
  11. Kuzin, A.A.; Kovshov, S.V. Accuracy evaluation of terrain digital models for landslide slopes based on aerial laser scanning results. Ecol. Environ. Conserv. 2017, 23, 908–914. [Google Scholar]
  12. Pravdina, E.A.; Lepikhina, O.J. Laser scanner data capture time management. ARPN J. Eng. Appl. Sci. 2017, 12, 1649–1661. [Google Scholar]
  13. Demenkov, P.A.; Goldobina, L.A.; Trushko, O.V. Geotechnical barrier options with changed geometric parameters. Int. J. GEOMATE 2020, 19, 58–65. [Google Scholar] [CrossRef]
  14. Demenkov, P.A.; Goldobina, L.A.; Trushko, V.L. The implementation of building information modeling technologies in the training of bachelors and masters at Saint Petersburg mining University. ARPN J. Eng. Appl. Sci. 2020, 15, 803–813. [Google Scholar]
  15. Goldobina, L.A.; Demenkov, P.A.; Trushko, O.V. Ensuring the safety of construction and installation works during the construction of buildings and structures. J. Min. Inst. 2019, 239, 583–595. [Google Scholar] [CrossRef] [Green Version]
  16. Katuntsov, E.; Kosarev, O. Correlation processing of radar signals with multilevel quantization. Res. J. Appl. Sci. 2016, 11, 624–627. [Google Scholar]
  17. Kochneva, A.A.; Kazantsev, A.I. Justification of quality estimation method of creation of digital elevation models according to the data of airborne laser scanning when designing the motor ways. J. Ind. Pollut. Control. 2017, 33, 1000–1006. [Google Scholar]
  18. Karavaichenko, M.G.; Gazaleev, L.I. Numerical modeling of a double-walled spherical reservoir. J. Min. Inst. 2020, 245, 561–568. [Google Scholar] [CrossRef]
  19. Gusev, V.N.; Maliukhina, E.M.; Volokhov, E.M.; Tyulenev, M.A.; Gubin, M.Y. Assessment of development of water conducting fractures zone in the massif over crown of arch of tunneling (construction) climate. Int. J. Civ. Eng. Technol. 2019, 10, 635–643. [Google Scholar]
  20. Ivanik, S.A.; Ilyukhin, D.A. Hydrometallurgical technology for gold recovery from refractory gold-bearing raw materials and the solution to problems of subsequent dehydration processes. J. Ind. Pollut. Control. 2017, 33, 891–897. [Google Scholar]
  21. Pan, G.; Zhou, Y.; Guo, W. Global optimization algorithm in 3D datum transformation of industrial measurement. Geomat. Inf. Sci. Wuhan Univ. 2013, 39, 85–89. [Google Scholar] [CrossRef]
  22. Kozak, P.M.; Lapchuk, V.P.; Kozak, L.V.; Ivchenko, V.M. Optimization of video camera disposition for the maximum calculation precision of coordinates of natural and artificial atmospheric objects in stereo observations. Kinemat. Phys. Celest. Bodies 2018, 34, 313–326. [Google Scholar] [CrossRef]
  23. Easa, S.M. Survey review space resection in photogrammetry using collinearity condition without linearization. Surv. Rev. 2010, 42, 40–49. [Google Scholar] [CrossRef]
  24. Shevchenko, G.G. About adjustment of spatial geodetic networks by the search method. Geod. Cartogr. 2019, 80, 10–20. [Google Scholar] [CrossRef]
  25. Shevchenko, G.G.; Bryn, M.Y. Adjustments of Correlated Values by Search Method. In IOP Conference Series Materials Science and Engineering; CATPID-2019; IOP Publishing: Kislovodsk, Russia, 2019; Volume 698, p. 44019. [Google Scholar] [CrossRef]
  26. Maksimov, Y.A.; Fillipovskaya, E.A. Algorithms for Solving Nonlinear Programming Problems; M. MEPhI: Moscow, Russia, 1982; 52p. [Google Scholar]
  27. Himmelblau, D.M. Applied Nonlinear Programming; McGraw-Hill: New York, NY, USA, 1972; 532p. [Google Scholar]
  28. Yan, J.; Tiberius, C.; Bellusci, G.; Janssen, G. Feasibility of Gauss-Newton method for indoor positioning. In Proceedings of the IEEE/ION Position, Location and Navigation Symposium, Monterey, CA, USA, 6–8 May 2008; pp. 660–670. [Google Scholar]
  29. Vasil’ev, A.S.; Goncharov, A.A. Special strategy of treatment of difficulty-profile conical screw surfaces of single-screw compressors working bodies. J. Min. Inst. 2019, 235, 60–64. [Google Scholar] [CrossRef]
  30. Dennis, D. Numerical Methods of Unconstrained Optimization and Solution of Nonlinear Equations; Applied Mathematics: Philadelphia, PA, USA, 1988; 440p. [Google Scholar]
  31. Kantorovich, L.V. On the Newton’s Method; Works of the V.A. Steklov Mathematic Institute: Moscow, Russia, 1949; Volume 28, pp. 104–144. [Google Scholar]
  32. Shnitko, S.G. Algorithms for Equalization and Estimating the Accuracy of Geodetic Networks by Nonlinear Methods; PSU Bulletin; State College: Harrisburg, PA, USA, 2012; Volume 8, pp. 133–135. [Google Scholar]
  33. Ortega, D. Iterative Methods for Solving Nonlinear Systems of Equations with Many Unknowns; Mir: Moscow, Russia, 1975; 558p. [Google Scholar]
  34. Bykasov, D.A.; Zubov, A.V. Application of Newton’s Method of the Second-Order in Solving Surveying and Geodetic Problems; MineSurveying Bulletin: Moscow, Russia, 2020; Volume 5, pp. 22–26. [Google Scholar]
  35. Stroner, M.; Michal, O.; Enhanced, R. Maximal precision increment method for network measurement optimization. In Advances and Trends in Geodesy, Cartography and Geoinformatics; CRC Press: Boca Raton, FL, USA, 2018; pp. 101–106. [Google Scholar] [CrossRef]
  36. Sviridenko, A.B. A priori correction in Newton’s optimization methods. Comput. Res. Model. 2015, 7, 835–863. [Google Scholar] [CrossRef]
  37. Gubaydullina, R.; Kornilov, Y.N. The application of similarity theory elements in geodesy. In Topical Issues of Rational Use of Natural Resources; CRC Press: Boca Raton, FL, USA, 2019; Volume 1, pp. 183–188. [Google Scholar] [CrossRef]
  38. Nemirovsky, A.S.; Yudin, D.B. Information Complexity and Efficiency of Optimization Methods; John Wiley and Sons: Hoboken, NJ, USA, 1976; 105p. [Google Scholar]
  39. Zubov, A.V.; Pavlov, N.S. Assessment of the Stability of Support and Deformation Surveying and Geodetic Networks; Surveyor Bulletin: Moscow, Russia, 2013; Volume 2, pp. 21–23. [Google Scholar]
  40. Zubov, A.V.; Pavlov, N.S. The use of the gradient method in solving geodetic problems. In Proceedings of the Interuniversity Scientific-Practical Conference. SPb. A.F. Mozhaysky Military Space Academy, Saint Petersburg, Russia, 20 September 2013; pp. 90–93. [Google Scholar]
  41. Zheltko, C.N. Search Method of Equalization and Estimation of the Accuracy of Unknowns in the Least Squares Method: Monograph; FSBEI HE “KubSTU”: Krasnodar, Russia, 2016; 103p. [Google Scholar]
  42. Mazurov, B.T. Mathematical Modeling in the Study of Geodynamics; Sibprint Agency: Novosibirsk, Russia, 2019; 360p. [Google Scholar]
  43. Shu, C.; Li, F.; Wang, S. Improving algorithm to compute geodetic coordinates. In Proceedings of the 2008 International Workshop on Education Technology and Training & 2008 International Workshop on Geoscience and Remote Sensing, Shanghai, China, 21–22 December 2008; IEEE: Piscataway, NJ, USA, 2008; Volume 2, pp. 340–343. [Google Scholar]
  44. Baran, P.I. Investigation of the accuracy of solving geodetic problems by methods of mathematical programming. Eng. Geod. 1987, 30, 5–8. [Google Scholar]
  45. Budo, A.Y. Comparative Analysis of the Equalization Results Obtained Using Two Polar Methods when Processing Planned Geodetic Networks; PSU Bulletin; State College: Harrisburg, PA, USA, 2010; Volume 12, pp. 115–122. [Google Scholar]
  46. Sholomitskii, A.; Lagutina, E. Calculation of the accuracy of special geodetic and mine surveying networks. ISTC Earth Sci. 2019, 272, 10. [Google Scholar] [CrossRef]
  47. Men'shikov, S.N.; Dzhaljabov, A.A.; Vasiliev, G.G.; Leonovich, I.A.; Ermilov, O.M. Spatial models developed using laser scanning at gas condensate fields in the northern construction-climatic zone. J. Min. Inst. 2019, 238, 430–437. [Google Scholar] [CrossRef] [Green Version]
  48. Kougia, V.A.; Kanashin, N.V. Determination of the connection elements between three-dimensional coordinate systems by the gradient method. In News of Higher Educational Institutions; Geodesy and Aerial Photography: Moscow, Russia, 2008; Volume 2, pp. 22–28. [Google Scholar]
  49. Makarov, G.V.; Khudyakov, G.I. Use of affinne coordinate conversion at the local geodetic surveys with applying of GPS-receivers. J. Min. Inst. 2013, 204, 15–18. [Google Scholar]
  50. Mitskevich, V.I.; Abu, D.I. Estimation of the accuracy of spatial serifs by nonlinear programming methods. Geod. Cartogr. 1994, 1, 22–24. [Google Scholar]
  51. Mitskevich, V.I. Mathematical Processing of Geodetic Networks by Nonlinear Programming Methods; PSU: Novopolotsk, Russia, 1997; 64p. [Google Scholar]
  52. Mitskevich, V.I.; Yaltyhov, V.V. Equalization and assessment of the accuracy of geodetic serifs under various criteria of optimality. Geod. Cartogr. 1994, 7, 14–16. [Google Scholar]
  53. Shevchenko, G.G. Development of technology for geodetic monitoring of buildings and structures by the method of free stationing using the search method of nonlinear programming. Ph.D. Thesis, Emperor Alexander I St. Petersburg State Transport University SPb, Saint Petersburg, Russia, 2020; 212p. [Google Scholar]
  54. Eliseeva, N.N. Application of search methods in solving nonlinear optimization problems. In Collection of Materials of the XIV International Scientific-Practical Conference Dedicated to the 25th Anniversary of the Constitution of the Republic of Belarus “Modernization of the Economic Mechanism through the Prism of Economic, Legal, Social and Engineering Approaches”; BSTU: Minsk, Belarus, 2019; pp. 364–369. [Google Scholar]
  55. Zelenkov, G.A.; Khakimova, A.B. Approach to the development of algorithms for Newton’s optimization methods, software imple-mentation and comparison of efficiency. Comput. Res. Model. 2013, 5, 367–377. [Google Scholar] [CrossRef] [Green Version]
  56. Mikheev, S.E. Convergence of Newton’s method on various classes of functions. Comput. Technol. 2005, 10, 72–86. [Google Scholar]
  57. Chen, C.; Bian, S.; Li, S. An Optimized Method to Transform the Cartesian to Geodetic Coordinates on a Triaxial Ellipsoid; Chen, C., Ed.; Studia Geophysica et Geodaetica: Prague, Czech Republic, 2019; Volume 63, pp. 367–389. [Google Scholar]
  58. Kazantsev, A.I.; Kochneva, A.A. Ground of the geodesic control method of deformations of the land surface when protecting the buildings and structures under the conditions of urban infill. Ecol. Environ. Conserv. 2017, 23, 876–882. [Google Scholar]
  59. Kuzin, A.A.; Valkov, V.A.; Kazantsev, A.I. Calibration of digital non-metric cameras for measuring works. JP Conf. Ser. 2018, 1118, 012022. [Google Scholar] [CrossRef] [Green Version]
  60. Mitskevich, V.I.; Yaltyhov, V.V. Peculiarities of equalization of geodetic networks by the method of least modules. Geod. Cartogr. 1997, 5, 23–24. [Google Scholar]
Figure 1. Network configuration.
Figure 2. Implementation of the second-order Newton's method in MathCAD15.
Figure 3. Simplified visualization of the iterative process of the two compared methods.
Table 1. The coordinates of the starting points.

Item | X, m | Y, m
A | 645.112 | 426.229
B | 1028.568 | 857.277
C | 740.339 | 1333.496
Table 2. Line lengths.

No. | Line Name | Length, m
1 | C–2 | 492.886
2 | B–2 | 448.178
3 | A–2 | 445.726
4 | A–3 | 512.201
5 | 3–2 | 504.961
6 | 2–4 | 733.414
7 | 2–1 | 523.911
8 | 1–C | 534.601
9 | 3–1 | 654.977
10 | 3–4 | 482.249
11 | 4–1 | 456.648
12 | 5–A | 617.706
13 | 5–3 | 322.978
14 | 5–4 | 700.240
Table 3. Data for comparing the two methods with close setting of preliminary parameter values (objective function by the method of least squares).

Item | Preliminary Coordinates (X, m; Y, m) | Second-Order Newton's Method (X, m; Y, m) | Conjugate Gradient Method (X, m; Y, m)
1 | 210.000; 1235.000 | 213.736; 1241.368 | 213.763; 1241.430
2 | 575.000; 860.000 | 580.501; 867.247 | 580.515; 867.261
3 | 150.000; 580.000 | 159.346; 588.653 | 159.363; 588.730
4 | −135.000; 950.000 | −146.870; 961.206 | −146.851; 961.283
5 | 40.000; 285.000 | 43.240; 287.266 | 43.042; 287.478
NoI ¹ | | 3 | 389
OF ² | 859.468 | 1.318 × 10⁻⁷ | 2.769 × 10⁻²
CT ³ | | 25.5 s | 48.8 s
¹ Number of iterations. ² Objective function value. ³ Computational time.
Table 4. Data for comparing the two methods with rough presetting (objective function according to the method of least squares).

Item | Preliminary Coordinates (X, m; Y, m) | Second-Order Newton's Method (X, m; Y, m) | Conjugate Gradient Method (X, m; Y, m)
1 | 10.000; 10.000 | 213.737; 1241.370 | 213.517; 1242.222
2 | 10.000; 10.000 | 580.501; 867.248 | 580.685; 867.828
3 | 10.000; 10.000 | 159.347; 588.656 | 159.880; 589.763
4 | 10.000; 10.000 | −146.869; 961.209 | −146.365; 962.073
5 | 10.000; 10.000 | 43.237; 287.272 | 43.290; 289.190
NoI ¹ | | 11 | 586
OF ² | 859.468 | 2.245 × 10⁻⁵ | 2.413
CT ³ | | 42.5 s | 59.8 s
¹ Number of iterations. ² Objective function value. ³ Computational time.
Table 5. Data for comparing the two methods with precise presetting values (objective function according to the method of least modulus).

Item | Preliminary Coordinates (X, m; Y, m) | Second-Order Newton's Method (X, m; Y, m) | Conjugate Gradient Method (X, m; Y, m)
1 | 210.000; 1235.000 | 213.736; 1241.367 | 213.762; 1241.379
2 | 575.000; 860.000 | 580.500; 867.246 | 580.515; 867.246
3 | 150.000; 580.000 | 159.346; 588.652 | 159.352; 589.665
4 | −135.000; 950.000 | −146.869; 961.205 | −146.406; 961.591
5 | 40.000; 285.000 | 43.231; 287.275 | 43.300; 289.085
NoI ¹ | | 103 | 1233
OF ² | 859.468 | 1.047 × 10⁻³ | 1.025
CT ³ | | 49.1 s | 100.1 s
¹ Number of iterations. ² Objective function value. ³ Computational time.
Table 6. The data obtained during the equalization of the trilateration network using the modified second-order Newton's method.

Item | Preliminary Coordinates (X, m; Y, m) | Modified Second-Order Newton's Method (X, m; Y, m)
1 | 0.000; 0.000 | 213.736; 1241.368
2 | 0.000; 0.000 | 580.501; 867.247
3 | 0.000; 0.000 | 159.346; 589.653
4 | 0.000; 0.000 | −146.870; 961.206
5 | 0.000; 0.000 | 43.240; 287.266
NoI ¹ | | 28
OF ² | | 5.730 × 10⁻⁴
CT ³ | | 98.7 s
RS ⁴ | | m_x1/m_y1, mm: 1.047 × 10⁻³ / 5.180 × 10⁻⁴
 | | m_x2/m_y2, mm: 4.080 × 10⁻⁴ / 3.980 × 10⁻⁴
 | | m_x3/m_y3, mm: 5.090 × 10⁻⁴ / 5.300 × 10⁻⁴
 | | m_x4/m_y4, mm: 5.400 × 10⁻⁴ / 6.870 × 10⁻⁴
 | | m_x5/m_y5, mm: 6.56 × 10⁻⁴ / 7.84 × 10⁻⁴
¹ Number of iterations. ² Objective function value. ³ Computational time. ⁴ Root mean square errors of the coordinates of the determined points.
Table 7. The data obtained during the equalization of the trilateration network using the modified second-order Newton's method and the Broyden–Fletcher–Goldfarb–Shanno method.

Item | Preliminary Coordinates (X, m; Y, m) | Modified Second-Order Newton's Method (X, m; Y, m) | BFGS (X, m; Y, m)
1 | 0.000; 0.000 | 213.736; 1241.368 | 288.806; 1070.175
2 | 0.000; 0.000 | 580.501; 867.247 | 652.085; 777.799
3 | 0.000; 0.000 | 159.346; 589.653 | 201.616; 380.299
4 | 0.000; 0.000 | −146.870; 961.206 | −63.580; 810.951
5 | 0.000; 0.000 | 43.240; 287.266 | 71.944; 39.043
NoI ¹ | | 28 | 144
OF ² | 859.468 | 5.730 × 10⁻⁴ | 44.863 × 10³
CT ³ | | 98.7 s | 122.1 s
¹ Number of iterations. ² Objective function value. ³ Computational time.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
