Article

Algebraic and Geometric Characterizations of Double-Cross Matrices of Polylines

Databases and Theoretical Computer Science Research Group, Hasselt University and Transnational University of Limburg, Agoralaan, Building D, 3590 Diepenbeek, Belgium
*
Author to whom correspondence should be addressed.
ISPRS Int. J. Geo-Inf. 2016, 5(9), 152; https://0-doi-org.brum.beds.ac.uk/10.3390/ijgi5090152
Submission received: 4 July 2016 / Revised: 7 August 2016 / Accepted: 18 August 2016 / Published: 27 August 2016

Abstract

We study the double-cross matrix descriptions of polylines in the two-dimensional plane. The double-cross matrix is a qualitative description of polylines in which exact, quantitative information is given up in favour of directional information. First, we give an algebraic characterization of the double-cross matrix of a polyline and derive some properties of double-cross matrices from this characterisation. Next, we give a geometric characterization of double-cross similarity of two polylines, using the technique of local carrier orders of polylines. We also identify the transformations of the plane that leave the double-cross matrix of all polylines in the two-dimensional plane invariant.

1. Introduction and Summary of Results

Polylines arise in Geographical Information Science (GIS) in a multitude of ways. One recent example comes from the collection of moving object data, where trajectories of moving persons (or animals) carrying GPS-equipped devices are collected in the form of time-space points that are registered at certain (ir)regular moments in time. The spatial trace of this movement is a collection of points in two-dimensional space. There are several methods to extend the trajectory in between the measured sample points, of which linear interpolation is a popular one [1]. The resulting curve in the two-dimensional geographical space is a polyline.
Another example comes from shape recognition and retrieval, which arises in domains such as computer vision, image analysis and GIS in general. Here, closed polylines (where the starting point coincides with the end point), or polygons, often occur as the boundary of two-dimensional shapes or regions. Shape recognition and retrieval are central problems in this context.
In examples such as the above, there are, roughly speaking, two very distinct approaches to deal with polygonal curves and shapes: the quantitative approaches on the one hand and the qualitative approaches on the other. Initially, most research efforts dealt with quantitative methods [2,3,4,5]. Only afterwards did qualitative approaches gain more attention, mainly supported by research in cognitive science providing evidence that qualitative models of shape representation are much more expressive than their quantitative counterparts and better reflect the way in which humans reason about their environment [6]. Polygonal shapes and polygonal curves are very complex spatial phenomena and it is commonly agreed that a qualitative representation of space is more suitable to deal with them [7].
Within the qualitative approaches to describe two-dimensional movement or shapes, two major directions can be distinguished: the region-based and the boundary-based approach. The region-based approach, using concepts such as circularity and orientation with respect to the coordinate axes, can only distinguish between shapes with large dissimilarities [8]. The boundary-based approach, using concepts such as extremes in and changes of curvature of the polyline representing the shape, gives more precise tools to distinguish polylines and polygons. Examples of boundary-based approaches are found in [7,8,9,10,11,12,13].
The principles behind qualitative approaches to deal with polylines are also related to the field of spatial reasoning. One of the main objectives of spatial reasoning is to represent geographic information in a qualitative way in order to be able to reason about it (see, for example, Chapter 12 in [14], also for spatio-temporal reasoning), and it can be seen as the processing of information about a spatial environment that is immediately available to humans (or animals) through direct observation. The reason for using a qualitative representation is that the available information is often imprecise, partial and subjective [15].
Qualitative spatial reasoning is a part of the broader field of spatial reasoning and has as goals (1) to finitely represent knowledge about infinite spatial (and spatio-temporal) domains using a finite number of qualitative relations and (2) to design computationally feasible symbolic reasoning techniques for these data. The aim of these models is to support human understanding of space. In particular, qualitative approaches aim to reflect human cognition and they offer intuitive representations of spatial situations that nevertheless support complex decision tasks. During the past decades, the qualitative spatial reasoning community has investigated several qualitative representations of spatial knowledge and reasoning techniques and their underlying computational principles, linking formal approaches to cognitive theories. For an overview of the aims and methods in this field, we refer to [16] and [17].
In the field of qualitative spatial reasoning, several taxonomies of cognitive spatial (and spatio-temporal) concepts have been proposed and frameworks have been designed to accommodate computational techniques that are useful for applications. Dozens of formalisms, in the form of so-called qualitative spatial calculi, have been proposed (a representative sample is discussed in [18]). Recent examples are [19,20].
Only during the last decade has the field of spatial reasoning focussed more on the applications of the developed techniques, for instance, in the area of GIS. This has led to the insight that semantics plays an important role and that qualitative spatial reasoning is more than constraint-based reasoning. In recent years, a number of toolboxes have also been developed to incorporate qualitative spatial reasoning techniques in application domains such as computer-aided design, natural language processing, robot navigation [21] and GIS. We mention the systems SparQ [22], GQR [23] and QAT [24]. As another example from GIS, qualitative calculi have also been used to analyse trajectory data of moving objects [25].
If we return to the example of trajectory data, we can see that if a person orients her- or himself at a certain location in a city and then moves away from this location, she or he remembers her or his current location by using a mental map that takes the relative turns into account with respect to the original starting point, rather than the precise metric information about her or his trajectory. For such navigational problems, a person will for instance remember: “I left the station and went straight; passing a church to my right; then taking two left turns; ...”
One of the formalisms to qualitatively describe polylines in the plane is given by the double-cross calculus. In this method, a double-cross matrix captures the relative position of any two line segments in a polyline by describing it with respect to a double cross based on the starting points of these line segments. The double-cross calculus was introduced as a formalism to qualitatively represent a configuration of vectors in the plane [15,26]. In this formalism, the relative position (or orientation) of two (located) vectors is encoded by means of a 4-tuple, whose entries come from the set {-, 0, +}. Such a 4-tuple expresses the relative orientation of two vectors with respect to each other. The double-cross formalism is used, for instance, in the qualitative trajectory calculus, which, in turn, has been used to test polyline similarity with applications to query-by-sketch, indexing and classification [27]. We elaborate on these applications in Section 2.4. As another example from GIS, the double-cross formalism has also been used for mining and analysing moving object data [25]. Recently, first steps have been made to generalise the double-cross formalism to model spatial interactions in three-dimensional space, called QTC_3D, where it is used to model, for instance, flying planes. The complex interactions of birds that fly in a flock have also been modelled using this formalism. In this higher dimensional setting, the calculus is based on the transformations of the so-called Frenet-Serret frames [28]. Other applications of the double-cross formalism and its generalisation OPRA_n include the modelling of traffic situations [29,30].
Two polylines are called double-cross similar if their double-cross matrices are identical. Two polylines, that are quite different from a quantitative or metric perspective, may have the same double-cross matrices, as we illustrate below. The idea is that they follow a similar pattern of relative turns, which reflects how humans visualize and remember movement patterns.
In this paper, we provide an extensive algebraic and geometric interpretation of the double-cross matrix of a polyline and of double-cross similarity of polylines. To start with, we give a collection of polynomial constraints (polynomial equalities and inequalities) on the coordinates of the measured points of a polyline (its vertices) that express the information contained in the double-cross matrix of a polyline. This algebraic characterisation can be used to effectively verify double-cross similarity of polylines and to generate double-cross similar polylines by means of tools from algebraic geometry, implemented, for instance, in software packages like Mathematica [31]. This algebraic characterization of the double-cross matrix also allows us to prove a number of properties of double-cross matrices. As an example, we mention a high degree of symmetry in the double-cross matrix along its main diagonal.
Next, we turn to a geometrical interpretation of double-cross similarity of two polylines. This geometrical interpretation is based on local geometric information of the polyline in its vertices. This information is called the local carrier order and it uses (local) compass directions in the vertices of a polyline to locate the relative position of the other vertices. Our main result in this context says that two polylines are double-cross similar if and only if they have the same local carrier order structure.
From the definition of the double-cross matrix of a polyline it is clear that this matrix remains the same if, for instance, we translate or rotate the polyline in the two-dimensional plane. In a final part of this paper, we identify the exact set of transformations of the two-dimensional plane that leave double-cross matrices invariant. Our main (and rather technical) result states that the largest group of transformations of the plane that is double-cross invariant consists of the similarity transformations of the plane onto itself. In broad terms, the similarities of the plane are the translations, rotations and homotheties (scalings) of the plane. This result allows us, for instance, to prove any statement about double-cross matrices of polylines only for polylines that start in the origin of the two-dimensional plane and have a first line segment of unit length.

Organization 

This paper is organized as follows. Section 2 gives the definition of a polyline, the double-cross matrix of a polyline and double-cross similarity of two polylines. It also gives examples of the application of the concept of double-cross similarity in GIS. Section 3 gives our algebraic characterization of the double-cross matrix of a polyline. In Section 4, we derive a number of properties of double-cross matrices from the algebraic characterisation. In Section 5, we give a geometric characterization of the double-cross similarity of two polylines in terms of the local carrier order. And finally, in Section 6, we characterize the double-cross invariant transformations of the plane. In this section, we identify the transformations of the plane that leave the double-cross matrix of all polylines invariant.

2. Definitions and Preliminaries

In this section, we give the definitions of a polyline, of the double-cross matrix of a polyline and of double-cross similarity of two polylines.

2.1. Polylines in the Plane

Let R denote the set of the real numbers, and let R² denote the two-dimensional real plane. To stress that some real values are constants, we use sans serif characters: x, y, x_0, y_0, x_1, y_1, … Real variables are denoted in normal characters. For constant points of R², we use the sans serif characters p, p_0, p_1, …
The following definition specifies what we mean by polylines. We define polylines as finite sequences of points in R² (which is often used as their finite representation). When we add the line segments between consecutive points, we obtain what we call the semantics of the polyline. We also introduce some terminology about polylines.
Definition 1. 
A polyline (in R²) is an ordered list P = (x_0, y_0), (x_1, y_1), …, (x_N, y_N) of points in R². We call the points (x_i, y_i), 0 ≤ i ≤ N, the vertices of the polyline. We assume that no two consecutive vertices are identical, that is, (x_i, y_i) ≠ (x_{i+1}, y_{i+1}), for 0 ≤ i < N.
We call N the size of the polyline P. The vertices (x_0, y_0) and (x_N, y_N) are respectively called the start and end vertex of P. The line segments connecting the points (x_i, y_i) and (x_{i+1}, y_{i+1}), for 0 ≤ i < N, are called the (line) segments of the polyline P. The semantics of P, denoted sem(P), is the union of the line segments of P.
So, the semantics, sem(P), is the following union of line segments:
⋃_{i=0}^{N-1} { (x, y) ∈ R² | ∃ λ ∈ [0, 1] : (x, y) = λ·(x_i, y_i) + (1 - λ)·(x_{i+1}, y_{i+1}) },
which is a polygonal curve in R². The quantified real variable λ captures, for values 0 < λ < 1, the points on the line segments strictly between the vertices (x_i, y_i) and (x_{i+1}, y_{i+1}) in sem(P). Further on, we will loosely use the term polyline also to refer to the semantics of a polyline, although, in a strict sense, a polyline is an ordered list of points in R².
We remark that two polylines with a different number of vertices may have the same semantics. Figure 1 gives an example of such polylines. We also remark that the line segments appearing in the semantics may intersect in points which may or may not be vertices, as is illustrated by the polyline shown in Figure 2. A polyline where the start and end vertex coincide and which has no other self-intersections in its semantics is a polygon. Finally, we remark that it is reasonable to assume that polylines coming from GIS applications have vertices with rational coordinates.
Below, we stick to the notation introduced in the above definitions. Furthermore, as a standard, we abbreviate (x_i, y_i) by p_i. We also use the following notational conventions. The (located) vector from p_i to p_j is denoted by p_i p_j (by the located vector from p to q, we mean an ordered pair (p, q) of points of R²). We use this concept to represent the oriented line segment between p and q. The counter-clockwise angle (expressed in degrees) measured from p_i p_j to p_i p_k is denoted by ∠(p_i p_j, p_i p_k), as illustrated in Figure 3.

2.2. The Double-Cross Matrix of a Polyline

In this section, we define the double-cross matrix of a polyline.

2.2.1. The Double-Cross Value of Two (Located) Vectors

The double-cross calculus was introduced as a formalism to qualitatively represent a configuration of vectors in the plane R² [15,26]. In this formalism, the relative position (or orientation) of two (located) vectors is encoded by means of a 4-tuple, whose entries come from the set {-, 0, +}. Such a 4-tuple expresses the relative orientation of two vectors with respect to each other.
We associate to a polyline P = (x_0, y_0), (x_1, y_1), …, (x_N, y_N), with p_i = (x_i, y_i), the (located) vectors p_0 p_1, p_1 p_2, …, p_{N-1} p_N, representing the oriented line segments between the consecutive vertices of P. Because of the assumption in Definition 1, the vectors p_0 p_1, p_1 p_2, …, p_{N-1} p_N all have a strictly positive length. In the double-cross formalism, the relative orientation between p_i p_{i+1} and p_j p_{j+1} is given by means of a 4-tuple
(C_1 C_2 C_3 C_4) ∈ {-, 0, +}^4.
We follow the traditional notation of this 4-tuple without commas. To determine C_1, C_2, C_3 and C_4, for p_i ≠ p_j, first of all, a double cross is defined for the vectors p_i p_{i+1} and p_j p_{j+1}, determined by the following three lines:
  • the line L i j through p i and p j ;
  • the line P i j i through p i , perpendicular on L i j ; and
  • the line P i j j through p j , perpendicular on L i j .
These three lines are illustrated in Figure 4. These three lines determine a cross at p i and a cross at p j . Hence the name “double cross.” The entries C 1 , C 2 , C 3 and C 4 express in which quadrants or on which half lines p i + 1 and p j + 1 are located with respect to the double cross.
We now define this more formally and follow the historical use of the double cross (see, for instance, [15,26]). In this definition, an interval ( a , b ) of angles, represents the open interval between a and b on the counter-clockwise oriented circle.
Definition 2. 
Let P = (x_0, y_0), (x_1, y_1), …, (x_N, y_N) be a polyline, with p_i = (x_i, y_i), for 0 ≤ i ≤ N, and with associated vectors p_0 p_1, p_1 p_2, …, p_{N-1} p_N. For p_i p_{i+1} and p_j p_{j+1} with 0 ≤ i, j < N, i ≠ j and p_i ≠ p_j, we define
DC(p_i p_{i+1}, p_j p_{j+1}) = (C_1 C_2 C_3 C_4)
as follows:
C_1 = - if ∠(p_i p_j, p_i p_{i+1}) ∈ (-90°, 90°); 0 if ∠(p_i p_j, p_i p_{i+1}) ∈ {-90°, 90°}; + else;
C_2 = - if ∠(p_j p_i, p_j p_{j+1}) ∈ (-90°, 90°); 0 if ∠(p_j p_i, p_j p_{j+1}) ∈ {-90°, 90°}; + else;
C_3 = - if ∠(p_i p_j, p_i p_{i+1}) ∈ (0°, 180°); 0 if ∠(p_i p_j, p_i p_{i+1}) ∈ {0°, 180°}; + else;
C_4 = - if ∠(p_j p_i, p_j p_{j+1}) ∈ (0°, 180°); 0 if ∠(p_j p_i, p_j p_{j+1}) ∈ {0°, 180°}; + else.
For p i p i + 1 and p j p j + 1 , with p i = p j , we define, for reasons of continuity,
DC ( p i p i + 1 , p j p j + 1 ) = ( 0 0 0 0 ) .
The argumentation for this convention is given in [32].
So, in particular, when i = j , we have DC ( p i p i + 1 , p j p j + 1 ) = ( 0 0 0 0 ) .
We remark that the values C 1 and C 3 describe the location of the point p i + 1 or, equivalently, the orientation of the vector p i p i + 1 with respect to the cross at p i (formed by the lines L i j and P i j i ). We see that each of the four quadrants and four half lines determined by the cross at p i are determined by a unique combination of C 1 and C 3 values. Similarly, the values C 2 and C 4 describe the location of the point p j + 1 or, equivalently, the orientation of the vector p j p j + 1 with respect to the cross at p j (formed by the lines L i j and P i j j ).
The quadrants and half lines where C 1 , C 2 , C 3 and C 4 take different values are graphically illustrated in Figure 5. For example, the 4-tuple DC ( p i p i + 1 , p j p j + 1 ) for the vectors p i p i + 1 and p j p j + 1 , shown in Figure 4, is ( + ) .
Further on, we will sometimes use the notation DC ( p i p i + 1 , p j p j + 1 ) [ k ] to indicate C k , for k = 1 , 2 , 3 , 4 . Obviously, this notation does not hide the dependence on i and j.
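To make Definition 2 concrete, the following Python sketch (our own illustration, not code from the paper; the names dc, ccw_angle and classify are ours) computes the 4-tuple directly from the counter-clockwise angles. Exact comparisons with the boundary angles should be replaced by a small tolerance when coordinates are noisy.

```python
import math

def ccw_angle(v, w):
    """Counter-clockwise angle, in degrees in (-180, 180], from vector v to vector w."""
    cross = v[0] * w[1] - v[1] * w[0]
    dot = v[0] * w[0] + v[1] * w[1]
    angle = math.degrees(math.atan2(cross, dot))
    return 180.0 if angle == -180.0 else angle   # normalise the branch cut

def classify(angle, open_interval, boundary):
    """'-' inside the open interval, '0' on its boundary, '+' elsewhere."""
    lo, hi = open_interval
    if lo < angle < hi:
        return '-'
    if angle in boundary:
        return '0'
    return '+'

def dc(p_i, p_i1, p_j, p_j1):
    """DC(p_i p_{i+1}, p_j p_{j+1}) following Definition 2; points are (x, y) pairs."""
    if p_i == p_j:                                   # convention for coinciding start points
        return ('0', '0', '0', '0')
    u_ij = (p_j[0] - p_i[0], p_j[1] - p_i[1])        # direction from p_i to p_j
    u_ji = (-u_ij[0], -u_ij[1])                      # direction from p_j to p_i
    v_i = (p_i1[0] - p_i[0], p_i1[1] - p_i[1])       # segment vector at p_i
    v_j = (p_j1[0] - p_j[0], p_j1[1] - p_j[1])       # segment vector at p_j
    return (classify(ccw_angle(u_ij, v_i), (-90, 90), (-90, 90)),
            classify(ccw_angle(u_ji, v_j), (-90, 90), (-90, 90)),
            classify(ccw_angle(u_ij, v_i), (0, 180), (0, 180)),
            classify(ccw_angle(u_ji, v_j), (0, 180), (0, 180)))
```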
Remark. 
Since C_1, C_2, C_3 and C_4 take values from the set {-, 0, +}, it may seem that there are 3^4 = 81 possible values for the tuples (C_1 C_2 C_3 C_4).
However, some combinations are not possible because of the assumption in Definition 1, which says that two consecutive vertices of a polyline have to be different. This means that C_1 and C_3 cannot both be 0 and that C_2 and C_4 cannot both be 0, in each case with the exception of C_1, C_2, C_3 and C_4 all being 0, that is, (C_1 C_2 C_3 C_4) = (0 0 0 0). So, we have 81 - 8 - 8 = 65 possible values for (C_1 C_2 C_3 C_4).
This number of 65 possible values for the tuples ( C 1 C 2 C 3 C 4 ) can also be reached in another way. The point p i + 1 can be in one of four quadrants around p i or on one of four half lines starting in p i . These are 8 possible locations for p i + 1 . Similarly, we have 8 possible locations for p j + 1 in the quadrants and half lines starting in p j . This gives 8 × 8 = 64 possible combinations. Together with the case ( C 1 C 2 C 3 C 4 ) = ( 0 0 0 0 ) , we reach a total number of 65 possibilities.

2.2.2. The Double-Cross Matrix of a Polyline

Based on the definition of DC ( p i p i + 1 , p j p j + 1 ) , we now define the double-cross matrix of a polyline.
Definition 3. 
Let P = ( x 0 , y 0 ) , ( x 1 , y 1 ) , , ( x N , y N ) be a polyline, with p i = ( x i , y i ) , for 0 i N , and with associated vectors p 0 p 1 , p 1 p 2 , , p N 1 p N . The double-cross matrix of P, denoted DCM ( P ) , is the N × N matrix with the entries DCM ( P ) [ i , j ] = DC ( p i p i + 1 , p j p j + 1 ) , for 0 i , j < N .
For example, the entries of the double-cross matrix of the polyline of Figure 6 are given in Table 1. This first example can be used to illustrate some properties of this matrix that are proven in Section 4. First, we observe that on the diagonal always ( 0 0 0 0 ) appears. We also see that there is a certain degree of symmetry along the diagonal. If DC ( p i p i + 1 , p j p j + 1 ) = ( C 1 C 2 C 3 C 4 ) , then we have DC ( p j p j + 1 , p i p i + 1 ) = ( C 2 C 1 C 4 C 3 ) . These two observations imply that it suffices to know a double-cross matrix above its diagonal.
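Assuming a helper dc(p_i, p_i1, p_j, p_j1) that returns the 4-tuple of Definition 2 (for instance, the sketch given earlier), the double-cross matrix of Definition 3 can be assembled as follows (again our own illustration, not code from the paper):

```python
def dcm(polyline, dc):
    """Double-cross matrix of Definition 3 for a polyline given as a list of
    (x, y) vertices; the result is an N x N table of 4-tuples, N being the size."""
    n = len(polyline) - 1   # number of segments
    return [[dc(polyline[i], polyline[i + 1], polyline[j], polyline[j + 1])
             for j in range(n)]
            for i in range(n)]

# Two polylines P and Q of the same size are double-cross similar (Definition 4,
# below) exactly when dcm(P, dc) == dcm(Q, dc).
```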

2.3. Double-Cross Similarity of Polylines

We now define double-cross similarity of two polylines of equal size.
Definition 4. 
Let P and Q be polylines of the same size. We say that P and Q are double-cross similar if DCM ( P ) = DCM ( Q ) .
We stress that Definition 4 requires that the two polylines have to be of the same size before we can speak of their double-cross similarity.
Figure 7 depicts two polylines, P and Q, which are double-cross similar. The entries of their double-cross matrices are given in Table 2. In polyline P of Figure 7, at each vertex, the polyline bends around 10 degrees to the left. In polyline Q, this is only around 2 degrees. Nevertheless, all relative positions of oriented line segments remain the same. As the most extreme example, if we compare p_0 p_1 and p_4 p_5 in both polylines, we see that p_4 p_5 almost makes a 90° left angle with the central line of the double cross in the polyline P, whereas this is only some 10° in the polyline Q. Still, both P and Q have the same double-cross entry for p_0 p_1 and p_4 p_5.

2.4. Examples of The Application of Double-Cross Similarity in GIS

In this section, we give some examples of the applicability of the concept of double-cross similarity of polylines in the area of GIS [27]. Polyline- (and polygon-) similarity testing, based on the double-cross formalism, can be applied in a number of areas in GIS. We think of applications in the area of image recognition or retrieval in the context of geographical maps. Features in maps are often modelled as polylines (for instance, roads and rivers) and polygons (for example, closed polylines can be used to delimit geographical areas such as cities, provinces and countries). Polyline-similarity testing can also be applied in the area of the classification of terrain features. In spatial database terms, it can be used for querying-by-sketch.
To determine the degree of similarity between two polylines (not necessarily of the same size), according to the double-cross formalism, we can proceed by the following two steps: (1) first, if the polylines are of different size, their “generalized polylines” (which approximate the original polylines within an ε-error margin) are computed; (2) next, the differences between the double-cross matrices of the generalized polylines are used as a measure of dissimilarity between the given polylines. In the remainder of this section, we describe this process in some more detail and we give some examples of its application.
The definition of double-cross similarity of polylines (Definition 4) can only be applied to polylines of the same size. Obviously, in practice, we often want to test similarity of polylines that are not necessarily of the same size. To overcome this problem, we can use the notion of “generalized polylines”: for a given polyline P, a sequence of generalized polylines P_2, P_4, P_8, …, P_{2^n}, … is defined that tends (as n grows) to converge to P (arbitrarily closely) and whose members consist of equally long line segments. To define the generalized polyline P_{2^n} in this sequence, we measure the distance along P from its start vertex to its end vertex and we mark on sem(P) the 2^n + 1 points that divide sem(P) into pieces of equal length. These markers are used as the 2^n + 1 vertices of the generalized polyline P_{2^n}. The first generalizations P_2, P_4 and P_8 of a polyline P are shown in Figure 8. It can be shown that the generalized polylines of P converge to P as n grows. This means that, for any error margin ε > 0, a polyline P can be approximated, for sufficiently large n, by its generalization P_{2^n} up to an error of ε. In particular, the difference in length between P and P_{2^n} is less than ε [27]. A sketch of this construction is given below.
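The following sketch (ours, under the assumption that vertices are given as coordinate pairs) marks the 2^n + 1 equally spaced points along sem(P) and returns them as the vertices of P_{2^n}:

```python
import math

def generalized_polyline(polyline, n):
    """Return the generalized polyline P_{2^n}: the 2^n + 1 points that divide
    sem(P) into 2^n pieces of equal length, used as new vertices (cf. Figure 8)."""
    pts = [(float(x), float(y)) for (x, y) in polyline]
    seg_len = [math.dist(pts[i], pts[i + 1]) for i in range(len(pts) - 1)]
    total = sum(seg_len)
    k = 2 ** n
    targets = [total * m / k for m in range(k + 1)]   # arc lengths of the markers

    result, walked, i = [], 0.0, 0
    for t in targets:
        # advance along the segments until the segment containing arc length t is reached
        while i < len(seg_len) - 1 and walked + seg_len[i] < t:
            walked += seg_len[i]
            i += 1
        lam = (t - walked) / seg_len[i]               # position within the current segment
        lam = min(max(lam, 0.0), 1.0)
        a, b = pts[i], pts[i + 1]
        result.append((a[0] + lam * (b[0] - a[0]), a[1] + lam * (b[1] - a[1])))
    return result
```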
This means that, for two given polylines P_1 and P_2 and an arbitrarily small ε, we can find a large enough n such that both (P_1)_{2^n} and (P_2)_{2^n} are ε-close to P_1 and P_2, respectively. For step (2), we can use the double-cross matrices M_1 and M_2 of (P_1)_{2^n} and (P_2)_{2^n} to define the double-cross (dis)similarity of P_1 and P_2. There are many possibilities here, but in the experiments in [27], the distance function Δ_H is shown to be the most suitable in many applications in GIS. To define the distance measure Δ_H(M_1, M_2) between double-cross matrices, we first construct, for both matrices M_1 and M_2, 65-ary vectors γ(M_1), γ(M_2) over the natural numbers, that count, for each of the 65 realizable 4-tuples of -, + and 0 (the remark following Definition 2 explains the number 65), the number of times they occur in the matrices M_1 and M_2. Then, for matrices M_1 and M_2 of polylines of size N, we define
Δ_H(M_1, M_2) := (1 / (N² - N)) · Σ_{i=1}^{65} | γ(M_1)[i] - γ(M_2)[i] |,
where N = 2^n and where γ(M_1)[i] refers to the ith entry of the vector γ(M_1). This means that the function Δ_H measures the average difference between the 65 count values. In [27], it is shown that Δ_H can be efficiently computed. It was also shown to perform well in a number of applications. We conclude this section by discussing some of the performance results of Δ_H.
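Before turning to the examples, here is a minimal sketch (ours) of the Δ_H computation under the assumptions above; the 4-tuple counts are gathered in dictionaries and the result is normalised by N² - N:

```python
from collections import Counter

def delta_h(m1, m2):
    """Delta_H between two double-cross matrices of polylines of the same size N:
    the sum of differences of the tuple counts, normalised by N^2 - N."""
    n = len(m1)
    assert n == len(m2), "the matrices must come from polylines of equal size"
    gamma1 = Counter(entry for row in m1 for entry in row)   # counts per 4-tuple
    gamma2 = Counter(entry for row in m2 for entry in row)
    keys = set(gamma1) | set(gamma2)                          # at most 65 realizable tuples
    return sum(abs(gamma1[k] - gamma2[k]) for k in keys) / (n * n - n)
```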

Example 1: Polygon Similarity: Image Recognition and the Classification of Terrain Features

Figure 9 shows a map of Belgium in Figure 9a, consisting of 2047 line segments. Figure 9b is the same figure with the three bumps in the upper part moved to the right. Figure 9c is a map drawn by a geographer and Figure 9d is an abstract representation. Table 3 gives an overview of the similarity levels determined by Δ H . For example, the actual map of Belgium is 99% similar to the slightly modified map, but only 60% similar to the abstract representation.
This technique of Δ_H-based polygon similarity can be used in areas such as image retrieval or recognition, but also in the recognition and classification of terrain features; the basic terrain features, modelled as polylines, are depicted in Figure 10.

Example 2: Query-by-Sketch

Figure 11 first shows sketches of a bowler hat and a bow tie. Next, it shows a map of the province of Limburg in Belgium with its municipalities. The municipalities that resemble the bowler hat the most, according to Δ_H, are shown in black on this map. The most alike city is “Lommel”, as indicated in the figure. The municipalities resembling the bow tie sketch are shown in grey.
For further implementation details and experimental results, we refer to [27].

3. An Algebraic Characterization of the Double-Cross Matrix of a Polyline

In this section, we give an algebraic characterization of the entries in the double-cross matrix of a polyline. This algebraic characterisation can be used to effectively verify double-cross similarity of polylines.
Let P = ( x 0 , y 0 ) , ( x 1 , y 1 ) , , ( x N , y N ) be a polyline and let p i = ( x i , y i ) , for 0 i N . Theorem 5 gives algebraic expressions to calculate the entries DC ( p i p i + 1 , p j p j + 1 ) of a double-cross matrix in terms of the x- and y-coordinates of the points p i , p i + 1 , p j and p j + 1 . Further on, we use this theorem extensively to prove properties of double-cross matrices.
Before stating and proving this theorem, we recall some elementary notations from algebra and some formulas in the following remark.
Remark. 
The well-known formula to calculate the (counter-clockwise) angle θ between two vectors a and b in R² (and also, in general, in R^n) is
cos θ = (a · b) / (|a| · |b|).
Here, we are not talking about located vectors like before, but vectors in the common sense. In the above formula, the · in the numerator denotes the inner product (also called scalar product) of two vectors and |a| is the norm or length of a (and the · in the denominator is the product of real numbers).
The above formula implies that we have cos θ = 0 if and only if a · b = 0 if and only if θ ∈ {-90°, 90°}. So, a · b = 0 means that a is perpendicular to b. On the other hand, we have cos θ > 0 and thus a · b > 0 when θ ∈ (-90°, 90°). And, finally, a · b < 0 is equivalent to θ ∈ (90°, 270°).
If a = (a, b) ∈ R², then a^⊥ = (-b, a) is the unique vector perpendicular to a and of the same length as a, such that the (counter-clockwise) angle from a to a^⊥ is 90°.
In the following theorem, we use the function
sign : R → {-, 0, +} : x ↦ sign(x) = - if x < 0; 0 if x = 0; and + if x > 0.
We also work with the following convention concerning signs: -(-) is +; -0 is 0; and -(+) is -.
Theorem 5. 
Let P = (x_0, y_0), (x_1, y_1), …, (x_N, y_N) be a polyline and let p_i = (x_i, y_i), for 0 ≤ i ≤ N. Then, DC(p_i p_{i+1}, p_j p_{j+1}) = (C_1 C_2 C_3 C_4) with
C_1 = sign(-((x_j - x_i)·(x_{i+1} - x_i) + (y_j - y_i)·(y_{i+1} - y_i)));
C_2 = sign((x_j - x_i)·(x_{j+1} - x_j) + (y_j - y_i)·(y_{j+1} - y_j));
C_3 = sign(-((x_j - x_i)·(y_{i+1} - y_i) - (y_j - y_i)·(x_{i+1} - x_i))); and
C_4 = sign((x_j - x_i)·(y_{j+1} - y_j) - (y_j - y_i)·(x_{j+1} - x_j)).
Proof. 
We have p_i = p_j if and only if x_j - x_i = 0 and y_j - y_i = 0, and in this case the four (instantiated) polynomials in the statement of the theorem evaluate to zero.
Next, we assume p_i ≠ p_j. We consider the following vectors in R²:
  • u_ij = (x_j - x_i, y_j - y_i);
  • u_ji = (x_i - x_j, y_i - y_j);
  • v_i = (x_{i+1} - x_i, y_{i+1} - y_i); and
  • v_j = (x_{j+1} - x_j, y_{j+1} - y_j).
We remark that u_ij = -u_ji and that the vectors u_ij, v_i and v_j (in the common sense of the word vector) are the (located) vectors p_i p_j, p_i p_{i+1} and p_j p_{j+1} translated to the origin of R².
C_1: Now, we apply the above cosine formula to a = u_ij and b = v_i to obtain an expression for C_1. Because C_1 is negative towards p_j, we get the minus sign in the following expression for C_1:
C_1 = sign(-(u_ij · v_i))
= sign(-((x_j - x_i)·(x_{i+1} - x_i) + (y_j - y_i)·(y_{i+1} - y_i))).
C_2: Next, we apply the cosine formula to a = u_ji and b = v_j to obtain an expression for C_2. Again, because C_2 is negative towards p_i, we get the minus sign in C_2 = sign(-(u_ji · v_j)). This means that
C_2 = sign(u_ij · v_j)
= sign((x_j - x_i)·(x_{j+1} - x_j) + (y_j - y_i)·(y_{j+1} - y_j)).
C_3: Here, we apply the cosine formula to a = u_ij^⊥ and b = v_i and get C_3 = sign(-(u_ij^⊥ · v_i)). We have a minus sign here, because C_3 = - in the direction of u_ij^⊥. Since u_ij^⊥ = (-(y_j - y_i), x_j - x_i), we get
C_3 = sign(-(u_ij^⊥ · v_i))
= sign((y_j - y_i)·(x_{i+1} - x_i) - (x_j - x_i)·(y_{i+1} - y_i)).
C_4: Finally, we apply the cosine formula to a = u_ji^⊥ and b = v_j. Since C_4 = - in the direction of u_ji^⊥, we have C_4 = sign(-(u_ji^⊥ · v_j)). Since u_ji^⊥ = (y_j - y_i, -(x_j - x_i)), we get
C_4 = sign(-(u_ji^⊥ · v_j))
= sign((x_j - x_i)·(y_{j+1} - y_j) - (y_j - y_i)·(x_{j+1} - x_j)).
This concludes the proof.
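For completeness, here is the same computation done purely with the polynomials of Theorem 5 (a sketch of ours, not code from the paper; with exact, e.g. rational, coordinates it agrees with the angle-based computation of Definition 2):

```python
def sign(x):
    return '-' if x < 0 else ('0' if x == 0 else '+')

def dc_algebraic(p_i, p_i1, p_j, p_j1):
    """DC(p_i p_{i+1}, p_j p_{j+1}) via the four polynomials of Theorem 5."""
    (xi, yi), (xi1, yi1) = p_i, p_i1
    (xj, yj), (xj1, yj1) = p_j, p_j1
    c1 = sign(-((xj - xi) * (xi1 - xi) + (yj - yi) * (yi1 - yi)))
    c2 = sign((xj - xi) * (xj1 - xj) + (yj - yi) * (yj1 - yj))
    c3 = sign(-((xj - xi) * (yi1 - yi) - (yj - yi) * (xi1 - xi)))
    c4 = sign((xj - xi) * (yj1 - yj) - (yj - yi) * (xj1 - xj))
    return (c1, c2, c3, c4)

# Example: dc_algebraic((0, 0), (1, 0), (2, 0), (2, 1)) == ('-', '0', '0', '+').
```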
In the following property, we show that the double-cross value ( 0 0 0 0 ) , which, for reasons of continuity, is the value in the case p i = p j (see Definition 2), can only occur in that exceptional case.
Proposition 6. 
Let P = ( x 0 , y 0 ) , ( x 1 , y 1 ) , , ( x N , y N ) be a polyline and let p i = ( x i , y i ) . Then, DC ( p i p i + 1 , p j p j + 1 ) = ( 0 0 0 0 ) if and only if p i = p j .
Proof. 
As already observed in the proof of Theorem 5, p i = p j implies x j x i = 0 and y j y i = 0 and in this case the four (instantiated) polynomials of Theorem 5 evaluate to zero. This implies that DC ( p i p i + 1 , p j p j + 1 ) = ( 0 0 0 0 ) .
For the converse, we have to show that if the four polynomials evaluate to zero, then p_i = p_j. We prove this by assuming p_i ≠ p_j and deriving a contradiction. If p_i ≠ p_j, then x_j - x_i ≠ 0 or y_j - y_i ≠ 0. First, we consider the case x_j - x_i ≠ 0.
As a first subcase, we consider the case y_j - y_i = 0. Then we get from the equations C_1 = 0 and C_3 = 0 that (x_j - x_i)·(x_{i+1} - x_i) = 0 and (x_j - x_i)·(y_{i+1} - y_i) = 0. Since x_j - x_i ≠ 0 is assumed, this implies that x_{i+1} - x_i = 0 and y_{i+1} - y_i = 0. This contradicts the assumption in Definition 1, which says that no two consecutive vertices of a polyline are identical.
As a second subcase, we consider the case y_j - y_i ≠ 0. Then we get from C_1 = 0 that
x_{i+1} - x_i = -((y_j - y_i)·(y_{i+1} - y_i)) / (x_j - x_i).
From C_3 = 0, we get (x_j - x_i)·(y_{i+1} - y_i) = (y_j - y_i)·(x_{i+1} - x_i).
Combined, these two equalities imply ((x_j - x_i)² + (y_j - y_i)²)·(y_{i+1} - y_i) = 0. Since in this case (x_j - x_i)² + (y_j - y_i)² > 0, we conclude y_{i+1} - y_i = 0. But then, again using the equation for C_1, we get (x_j - x_i)·(x_{i+1} - x_i) = 0, or x_{i+1} - x_i = 0. So, we have both x_{i+1} - x_i = 0 and y_{i+1} - y_i = 0, which again contradicts the assumption in Definition 1. We have a contradiction in all cases and this concludes the proof of the first case. The case y_j - y_i ≠ 0 has a completely analogous proof, now using C_2 = 0 and C_4 = 0 instead of C_1 = 0 and C_3 = 0. This concludes the proof.
We end this section by remarking that all the factors appearing in the algebraic expressions given by the theorem, that is, x_j - x_i, x_{i+1} - x_i, y_j - y_i, y_{i+1} - y_i, x_{j+1} - x_j and y_{j+1} - y_j, are differences in x-coordinate or differences in y-coordinate values.

4. Some Properties of Double-Cross Matrices that can be Derived from Their Algebraic Characterisation

In this section, we give some basic properties of double-cross matrices of polylines. In most cases, these properties can be derived from the algebraic characterization of the entries of a double-cross matrix that we presented in the previous section.

4.1. Symmetry in the Double-Cross Matrix of a Polyline

In Section 2.2.2, we already announced, by means of the example polyline given in Figure 6 with its double-cross matrix given in Table 1, that a double-cross matrix exhibits symmetry properties. We prove these properties in this section. The first property holds by definition; the second needs some inspection of polynomials. The conclusion is that it is enough to know a double-cross matrix above its diagonal.
Proposition 7. 
If P = p 0 , p 1 , , p N is a polyline, then DC ( p i p i + 1 , p i p i + 1 ) = ( 0 0 0 0 ) , for 0 i < N .
The following property says how DC ( p j p j + 1 , p i p i + 1 ) can be derived from DC ( p i p i + 1 , p j p j + 1 ) in a straightforward way.
Proposition 8. 
Let P = p 0 , p 1 , , p N be a polyline. If DC ( p i p i + 1 , p j p j + 1 ) = ( C 1 C 2 C 3 C 4 ) , then DC ( p j p j + 1 , p i p i + 1 ) = ( C 2 C 1 C 4 C 3 ) .
Proof. 
Let P = (x_0, y_0), (x_1, y_1), …, (x_N, y_N) be a polyline and let p_i = (x_i, y_i). We use the polynomials given in Theorem 5 to prove this result. Essentially, what we do is interchange the roles of i and j. If i = j, nothing has to be shown. So, we assume i ≠ j. If we interchange in
C_1 = sign(-((x_j - x_i)·(x_{i+1} - x_i) + (y_j - y_i)·(y_{i+1} - y_i)));
C_2 = sign((x_j - x_i)·(x_{j+1} - x_j) + (y_j - y_i)·(y_{j+1} - y_j));
C_3 = sign(-((x_j - x_i)·(y_{i+1} - y_i) - (y_j - y_i)·(x_{i+1} - x_i))); and
C_4 = sign((x_j - x_i)·(y_{j+1} - y_j) - (y_j - y_i)·(x_{j+1} - x_j))
the roles of i and j, we get DC(p_j p_{j+1}, p_i p_{i+1}) = (C_1′ C_2′ C_3′ C_4′), with
C_1′ = sign(-((x_i - x_j)·(x_{j+1} - x_j) + (y_i - y_j)·(y_{j+1} - y_j)));
C_2′ = sign((x_i - x_j)·(x_{i+1} - x_i) + (y_i - y_j)·(y_{i+1} - y_i));
C_3′ = sign(-((x_i - x_j)·(y_{j+1} - y_j) - (y_i - y_j)·(x_{j+1} - x_j))); and
C_4′ = sign((x_i - x_j)·(y_{i+1} - y_i) - (y_i - y_j)·(x_{i+1} - x_i)).
It is easy to see that C_1′ = C_2, C_2′ = C_1, C_3′ = C_4 and C_4′ = C_3.
These two properties imply that only the N·(N - 1)/2 entries above the diagonal of the double-cross matrix of a polyline are significant.
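In an implementation, Propositions 7 and 8 mean that only the entries above the diagonal need to be stored; the rest of the matrix follows by the swap (C_1 C_2 C_3 C_4) → (C_2 C_1 C_4 C_3), as in this sketch of ours:

```python
def complete_dcm(upper):
    """Rebuild a full double-cross matrix from its strictly-upper-triangular entries,
    using Propositions 7 and 8.

    `upper` is a non-empty dict mapping (i, j) with i < j to a 4-tuple over '-', '0', '+'.
    """
    n = max(j for _, j in upper) + 1
    full = [[('0', '0', '0', '0')] * n for _ in range(n)]    # Proposition 7: the diagonal
    for (i, j), (c1, c2, c3, c4) in upper.items():
        full[i][j] = (c1, c2, c3, c4)
        full[j][i] = (c2, c1, c4, c3)                        # Proposition 8: the swap
    return full
```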

4.2. The Double-Cross Value of Consecutive Line Segments

The following property says what the entries in the double-cross matrix of two successive line segments p i p i + 1 and p i + 1 p i + 2 in a polyline P = p 0 , p 1 , , p N look like. These values correspond to entries in the double-cross matrix just above (or below) its diagonal.
Proposition 9. 
Let P = p_0, p_1, …, p_N be a polyline. Of DC(p_i p_{i+1}, p_{i+1} p_{i+2}), the entries C_1 and C_3 are fixed to - and 0, respectively. That is,
DC(p_i p_{i+1}, p_{i+1} p_{i+2}) = (- C_2 0 C_4),
for any 0 ≤ i < N - 1.
Proof. 
Let 0 ≤ i < N - 1. We start with the entry DC(p_i p_{i+1}, p_{i+1} p_{i+2})[1] = sign(-((x_{i+1} - x_i)·(x_{i+1} - x_i) + (y_{i+1} - y_i)·(y_{i+1} - y_i))) = sign(-((x_{i+1} - x_i)² + (y_{i+1} - y_i)²)). Because of the assumption in Definition 1, we have (x_{i+1} - x_i)² + (y_{i+1} - y_i)² > 0 and we can conclude that DC(p_i p_{i+1}, p_{i+1} p_{i+2})[1] = -.
For the third entry, we have DC(p_i p_{i+1}, p_{i+1} p_{i+2})[3] = sign(-((x_{i+1} - x_i)·(y_{i+1} - y_i) - (y_{i+1} - y_i)·(x_{i+1} - x_i))) = sign(0) = 0. This concludes the proof.
The following property shows that more values depend on one another.
Proposition 10. 
Let P = p_0, p_1, …, p_N be a polyline and let 1 ≤ i < N - 1. If DC(p_{i-1} p_i, p_i p_{i+1}) = (- C_2 0 C_4), with C_2 = + or 0, then DC(p_{i-1} p_i, p_{i+1} p_{i+2}) = (- C_2′ C_4 C_4′), for some C_2′, C_4′ ∈ {-, 0, +}.
Proof. 
Let DC(p_{i-1} p_i, p_{i+1} p_{i+2}) be (C_1′ C_2′ C_3′ C_4′). We have the following expressions:
C_2 = sign((x_i - x_{i-1})·(x_{i+1} - x_i) + (y_i - y_{i-1})·(y_{i+1} - y_i));
C_4 = sign((x_i - x_{i-1})·(y_{i+1} - y_i) - (y_i - y_{i-1})·(x_{i+1} - x_i));
C_1′ = sign(-((x_{i+1} - x_{i-1})·(x_i - x_{i-1}) + (y_{i+1} - y_{i-1})·(y_i - y_{i-1})));
C_3′ = sign(-((x_{i+1} - x_{i-1})·(y_i - y_{i-1}) - (y_{i+1} - y_{i-1})·(x_i - x_{i-1}))).
Let us abbreviate the first two expressions as C_2 = sign(c_2) and C_4 = sign(c_4) and the latter two as C_1′ = sign(-c_1) and C_3′ = sign(-c_3).
Then we have c_1 = ((x_{i+1} - x_i) + (x_i - x_{i-1}))·(x_i - x_{i-1}) + ((y_{i+1} - y_i) + (y_i - y_{i-1}))·(y_i - y_{i-1}) = (x_i - x_{i-1})² + (y_i - y_{i-1})² + c_2. Since, by assumption, c_2 ≥ 0, it follows from the assumption in Definition 1 that c_1 > 0 and thus C_1′ = sign(-c_1) = -.
Further, we have c_3 = ((x_{i+1} - x_i) + (x_i - x_{i-1}))·(y_i - y_{i-1}) - ((y_{i+1} - y_i) + (y_i - y_{i-1}))·(x_i - x_{i-1}) = -c_4. So, C_3′ = sign(-c_3) = sign(c_4) = C_4. This concludes the proof.

4.3. On the Length of Line Segments of a Polyline

The following properties show that changing the length of segments in a polyline may or may not influence certain entries in its double-cross matrix.
Proposition 11. 
Let P = ( x 0 , y 0 ) , ( x 1 , y 1 ) , , ( x N , y N ) be a polyline and let p i = ( x i , y i ) , for 0 i N . Changing the length of p i p i + 1 and p j p j + 1 does not influence the value of DC ( p i p i + 1 , p j p j + 1 ) .
Proof. 
If we take DC(p_i p_{i+1}, p_j p_{j+1}) = (C_1 C_2 C_3 C_4) and DC(p_i p′_{i+1}, p_j p′_{j+1}) = (C_1′ C_2′ C_3′ C_4′), where p_i p′_{i+1} is p_i p_{i+1} scaled by a factor c, with c > 0, and p_j p′_{j+1} is p_j p_{j+1} scaled by a factor d, with d > 0, then we first observe that p′_{i+1} = (x_i + c·(x_{i+1} - x_i), y_i + c·(y_{i+1} - y_i)) and p′_{j+1} = (x_j + d·(x_{j+1} - x_j), y_j + d·(y_{j+1} - y_j)). It is then easily verified that C_1′ = sign(c·c_1) = sign(c_1) = C_1, since c > 0, where c_1 denotes the argument of the sign in the expression for C_1 in Theorem 5. Similarly, we get C_2′ = sign(d·c_2) = sign(c_2) = C_2, C_3′ = sign(c·c_3) = sign(c_3) = C_3 and C_4′ = sign(d·c_4) = sign(c_4) = C_4, since also d > 0. This concludes the proof.
The length of the last segment of a polyline does not influence the double-cross matrix. Only its direction matters. This follows straightforwardly from the definition.
Proposition 12. 
Let P = p 0 , p 1 , , p N be a polyline. Changing the length of p N 1 p N does not change DCM ( P ) .
For segments other than the last one, this is not the case, as the following property shows.
Proposition 13. 
Let P = p 0 , p 1 , , p N be a polyline. Changing the length of p i p i + 1 , for 0 i < N 1 , may change DCM ( P ) .
Proof. 
Consider the polylines P = p_0, p_1, p_2, p_3, p_4 and Q = q_0, q_1, q_2, q_3, q_4 of Figure 12. They only differ in the length of their third segment, yet the entries DC(p_0 p_1, p_3 p_4) and DC(q_0 q_1, q_3 q_4) of their double-cross matrices differ in two of their four positions.

4.4. The Quadrant of Points of a Polyline

In Section 6, we will see that we can apply a similarity transformation to a polyline without changing its double-cross matrix. Without loss of generality, we may therefore assume that the first line segment of the polyline is the unit interval of the x-axis, that is, p 0 = ( 0 , 0 ) and p 1 = ( 1 , 0 ) .
The following property states that we can derive the quadrants in which all the other points are located from the double-cross matrix.
Proposition 14. 
Let P = p_0, p_1, …, p_N be a polyline and assume that p_0 = (0, 0) and p_1 = (1, 0). Let p_i = (x_i, y_i) for 2 ≤ i < N. From DC(p_0 p_1, p_i p_{i+1}), we can determine sign(x_i) and sign(y_i).
Proof. 
It is clear that C_1 = sign(-((x_i - 0)·1 + y_i·0)) = -sign(x_i) and that C_3 = sign(-(x_i·0 - y_i·1)) = sign(y_i).

5. A Geometric Characterization of the Double-Cross Similarity of Two Polylines

In this section, we define the local carrier order of a polyline. This is a geometric concept and the main result of this section is a characterization of double-cross similarity of two polylines in terms of their local carrier orders.

5.1. The Local Carrier Order of a Polyline

Here, we give the definition of the local carrier order of a polyline. First, we introduce some notation for half-lines.
Definition 15. 
Let P = p_0, p_1, …, p_N be a polyline in R² and let 0 ≤ i < N. If p_i ≠ p_j, the (directed) half-line starting in p_i and going through p_j will be denoted by p_i p_j¯. The half-line also starting in p_i, but pointing in the opposite direction, is denoted by -p_i p_j¯. The half-lines p_i p_j¯ and -p_i p_j¯, for 0 ≤ j ≤ N with j ≠ i and p_j ≠ p_i, are called the carriers at p_i.
We denote the perpendicular half-line on p_i p_{i+1}¯ starting in p_i and pointing to the right of p_i p_{i+1}¯ (that is, making a 90° clockwise angle with p_i p_{i+1}¯) by p_i¯^r, and the opposite perpendicular half-line starting in p_i by p_i¯^l. The half-lines p_i¯^r and p_i¯^l are called the perpendiculars at p_i.
For 0 ≤ i < N, the vertex p_i has at most 2N carriers and 2 perpendiculars. For an illustration of these half-lines and of the single cross between p_i and p_{i+1}, we refer to Figure 13.
Now, we define the local carrier order of a vertex p_i of a polyline P, for 0 ≤ i < N. This local carrier order consists of nine sets. One keeps track of which p_j are equal to p_i and the other eight correspond to the eight directions of a compass card. We use the image of an 8-point compass, with the northern cardinal direction along the half-line p_i p_{i+1}¯, to name these sets.
In the following definition, we say that a half-line ¯ is strictly between the two perpendicular half-lines 1 ¯ and 2 ¯ , if they all have the same starting point and ¯ is in the quadrant determined by 1 ¯ and 2 ¯ (following the counter-clockwise direction).
Definition 16. 
Let P = p 0 , p 1 , , p N be a polyline in R 2 . For 0 i < N , we define the following nine sets for the vertex p i :
  • N(p_i) is the set of half-lines p_i p_j¯ equal to p_i p_{i+1}¯;
  • NE(p_i) is the set of half-lines p_i p_j¯ strictly between p_i p_{i+1}¯ and p_i¯^r;
  • E(p_i) is the set of half-lines p_i p_j¯ equal to p_i¯^r;
  • SE(p_i) is the set of half-lines p_i p_j¯ strictly between p_i¯^r and -p_i p_{i+1}¯;
  • S(p_i) is the set of half-lines p_i p_j¯ equal to -p_i p_{i+1}¯;
  • SW(p_i) is the set of half-lines p_i p_j¯ strictly between -p_i p_{i+1}¯ and p_i¯^l;
  • W(p_i) is the set of half-lines p_i p_j¯ equal to p_i¯^l; and
  • NW(p_i) is the set of half-lines p_i p_j¯ strictly between p_i¯^l and p_i p_{i+1}¯,
with 0 ≤ j < i or i < j ≤ N. Finally, Eq(p_i) is the set of p_j that are equal to p_i. The local carrier order of P in its vertex p_i, for 0 ≤ i < N, denoted as LCO(P, p_i), is the list of sets
Eq ( p i ) , N ( p i ) , NE ( p i ) , E ( p i ) , SE ( p i ) , S ( p i ) , SW ( p i ) , W ( p i ) , NW ( p i )
and the local carrier order of P is the list
LCO(P, p_0), LCO(P, p_1), …, LCO(P, p_{N-1}).
We remark that if p_j ∈ Eq(p_i), then the half-line p_i p_j¯ makes no sense and therefore does not appear in any of the sets N(p_i), …, NW(p_i).
As an illustration we use the polyline P depicted in Figure 14. Here, the local carrier orders in the vertices are given by:
  • LCO ( P , p 0 ) = { } , { p 0 p 1 ¯ } , { p 0 p 2 ¯ , p 0 p 4 ¯ } , { } , { } , { } , { } , { } , { p 0 p 3 ¯ }
  • LCO ( P , p 1 ) = { } , { p 1 p 2 ¯ } , { } , { } , { p 1 p 0 ¯ } , { } , { } , { } , { p 1 p 3 ¯ , p 1 p 4 ¯ }
  • LCO ( P , p 2 ) = { } , { p 2 p 3 ¯ } , { p 2 p 4 ¯ } , { } , { } , { } , { p 2 p 0 ¯ } , { } , { }
  • LCO ( P , p 3 ) = { } , { p 3 p 4 ¯ } , { p 3 p 2 ¯ } , { } , { p 3 p 1 ¯ , p 3 p 0 ¯ } , { } , { } , { } , { }
We now define the notion of local-carrier-order similarity of two polylines.
Definition 17. 
Let P = p 0 , p 1 , , p N and Q = q 0 , q 1 , , q N be polylines of equal size. We say that P and Q are local-carrier-order similar if LCO ( P , p i ) = LCO ( Q , q i ) for all i = 0 , 1 , . . . , N 1 , that is, if LCO ( P ) = LCO ( Q ) (always, modulo changing p i in q i ).

5.2. An Algebraic Characterization of the Local Carrier Order of a Polyline

In this section, we give algebraic conditions to express the local carrier order of a polyline. Hereto, it suffices to give for each vertex p i , with 0 i < N , in the polyline P = p 0 , p 1 , , p N characterizations of the sets in the list
Eq ( p i ) , N ( p i ) , NE ( p i ) , E ( p i ) , SE ( p i ) , S ( p i ) , SW ( p i ) , W ( p i ) , NW ( p i ) .
The following property gives this characterization. The proof of this property uses the same algebraic tools as the proof of Theorem 5 and we will skip the (straightforward) details.
We remark that, obviously, the algebraic characterisation of Eq ( p i ) is given by equalities on the coordinates.
Proposition 18. 
Let P = (x_0, y_0), (x_1, y_1), …, (x_N, y_N) be a polyline and let p_i = (x_i, y_i), for 0 ≤ i ≤ N. For 0 ≤ j < i or i < j ≤ N, the following conditions express, for X ∈ {N, NE, E, SE, S, SW, W, NW}, when the half-line p_i p_j¯ belongs to X(p_i):
  • N: -((x_j - x_i)·(x_{i+1} - x_i) + (y_j - y_i)·(y_{i+1} - y_i)) < 0 and -((x_j - x_i)·(y_{i+1} - y_i) - (y_j - y_i)·(x_{i+1} - x_i)) = 0;
  • NE: -((x_j - x_i)·(x_{i+1} - x_i) + (y_j - y_i)·(y_{i+1} - y_i)) < 0 and -((x_j - x_i)·(y_{i+1} - y_i) - (y_j - y_i)·(x_{i+1} - x_i)) < 0;
  • E: -((x_j - x_i)·(x_{i+1} - x_i) + (y_j - y_i)·(y_{i+1} - y_i)) = 0 and -((x_j - x_i)·(y_{i+1} - y_i) - (y_j - y_i)·(x_{i+1} - x_i)) < 0;
  • SE: -((x_j - x_i)·(x_{i+1} - x_i) + (y_j - y_i)·(y_{i+1} - y_i)) > 0 and -((x_j - x_i)·(y_{i+1} - y_i) - (y_j - y_i)·(x_{i+1} - x_i)) < 0;
  • S: -((x_j - x_i)·(x_{i+1} - x_i) + (y_j - y_i)·(y_{i+1} - y_i)) > 0 and -((x_j - x_i)·(y_{i+1} - y_i) - (y_j - y_i)·(x_{i+1} - x_i)) = 0;
  • SW: -((x_j - x_i)·(x_{i+1} - x_i) + (y_j - y_i)·(y_{i+1} - y_i)) > 0 and -((x_j - x_i)·(y_{i+1} - y_i) - (y_j - y_i)·(x_{i+1} - x_i)) > 0;
  • W: -((x_j - x_i)·(x_{i+1} - x_i) + (y_j - y_i)·(y_{i+1} - y_i)) = 0 and -((x_j - x_i)·(y_{i+1} - y_i) - (y_j - y_i)·(x_{i+1} - x_i)) > 0; and
  • NW: -((x_j - x_i)·(x_{i+1} - x_i) + (y_j - y_i)·(y_{i+1} - y_i)) < 0 and -((x_j - x_i)·(y_{i+1} - y_i) - (y_j - y_i)·(x_{i+1} - x_i)) > 0.
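The conditions of Proposition 18 give a direct way to compute, for a vertex p_i, the compass set of every other vertex. The following sketch (ours) returns the name of the set to which the half-line through p_j belongs, with north taken in the direction of p_{i+1}:

```python
def compass_class(p_i, p_i1, p_j):
    """Classify the half-line from p_i through p_j into one of the eight compass sets
    of Definition 16, using the algebraic conditions of Proposition 18."""
    (xi, yi), (xi1, yi1), (xj, yj) = p_i, p_i1, p_j
    if (xj, yj) == (xi, yi):
        return 'Eq'
    p1 = -((xj - xi) * (xi1 - xi) + (yj - yi) * (yi1 - yi))   # the C1-polynomial
    p2 = -((xj - xi) * (yi1 - yi) - (yj - yi) * (xi1 - xi))   # the C3-polynomial
    ns = 'N' if p1 < 0 else ('S' if p1 > 0 else '')           # north/south component
    ew = 'E' if p2 < 0 else ('W' if p2 > 0 else '')           # east/west component
    return ns + ew
```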

5.3. A Characterization of Double-Cross Similarity of Polylines in Terms of Their Local Carrier Order

In this section, we give a geometric characterization of double-cross similarity of polylines in terms of their local carrier orders. The main theorem that we prove in this section is the following.
Theorem 19. 
Let P and Q be polylines of equal size. Then, P and Q are double-cross similar if and only if they are local-carrier-order similar. That is
DCM(P) = DCM(Q) if and only if LCO(P) = LCO(Q).
The two directions of this theorem are proven in Lemma 20 and Lemma 22 (or their immediate Corollaries 21 and 23).
Lemma 20. 
Let P = p_0, p_1, …, p_N be a polyline. For i, j with 0 ≤ i ≤ j < N, we can derive the value of the 4-tuple DCM(P)[i, j] = (C_1 C_2 C_3 C_4) from LCO(P, p_i) and LCO(P, p_j).
Proof. 
Let P = p_0, p_1, …, p_N be a polyline of size N. If p_j ∈ Eq(p_i), then DCM(P)[i, j] = (0 0 0 0). This is in particular true if i = j.
If p_j ∉ Eq(p_i), then the following twelve easily observable facts show how to determine C_1, C_2, C_3 and C_4 (for instance, by a detailed inspection of Figure 5).
C_1 is equivalent to:
  • C_1 = 0 if and only if p_i p_j¯ ∈ W(p_i) ∪ E(p_i);
  • C_1 = + if and only if p_i p_j¯ ∈ SE(p_i) ∪ S(p_i) ∪ SW(p_i);
  • C_1 = - if and only if p_i p_j¯ ∈ NW(p_i) ∪ N(p_i) ∪ NE(p_i).
C_2 is equivalent to:
  • C_2 = 0 if and only if p_j p_i¯ ∈ W(p_j) ∪ E(p_j);
  • C_2 = + if and only if p_j p_i¯ ∈ SE(p_j) ∪ S(p_j) ∪ SW(p_j);
  • C_2 = - if and only if p_j p_i¯ ∈ NW(p_j) ∪ N(p_j) ∪ NE(p_j).
C_3 is equivalent to:
  • C_3 = 0 if and only if p_i p_j¯ ∈ N(p_i) ∪ S(p_i);
  • C_3 = + if and only if p_i p_j¯ ∈ SW(p_i) ∪ W(p_i) ∪ NW(p_i);
  • C_3 = - if and only if p_i p_j¯ ∈ NE(p_i) ∪ E(p_i) ∪ SE(p_i).
C_4 is equivalent to:
  • C_4 = 0 if and only if p_j p_i¯ ∈ N(p_j) ∪ S(p_j);
  • C_4 = + if and only if p_j p_i¯ ∈ SW(p_j) ∪ W(p_j) ∪ NW(p_j);
  • C_4 = - if and only if p_j p_i¯ ∈ NE(p_j) ∪ E(p_j) ∪ SE(p_j).
This concludes the proof.
This lemma has the following immediate corollary.
Corollary 21. 
Let P be a polyline in R 2 . Then, the matrix DCM ( P ) can be reconstructed from the local carrier order LCO ( P ) .
Proof. 
Propositions 7 and 8 show that it is sufficient to know a double-cross matrix of a polyline on and above its diagonal in order to complete it below its diagonal. And Lemma 20 shows how the local carrier order gives the double-cross matrix on and above its diagonal. This concludes the proof.
Now, we turn to the other implication of Theorem 19.
Lemma 22. 
Let P = p_0, p_1, …, p_N be a polyline in R² of size N. If 0 ≤ i < j < N, then DCM(P)[i, j] contains enough information to derive to which set of LCO(P, p_i) the half-line p_i p_j¯ belongs and to which set of LCO(P, p_j) the half-line p_j p_i¯ belongs.
Proof. 
Let DCM(P)[i, j] = (C_1 C_2 C_3 C_4), for 0 ≤ i < j < N. Again, the following facts are easily observable (for instance, by a detailed inspection of Figure 5).
The pair C_1 C_3 is equivalent to:
  • - -: p_i p_j¯ ∈ NE(p_i);
  • - 0: p_i p_j¯ ∈ N(p_i);
  • - +: p_i p_j¯ ∈ NW(p_i);
  • 0 -: p_i p_j¯ ∈ E(p_i);
  • 0 0: excluded, since p_i = p_{i+1} is excluded;
  • 0 +: p_i p_j¯ ∈ W(p_i);
  • + -: p_i p_j¯ ∈ SE(p_i);
  • + 0: p_i p_j¯ ∈ S(p_i);
  • + +: p_i p_j¯ ∈ SW(p_i).
Analogously, the pair C_2 C_4 is equivalent to the same table with p_j p_i¯ and p_j in the place of p_i p_j¯ and p_i (for instance, C_2 C_4 = - - if and only if p_j p_i¯ ∈ NE(p_j), and C_2 C_4 = 0 0 is excluded, since p_j = p_{j+1} is excluded).
This concludes the proof.
This lemma has the following immediate corollary.
Corollary 23. 
Given DCM ( P ) , LCO ( P , p i ) can be constructed for all 0 i < N .
Combined, Corollaries 21 and 23 prove Theorem 19.

6. A Characterization of the Double-Cross Invariant Transformations of the Plane

In this section, we identify the transformations of the plane R 2 that leave the double-cross matrix of all polylines invariant. By a transformation we mean a continuous, bijective mapping of the plane R 2 onto itself.
If α : R 2 R 2 is a transformation and if p and q are points in R 2 , then we write α ( p q ) for α ( p ) α ( q ) .
What do we mean by applying a transformation of the plane to a polyline? The following definition says that we mean it to be the polyline formed by the transformed vertices.
Definition 24. 
Let α : R 2 R 2 be a transformation. Let P = ( x 0 , y 0 ) , ( x 1 , y 1 ) , , ( x N , y N ) be a polyline. We define α ( P ) to be the polyline α ( x 0 , y 0 ) , α ( x 1 , y 1 ) , , α ( x N , y N ) .
We remark that since a transformation α is a bijective function, the assumption in Definition 1, which says that no two consecutive vertices of a polyline are identical, will hold for α ( P ) if it holds for the polyline P.
We now define the notion of double-cross invariant transformation of the plane.
Definition 25. 
Let α : R 2 R 2 be a transformation. Let P be a polyline. We say that α leaves P invariant if P and α ( P ) are double-cross similar, that is, if DCM ( P ) = DCM ( α ( P ) ) .
We say that α is a double-cross invariant transformation if it leaves all polylines invariant. A group of transformations of R 2 is double-cross invariant if all its members are double-cross invariant transformations.
The main aim of this section is to prove the following theorem, which says that the largest group of transformations that is double-cross invariant consists of the translations, rotations and homotheties (or scalings) of the plane. The elements of this group are called the similarities of R². We remark that a homothety of the plane is a transformation of the form α_c : R² → R² : (x, y) ↦ c·(x, y), where c ∈ R and c ≠ 0.
Theorem 26. 
The largest group of transformations of R² that is double-cross invariant consists of the similarity transformations of the plane onto itself, that is, transformations of the form
α : R² → R² : (x, y) ↦ (c·(a·x - b·y) + d, c·(b·x + a·y) + e),
where a, b, c, d, e ∈ R, c ≠ 0 and a² + b² = 1.
We remark that the condition a² + b² = 1 implies that the matrix with rows (a, -b) and (b, a) is orthogonal and that a and b cannot both be zero. In fact, we can see a as cos φ and b as sin φ, where φ is the angle of the rotation expressed by this matrix.
We prove this theorem by proving three lemmas. Lemma 27 proves soundness and Lemma 29 proves completeness. Lemma 28 is a purely technical lemma.
Lemma 27. 
The translations, rotations and homotheties of the plane (that is, the transformations given in Theorem 26) are double-cross invariant transformations.
Proof. 
We consider the three types of transformations separately, since we can apply them one after the other. In all cases, we use the algebraic characterization, given by Theorem 5.
1. Translations. We have already remarked that all the factors appearing in the algebraic expressions, given by given by Theorem 5, that is ( x j x i ) , ( x i + 1 x i ) , ( y j y i ) , ( y i + 1 y i ) , ( x j + 1 x j ) and ( y j + 1 y j ) are differences in x-coordinates or differences in y-coordinates. A translation τ ( d , e ) : R 2 R 2 : ( x , y ) ( x + d , y + e ) , therefore leaves these differences unaltered. For instance, ( x j x i ) is transformed to ( x j + d ( x i + d ) ) , which is, of course, the original value ( x j x i ) . None of the expressions given by Theorem 5 are therefore changed and the double-cross condition remain the same.
2. Rotations. Let
ρ ( a , b ) : R 2 R 2 : x y a b b a · x y ,
with a 2 + b 2 = 1 , be a rotation (that fixes the origin).
The expression for C_1 is transformed to
(a·(x_j − x_i) − b·(y_j − y_i)) · (a·(x_{i+1} − x_i) − b·(y_{i+1} − y_i)) + (b·(x_j − x_i) + a·(y_j − y_i)) · (b·(x_{i+1} − x_i) + a·(y_{i+1} − y_i)),
which simplifies to (a² + b²) · ((x_j − x_i)·(x_{i+1} − x_i) + (y_j − y_i)·(y_{i+1} − y_i)), which is the original polynomial since a² + b² = 1. For C_2, C_3 and C_4, a similar straightforward computation shows that the polynomials remain the same.
3. Homotheties. A homothety α_c : R² → R² : (x, y) ↦ c · (x, y) transforms the differences (x_j − x_i), (x_{i+1} − x_i), (y_j − y_i), (y_{i+1} − y_i), (x_{j+1} − x_j) and (y_{j+1} − y_j) to (c·x_j − c·x_i), (c·x_{i+1} − c·x_i), (c·y_j − c·y_i), (c·y_{i+1} − c·y_i), (c·x_{j+1} − c·x_j) and (c·y_{j+1} − c·y_j). This means that the polynomials given by Theorem 5 are multiplied by c², which is strictly larger than zero for c ≠ 0. The signs of these polynomials are therefore unaltered, and the double-cross value of the scaled polyline is the same as that of the original one.
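The computations in this proof can be checked numerically. The sketch below uses only the dot-product polynomial that appears in the computation for C_1 above (we do not reproduce the full set of polynomials of Theorem 5 here) and verifies that its sign is unchanged under a sample translation, rotation and homothety; all function names are ours.

```python
import math
from typing import Tuple

Point = Tuple[float, float]

def sign(v: float) -> int:
    return (v > 0) - (v < 0)

def c1_polynomial(pi: Point, pi1: Point, pj: Point) -> float:
    """(x_j - x_i)(x_{i+1} - x_i) + (y_j - y_i)(y_{i+1} - y_i), as in the proof of Lemma 27."""
    return (pj[0] - pi[0]) * (pi1[0] - pi[0]) + (pj[1] - pi[1]) * (pi1[1] - pi[1])

def translate(p, d, e): return (p[0] + d, p[1] + e)
def rotate(p, phi): return (math.cos(phi) * p[0] - math.sin(phi) * p[1],
                            math.sin(phi) * p[0] + math.cos(phi) * p[1])
def scale(p, c): return (c * p[0], c * p[1])

pts = [(0.0, 0.0), (2.0, 1.0), (-1.0, 3.0)]
base = sign(c1_polynomial(*pts))
for name, f in [("translation", lambda p: translate(p, 5, -2)),
                ("rotation", lambda p: rotate(p, 0.7)),
                ("homothety", lambda p: scale(p, 3.0))]:
    assert sign(c1_polynomial(*[f(p) for p in pts])) == base, name
print("sign of the C1 polynomial is invariant in all three cases")
```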
Before we can turn to completeness, we need the following technical lemma.
Lemma 28. 
Let f : R → R : t ↦ f(t) be a strictly monotone increasing function. If
f((s + t)/2) = (f(s) + f(t))/2
for all s, t ∈ R, then f(t) = (f(1) − f(0)) · t + f(0).
Proof. 
Suppose that f is a function as described and suppose that there is a t_0 ∈ R such that f(t_0) ≠ (f(1) − f(0)) · t_0 + f(0). We remark that therefore t_0 cannot be 0 or 1.
If f(−t_0) were equal to (f(1) − f(0)) · (−t_0) + f(0), then it would follow from 2 · f(0) = 2 · f((t_0 + (−t_0))/2) = f(t_0) + f(−t_0) that also f(t_0) = (f(1) − f(0)) · t_0 + f(0), which contradicts the choice of t_0. So −t_0 is a counterexample as well and, by replacing t_0 with −t_0 if necessary, we may assume 0 < t_0.
Similarly, if f(t_0/2) were equal to (f(1) − f(0)) · (t_0/2) + f(0), then it would follow from 2 · f((0 + t_0)/2) = f(0) + f(t_0) that also f(t_0) = (f(1) − f(0)) · t_0 + f(0). So t_0/2 is a counterexample as well and, by halving t_0 repeatedly, we may assume 0 < t_0 < 1.
Claim. 
For any n ∈ N and any k with 0 ≤ k ≤ 2^n, we have that
f(k/2^n) = (f(1) − f(0)) · k/2^n + f(0).
We first prove this claim (by induction on n). For n = 0 and k = 0, 1, we respectively have f(0) = (f(1) − f(0)) · 0 + f(0) and f(1) = (f(1) − f(0)) · 1 + f(0).
Assume now that the claim is true for n. We prove that it holds for n + 1. We consider k/2^(n+1) and distinguish between the cases 0 ≤ k ≤ 2^n and k = k′ + 2^n with 0 < k′ ≤ 2^n. If 0 ≤ k ≤ 2^n, then f(k/2^(n+1)) = f((0 + k/2^n)/2) = (f(0) + f(k/2^n))/2, which by the induction hypothesis equals (f(0) + (f(1) − f(0)) · k/2^n + f(0))/2 or (f(1) − f(0)) · k/2^(n+1) + f(0).
If k = k′ + 2^n with 0 < k′ ≤ 2^n, then f(k/2^(n+1)) = f((1 + k′/2^n)/2) = (f(1) + f(k′/2^n))/2, which by the induction hypothesis equals (f(1) + (f(1) − f(0)) · k′/2^n + f(0))/2, which equals (f(1) − f(0))/2 + (f(1) − f(0)) · k′/2^(n+1) + f(0), or (f(1) − f(0)) · (k′ + 2^n)/2^(n+1) + f(0), which is (f(1) − f(0)) · k/2^(n+1) + f(0). This concludes the proof of the claim.
To conclude the proof, let 0 < t_0 < 1 and assume first that f(t_0) > (f(1) − f(0)) · t_0 + f(0). This means that t_0 < (f(t_0) − f(0))/(f(1) − f(0)). We remark that since f is assumed to be strictly monotone increasing, f(1) − f(0) ≠ 0 and therefore the division is allowed. Choose k and n such that
k/2^n ≤ t_0 < (k + 1)/2^n < (f(t_0) − f(0))/(f(1) − f(0)),
with 0 ≤ k ≤ 2^n. Then f((k + 1)/2^n) = (f(1) − f(0)) · (k + 1)/2^n + f(0) < f(t_0), although t_0 < (k + 1)/2^n, which contradicts the fact that f is strictly monotone increasing.
If we assume f(t_0) < (f(1) − f(0)) · t_0 + f(0) on the other hand, we have (f(t_0) − f(0))/(f(1) − f(0)) < t_0. Choose k and n such that
(f(t_0) − f(0))/(f(1) − f(0)) < k/2^n < t_0 ≤ (k + 1)/2^n,
with 0 ≤ k ≤ 2^n. Then f(k/2^n) = (f(1) − f(0)) · k/2^n + f(0) > f(t_0), although k/2^n < t_0, which contradicts the fact that f is strictly monotone increasing. In both cases, we obtain a contradiction and this concludes the proof.
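The claim above can also be illustrated computationally: starting from nothing but f(0) and f(1) and repeatedly applying the midpoint equation, the values of f at the dyadic rationals k/2^n are forced onto the line through (0, f(0)) and (1, f(1)). A minimal Python sketch, with hypothetical boundary values f(0) = 1 and f(1) = 4:

```python
def dyadic_values(f0: float, f1: float, n: int) -> list:
    """Values forced at k / 2**n by repeatedly applying f((s + t)/2) = (f(s) + f(t))/2."""
    values = [f0, f1]  # values of f at 0 and 1
    for _ in range(n):
        refined = []
        for left, right in zip(values, values[1:]):
            refined += [left, (left + right) / 2]  # the midpoint value is forced
        refined.append(values[-1])
        values = refined
    return values

# Hypothetical boundary values f(0) = 1 and f(1) = 4, refined three times.
f0, f1, n = 1.0, 4.0, 3
forced = dyadic_values(f0, f1, n)
linear = [(f1 - f0) * k / 2 ** n + f0 for k in range(2 ** n + 1)]
assert all(abs(a - b) < 1e-12 for a, b in zip(forced, linear))
print("the midpoint equation forces the linear formula at all k / 2**n")
```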
The following lemma proves completeness. The proof is kept elementary and we remark that it shows how the double-cross formalism can be used to construct midpoints.
Lemma 29. 
The similarity transformations of the plane (given in Theorem 26) are the only double-cross invariant transformations.
Proof. 
Let α : R² → R² be a double-cross invariant transformation.
(1) We consider polylines P = p_0, p_1, p_2, where p_0, p_1 and p_2 are collinear points with p_1 between p_0 and p_2. By the assumption in Definition 1, p_1 lies strictly between p_0 and p_2. The only relevant entry in the double-cross matrix of this polyline is DC(p_0 p_1, p_1 p_2), which is ( + 0 0 ). In α(P), DC(α(p_0 p_1), α(p_1 p_2)) should also be ( + 0 0 ). This implies that α(p_0), α(p_1) and α(p_2) are also collinear, with α(p_1) (strictly) between α(p_0) and α(p_2). This means that α preserves collinearity and betweenness, as depicted in Figure 15.
(2) We consider polylines P = p_0, p_1, p_2, where ∠(p_1 p_0, p_1 p_2) = 90°, that is, the polyline takes a right turn at p_1. The only relevant entry in the double-cross matrix of this polyline is again DC(p_0 p_1, p_1 p_2), which is now ( 0 0 ). In α(P), DC(α(p_0 p_1), α(p_1 p_2)) should also be ( 0 0 ). This means that α is a right-turn-preserving transformation, as illustrated in Figure 16. Similarly, α is a left-turn-preserving transformation.
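Whether a polyline goes straight, turns right or turns left at a vertex, as used in steps (1) and (2), can be read off from the sign of a cross product. The sketch below is a generic turn test of this kind, not the double-cross polynomials of Theorem 5, and the function name is ours.

```python
from typing import Tuple

Point = Tuple[float, float]

def turn_at(p0: Point, p1: Point, p2: Point) -> str:
    """Classify the turn of the polyline p0, p1, p2 at p1 via the z-component of the cross product."""
    cross = ((p1[0] - p0[0]) * (p2[1] - p1[1])
             - (p1[1] - p0[1]) * (p2[0] - p1[0]))
    if cross > 0:
        return "left turn"
    if cross < 0:
        return "right turn"
    return "straight (collinear)"

print(turn_at((0.0, 0.0), (1.0, 0.0), (2.0, 0.0)))   # straight, as in step (1)
print(turn_at((0.0, 0.0), (1.0, 0.0), (1.0, -1.0)))  # right turn, as in step (2)
```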
(3) We consider the polyline P = p_0, p_1, p_2, p_3, p_4, p_5, p_6, p_7, p_8, p_9, p_10, p_11, p_12, with p_0 = p_4 = p_7 = p_12 = (0, 0), p_1 = p_5 = (0, 1), p_2 = p_10 = (1, 1), p_3 = p_8 = (1, 0) and p_6 = p_9 = (1/2, 1/2), depicted in Figure 17. This polyline traces a square with its two diagonals, making six 90° right-turns and two 90° left-turns along the way. It is also closed in the sense that its start and end vertex are equal. The transformation α, which according to (2) preserves right and left turns, therefore has to map P again to a square with its diagonals. This means that α is a square-preserving transformation. In particular, α preserves parallel line segments. Also, p_6, which is the midpoint between p_0 and p_2, is mapped to α(p_6), which should be the midpoint between α(p_0) and α(p_2). This means that α is a midpoint-preserving transformation.
Suppose α(0, 0) = (a, b). If τ_{(−a,−b)} is the translation (x, y) ↦ (x − a, y − b), then (τ_{(−a,−b)} ∘ α)(0, 0) = (0, 0). Suppose (τ_{(−a,−b)} ∘ α)(1, 0) = (c, d). Let ρ_{(c,d)} be the rotation with (0, 0) as center that brings (c, d) to the positive x-axis, that is, to (√(c² + d²), 0). We remark that (c, d) cannot be the origin since α is assumed to be a bijective function, so τ_{(−a,−b)} ∘ α is also bijective. Furthermore, let σ_{√(c²+d²)} be the scaling (x, y) ↦ (1/√(c² + d²)) · (x, y) and let
β = σ_{√(c²+d²)} ∘ ρ_{(c,d)} ∘ τ_{(−a,−b)} ∘ α.
Then we have that β ( 0 , 0 ) = ( 0 , 0 ) and β ( 1 , 0 ) = ( 1 , 0 ) .
Since α is a double-cross invariant transformation by assumption, and since σ_{√(c²+d²)}, ρ_{(c,d)} and τ_{(−a,−b)} are double-cross invariant transformations by Lemma 27, β is also a double-cross invariant transformation. Moreover, β inherits from α the properties of preserving betweenness, collinearity, right and left turns, squares, parallel line segments and midpoints. Because β preserves squares, we also have β(0, 1) = (0, 1).
We now claim the following.
Claim. 
The transformation β is the identity.
The proof of this claim finishes the proof of the lemma. Indeed, then we have
α = τ_{(−a,−b)}⁻¹ ∘ ρ_{(c,d)}⁻¹ ∘ σ_{√(c²+d²)}⁻¹,
which is of the required form.
Proof of the claim: First, we show that β is the identity on the x-axis and next we do the same for all lines perpendicular to the x-axis. Hereto, we consider the function
β_x : R → R : x ↦ β_x(x) := π_x(β(x, 0)),
where π_x is the projection on the first component, that is, π_x(x, y) := x. Since β(0, 0) = (0, 0) and β(1, 0) = (1, 0) and β preserves collinearity, β maps the x-axis onto the x-axis and we have β_x(0) = 0 and β_x(1) = 1. Furthermore, since β, and hence β_x, preserves betweenness, β_x is strictly monotone increasing. Indeed, let s, t ∈ R with s < t. With respect to 0 and 1, we can consider the thirteen possible locations of s and t: s < t < 0; s < t = 0 < 1; s < 0 < t < 1; s < 0 < t = 1; s < 0 < 1 < t; s = 0 < t < 1; s = 0 < t = 1; s = 0 < 1 < t; 0 < s < t < 1; 0 < s < t = 1; 0 < s < 1 < t; 0 < s = 1 < t; and 0 < 1 < s < t. In all cases, except s = 0 < t = 1, three points are involved, so we can use the fact that β preserves betweenness to show that β_x(s) < β_x(t). In the case s = 0 < t = 1, we have β_x(s) = β_x(0) = 0 < 1 = β_x(1) = β_x(t). Finally, since β preserves midpoints, we also have, for β_x,
β_x((s + t)/2) = (β_x(s) + β_x(t))/2
for all s, t ∈ R. All the conditions to apply Lemma 28 are therefore fulfilled, and we get β_x(x) = (β_x(1) − β_x(0)) · x + β_x(0) = (1 − 0) · x + 0 = x.
Now, we fix some x_0 ∈ R and consider the function
β_{x_0,y} : R → R : y ↦ β_{x_0,y}(y) := π_y(β(x_0, y)),
where π_y(x, y) := y. Since β preserves parallel line segments and maps the y-axis onto itself, the image of the line with equation x = x_0 is again a line parallel to the y-axis; since β is the identity on the x-axis, this image passes through (x_0, 0) and is therefore the line x = x_0 itself. Since β also maps the rectangle given by the polyline
P = (0, 0), (0, 1), (1, 1), (x_0, 1), (x_0, 0), (1, 0), (0, 0), (0, 1)
(for x_0 = 1, we can omit (x_0, 1) and (x_0, 0) from the list) onto itself, we again have β_{x_0,y}(0) = 0 and β_{x_0,y}(1) = 1. The function β_{x_0,y} also inherits from β the property of preserving midpoints and is strictly monotone increasing on the line x = x_0. So, again we can apply Lemma 28 to show that β_{x_0,y} is the identity.
Since, for every point (x_0, y), we have β(x_0, y) = (β_x(x_0), β_{x_0,y}(y)) = (x_0, y), we obtain that β is the identity transformation. This finishes the proof of the claim and also of the lemma.

7. Conclusions

We have studied the double-cross matrix descriptions of polylines in the two-dimensional plane from an algebraic and geometric point of view. We have first given an algebraic characterization of the double-cross matrix of a polyline. This algebraic characterization allowed us to prove some basic properties of double-cross matrices. We have also given a geometric characterization of double-cross similarity of two polylines by means of the notion of the local carrier orders of polylines. Finally, we have identified the transformations of the plane that leave the double-cross matrix of all polylines in the two-dimensional plane invariant.
Research on double-cross matrices gives rise to many questions, of which we name a few here. Firstly, variants of double crosses can be imagined, for instance, to include the temporal dimension of moving object data. Another variant would be to rotate the double cross by 45°. This would make the notions of straight ahead, back, right and left more relative. We can also envisage double crosses with a finer structure. They may have 8, 16 or more lines determining the “crosses”, as in the OPRA_n approach [29] (in some sense, the double-cross formalism can be seen as OPRA_2).
Our algebraic characterization of the double-cross matrix of a polyline also raises the question of the realisability of double-cross matrices. This question amounts to the following: given an N × N matrix of 4-tuples over the set {−, 0, +}, decide whether it is the double-cross matrix of some polyline. A matrix in which some diagonal entry differs from (0 0 0 0), for instance, cannot be the double-cross matrix of a polyline. This decision problem leads to non-trivial problems in computational algebraic geometry.
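The diagonal condition just mentioned is easy to test mechanically. The sketch below implements only this one necessary check and is in no way a decision procedure for realisability; encoding the 4-tuples as 4-character strings over {'-', '0', '+'} is our own convention.

```python
from typing import List

def diagonal_ok(matrix: List[List[str]]) -> bool:
    """Check the necessary condition that every diagonal entry is (0 0 0 0),
    with entries encoded as 4-character strings over {'-', '0', '+'}."""
    return all(matrix[i][i] == "0000" for i in range(len(matrix)))

candidate = [["0000", "+-0+"],
             ["-+00", "0+00"]]   # second diagonal entry is not (0 0 0 0)
print(diagonal_ok(candidate))    # False: this cannot be a double-cross matrix
```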

Author Contributions

Both authors contributed equally.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Güting, R.H.; Schneider, M. Moving Objects Databases; Morgan Kaufmann: Burlington, MA, USA, 2005.
2. Bookstein, F.L. Size and shape spaces for landmark data in two dimensions. Stat. Sci. 1986, 1, 181–242.
3. Dryden, I.; Mardia, K.V. Statistical Shape Analysis; Wiley: Hoboken, NJ, USA, 1998.
4. Kent, J.T.; Mardia, K.V. Shape, procrustes tangent projections and bilateral symmetry. Biometrika 2001, 88, 469–485.
5. Mokhtarian, F.; Mackworth, A.K. A theory of multiscale, curvature-based shape representation for planar curves. IEEE Trans. Pattern Anal. Mach. Intell. 1992, 14, 789–805.
6. Gero, J.S. Representation and reasoning about shapes: cognitive and computational studies in visual reasoning in design. In Lecture Notes in Computer Science, Proceedings of the International Conference on Spatial Information Theory (COSIT 1999), Stade, Germany, 25–29 August 1999; pp. 315–330.
7. Meathrel, R.C. A General Theory of Boundary-Based Qualitative Representation of 2D Shape. Ph.D. Thesis, University of Exeter, Exeter, UK, 2001.
8. Schlieder, C. Geographic objects with indeterminate boundaries. In Qualitative Shape Representation; Taylor & Francis: Oxfordshire, UK, 1996; pp. 123–140.
9. Gottfried, B. Tripartite line tracks qualitative curvature information. In Proceedings of the International Conference on Spatial Information Theory (COSIT 2003), Ittingen, Switzerland, 24–28 September 2003.
10. Jungert, E. Symbolic spatial reasoning on object shapes for qualitative matching. In Proceedings of the International Conference on Spatial Information Theory (COSIT 1993), Marina, Italy, 19–22 September 1993.
11. Kulik, L.; Egenhofer, M. Linearized terrain: Languages for silhouette representations. In Proceedings of the International Conference on Spatial Information Theory (COSIT 2003), Kartause Ittingen, Switzerland, 24–28 September 2003.
12. Latecki, L.J.; Lakämper, R. Shape similarity measure based on correspondence of visual parts. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 1185–1190.
13. Leyton, M. A process-grammar for shape. Artif. Intell. 1988, 34, 213–247.
14. Giannotti, F.; Pedreschi, D. Mobility, Data Mining and Privacy; Springer: Berlin, Germany, 2009.
15. Freksa, C. Using orientation information for qualitative spatial reasoning. In Lecture Notes in Computer Science, Proceedings of the Spatio-Temporal Reasoning (GIS 1992), Pisa, Italy, 21–23 September 1992; pp. 162–178.
16. Cohn, A.G.; Renz, J. Qualitative spatial reasoning. In Handbook of Knowledge Representation; Elsevier: Philadelphia, PA, USA, 2007; pp. 551–596.
17. Long, J.A.; Nelson, T.A. A review of quantitative methods for movement data. Int. J. Geogr. Inf. Sci. 2013, 27, 292–318.
18. Renz, J.; Nebel, B. Qualitative spatial reasoning using constraint calculi. In Handbook of Spatial Logics; Springer: Berlin, Germany, 2007; pp. 161–215.
19. Kreutzmann, A.; Wolter, D. Qualitative spatial and temporal reasoning with AND/OR linear programming. In Proceedings of the 21st European Conference on Artificial Intelligence (ECAI 2014), Bethlehem, Czech, 18–22 August 2014.
20. Wolter, D.; Kreutzmann, A. Analogical representation of RCC-8 for neighborhood-based qualitative spatial reasoning. In Proceedings of the 38th Annual German Conference on AI: Advances in Artificial Intelligence (KI 2015), Dresden, Germany, 21–25 September 2015.
21. Wolter, D.; Kirsch, A. Leveraging qualitative reasoning to learning manipulation tasks. Robotics 2015, 4, 253–283.
22. Wallgrün, J.O.; Frommberger, L.; Wolter, D.; Dylla, F.; Freksa, C. Qualitative spatial representation and reasoning in the SparQ-toolbox. In Proceedings of the International Conference Spatial Cognition, Freiburg, Germany, 15–19 September 2008.
23. Westphal, M.; Wölfl, S.; Gantner, Z. GQR: A fast solver for binary qualitative constraint networks. In Proceedings of the AAAI Spring Symposium: Benchmarking of Qualitative Spatial and Temporal Reasoning Systems, Stanford, CA, USA, 23–25 March 2009.
24. Condotta, J.F.; Kaci, S.; Schwind, N. A framework for merging qualitative constraints networks. In Proceedings of the Twenty-First International Florida Artificial Intelligence Research Society Conference, Coconut Grove, FL, USA, 15–17 May 2008.
25. Delafontaine, M.; Cohn, A.G.; Van de Weghe, N. Implementing a qualitative calculus to analyse moving point objects. Expert Syst. Appl. 2011, 38, 5187–5196.
26. Zimmermann, K.; Freksa, C. Qualitative spatial reasoning using orientation, distance, and path knowledge. Appl. Intell. 1996, 6, 49–58.
27. Kuijpers, B.; Moelans, B. Towards a geometric interpretation of double-cross matrix-based similarity of polylines. In Proceedings of the 16th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems (ACM GIS 2008), Irvine, CA, USA, 5–7 November 2008.
28. Mavridis, N.; Bellotto, N.; Iliopoulos, K.; Van de Weghe, N. QTC3D: Extending the qualitative trajectory calculus to three dimensions. Inf. Sci. 2015, 322, 20–30.
29. Moratz, R.; Renz, J.; Wolter, D. Qualitative spatial reasoning about line segments. In Proceedings of the 21st European Conference on Artificial Intelligence (ECAI 2014), Bethlehem, Czech, 18–22 August 2014.
30. Moratz, R.; Lücke, D.; Mossakowski, T. A condensed semantics for qualitative spatial reasoning about oriented straight line segments. Artif. Intell. 2011, 175, 2099–2127.
31. Mathematica, Wolfram Research. Available online: //www.wolfram.com (accessed on 24 August 2016).
32. Forbus, K.D. Qualitative physics: past present and future. In Readings in Qualitative Reasoning about Physical Systems; Morgan Kaufmann: Burlington, MA, USA, 1990; pp. 11–39.
Figure 1. An example of a polyline P_1 = (0, 0), (2, 0), (5/2, 1), (3, 0), (4, 0) and a polyline P_2 = (0, 0), (1, 0), (2, 0), (5/2, 1), (3, 0), (4, 0). Although they have a different vertex set and a different size, still sem(P_1) = sem(P_2).
Figure 2. An example of a polyline P = (0, 0), (0, 1), (2, 1), (3, 0), (4, 2), (1, 0), (4, 0) and its semantics sem(P). We see that two of the line segments of its semantics intersect in a point that is not a vertex. The last line segment of the polyline intersects two other line segments in a vertex.
Figure 3. The counter-clockwise angle ∠(p_i p_j, p_i p_k) from p_i p_j to p_i p_k.
Figure 4. The double cross (in blue): for p_i p_{i+1} and p_j p_{j+1} we have the lines L_{ij}, P_{ij}^i and P_{ij}^j.
Figure 5. The quadrants and half-lines where C_1, C_2, C_3 and C_4 take different values.
Figure 6. An example of a polyline.
Figure 7. The polylines P and Q are double-cross similar.
Figure 8. An example of a polyline P = (0, 1), (0, 0), (1, 0), (1, 1) (in black). Next, its first generalizations P_2, P_4 and P_8 are shown in blue, in red and in green, respectively.
Figure 9. A map of Belgium in (a) and some sketches in (b–d).
Figure 10. The seven basic shapes of terrain features in (a–g). They are modelled as polylines and named as shown in the figure.
Figure 11. Part (a) shows sketches of a bowler hat and a bow tie. Part (b) shows the municipalities in the province of Limburg that best resemble these sketches, in black and grey, respectively.
Figure 12. Two polylines that differ in the length of their third segment.
Figure 13. An example of the half-lines p_i p_j (in blue) and p_j p_i (in green) and the two perpendicular half-lines at p_i (in red).
Figure 14. A polyline P = p_0, p_1, p_2, p_3, p_4 with its carriers (in green) and its perpendiculars (in red).
Figure 15. A collinearity and betweenness-preserving transformation of the plane.
Figure 16. A right-turn-preserving transformation of the plane.
Figure 17. A polyline that is a square with its two diagonals. The six 90° right-turns are indicated in bold.
Table 1. The entries of the double-cross matrix of the polyline of Figure 6.
p 0 p 1 p 1 p 2 p 2 p 3 p 3 p 4 p 4 p 5
p 0 p 1 ( 0 0 0 0 ) ( 0 + ) ( + + ) ( + ) ( + + )
p 1 p 2 ( + 0 ) ( 0 0 0 0 ) ( 0 + ) ( + + + ) ( + + )
p 2 p 3 ( + ) ( + 0 ) ( 0 0 0 0 ) ( + 0 ) ( + )
p 3 p 4 ( + ) ( + + + ) ( + 0 ) ( 0 0 0 0 ) ( 0 + )
p 4 p 5 ( + + ) ( + + ) ( + ) ( + 0 ) ( 0 0 0 0 )
Table 2. The entries of the double-cross matrix of the polylines of Figure 7.
p 0 p 1 p 1 p 2 p 2 p 3 p 3 p 4 p 4 p 5
p 0 p 1 ( 0 0 0 0 ) ( + 0 + ) ( + + + ) ( + + + ) ( + + + )
p 1 p 2 ( + + 0 ) ( 0 0 0 0 ) ( + 0 + ) ( + + + ) ( + + + )
p 2 p 3 ( + + + ) ( + + 0 ) ( 0 0 0 0 ) ( + 0 + ) ( + + + )
p 3 p 4 ( + + + ) ( + + + ) ( + + 0 ) ( 0 0 0 0 ) ( + 0 + )
p 4 p 5 ( + + + ) ( + + + ) ( + + + ) ( + + 0 ) ( 0 0 0 0 )
Table 3. Similarity between polygons of Figure 9 using Δ H .
Figure   (a)     (b)     (c)     (d)
(a)      100%    99%     87%     66%
(b)              100%    87%     67%
(c)                      100%    80%
(d)                              100%
