Article sponsored by Extraco, Misturas, Lógica, Enmacosa and Ingeniería InSitu, within the SITEGI project, co-financed by the CDTI. (2012)
Authors: Aurora Cuartero, Julia Armesto, Pablo G. Rodríguez and Pedro Arias
Continued on: http://carreteras-laser-escaner.blogspot.com/2014/09/error-analysis-of-terrestrial-laser.html
Abstract:
This paper presents a complete analysis of the positional errors of terrestrial laser scanning (TLS) data based on spherical statistics and 3D graphs. Spherical statistics are preferred because of the 3D vectorial nature of the spatial error. Error vectors have three metric elements (one module and two angles) that were analyzed by spherical statistics. A case study is presented and discussed in detail. Errors were calculated using 53 check points (CPs), whose coordinates were measured by a digitizer with sub-millimetre accuracy. The positional accuracy was analyzed by both the conventional method (modular error analysis) and the proposed method (angular error analysis) using 3D graphics and numerical spherical statistics. Two packages in the R programming language were developed to produce the graphics automatically. The results indicate that the proposed method is advantageous because it offers a more complete analysis of the positional accuracy, including the angular error components, the uniformity of the vector distribution and the error isotropy, in addition to the modular error component analyzed by linear statistics.
1. Introduction
In the last decade, terrestrial laser scanning (TLS) systems have appeared on the market and found a firm place in geodetic metrology. When TLS laser scanners were introduced on the market, their performance was rather poor, with a measurement uncertainty generally in the range of centimeters. However, with the progressive improvement of the technology and the consequent increase in measurement precision, the potential working range has widened from some meters to hundreds of meters, with applications in forensics [1], forestry [2], environment [3,4], geology [5], structure analysis [6,7], ship building [8] and archaeology [9,10]. A complete overview of TLS technology, processing methods and applications is presented in Vosselman and Maas [11]. Furthermore, Lemmens [12] gives an updated description of different commercial instruments and their technical characteristics. Whenever metric data are obtained from scanned data, the errors must be known. The need for calibration has been widely stated [13]. However, for active sensors, standards for error evaluation have not yet been established. With the publication of ISO standard 17123 part 8 (GNSS field measurement systems in Real Time Kinematic –RTK–) in September 2007, TLS systems are the only remaining geodetic measuring systems without standardised field test procedures. In accordance with the chair of ISO TC172/SC6 and with the support of Leica Geosystems AG (Heerbrugg, Switzerland), basic ideas for simplified and full field test procedures for TLS have been worked out in a diploma thesis at the University of Applied Sciences Northwestern Switzerland [14]. Basically, the computed (experimental) standard deviations are compared on the basis of statistical tests. The most important results of the thesis are summarised by Gottwald [15]. The use of these proposals is under evaluation by the ISO Technical Committee (ISO TC172/SC6).
As a result of the absence of standards, the accuracy specifications given by laser scanner producers in their publications and pamphlets are not comparable [16]. Experience shows that sometimes these should not be trusted. The instruments that are built in a small series vary from instrument to instrument and depend on the individual calibration and the care that has been taken in handling the instrument [17]. Furthermore, the terms error, accuracy, and precision are sometimes misused.
The first suggestions for system calibration, system tests and accuracy checks for TLS correspond to Lichti [18,19]. Most of the published investigations are based on field or laboratory tests [16,20]. Several researchers have published methods and results concerning accuracy tests with laser scanners [19,21,22].
Reshetyuk [23] estimated the position of the target centre from a number of points and then performed a self-calibration of different scanners: the rigid-body transformation parameters between the scanner and external coordinate systems were estimated for all of the scans, together with the calibration parameters, in a parametric least squares (LS) adjustment, along with the coordinate "3D residuals". The "technical" parameters representing the mechanical-optical stability, such as the geometry of the axes, the eccentricity and the addition constant, were obtained for certain instruments [24]. For the accuracy of the distance measurement, true and measured distances were compared, obtaining standard deviations.
Mechelke et al. [25] present an investigation into accuracy behaviour through distances derived from point clouds of a 3D test field: accuracy evaluation of 3D laser scanning systems, accuracy tests of distance measurements against reference values, accuracy tests of inclination compensation, the influence of the laser beam's angle of incidence on 3D accuracy, investigations into scanning noise, and investigations into the influence of object colour on distance measurements.
Kersten et al. [20] obtained the average and maximum deviations from the sphere and target centres (after prior alignment) and compared the distances determined in all combinations between reference points. Furthermore, the trunnion error and the influence of the colour and material of the scanned surface were evaluated.
Lichti [26] presented the full mathematical model for a point-based photogrammetric approach to FARO LS880HE TLS self-calibration. Schneider [27] presents the calibration and analysis of the Riegl LMS-Z420i terrestrial laser scanner, showing the precision improvement of the adjusted observations as a result of a stepwise addition of calibration parameters.
The International Organization for Standardization (ISO) was formed in 1947 as a non-governmental federation of standardization bodies from over 60 countries. The ISO is headquartered in Geneva, Switzerland. The United States is represented by ANSI.
In 1984, ISO published the first edition of the "International Vocabulary of Metrology – Basic and General Concepts and Associated Terms (VIM)" [28]. The international standard ISO 5725-1 [29] presents general principles and definitions of metrological concepts. It is appropriate to review some of these terms:
Precision: the degree of closeness between independent measurement results obtained under particular established conditions; it depends on random factors only. Precision depends only on the distribution of random errors and does not relate to the true value or the specified value. The measure of precision is usually expressed in terms of imprecision and computed as the standard (root-mean-square) deviation of the test results obtained under the defined conditions.
Accuracy: The closeness of agreement between a test result and the accepted reference value. The term accuracy, when applied to a set of test results, involves a combination of random components and a common systematic error or bias component.
Uncertainty: a parameter, associated with the result of a measurement, that characterizes the dispersion of the values that could reasonably be attributed to the measurand. The parameter may be, for example, a standard deviation, or the half-width of an interval having a stated level of confidence.
The most common descriptor in the geosciences is the root mean square error (RMSE). The frequently used mean error (ME) and error standard deviation (S) are also given in accuracy tests for a more complete statistical description of errors. However, none of these descriptive statistics (RMSE, ME, S) reports more than a global summary based on comparison with a limited sample of points, and only from the perspective of the modulus of the errors (the vertical and horizontal components are not considered). The two angles of the error can be found through the statistical analysis of spherical data. This approach to error evaluation has been used in the earth sciences [30], geology [31], biology [32], meteorology [33], palaeomagnetism [34], electronics [35] and biomechanics [36]. The statistical analysis of spherical data started with R.A. Fisher [37], who developed a distribution for angular errors on a sphere. N.I. Fisher [38] investigated various properties of the spherical median and discussed equivalents of the sign test. Later, the book [39] was devoted purely to the analysis of spherical data.
While several authors have contributed accuracy evaluations of 3D laser scanning systems, a 3D statistical analysis of the errors has not yet been carried out with available scanner data. In brief, the aim of this work is to present a novel proposal to analyse the positional accuracy of TLS data with a more complete analysis than currently available. Our proposal is characterised by two features: the use of check points (CPs) acquired by a more accurate technology (Proliner) and the analysis of errors by means of spherical statistics.
2. Methodological Proposal for Error Analysis
In this proposal, an alternative way to analyse the error by means of spherical statistics is presented. The error of a control point i is defined as the value eᵢ = cᵢ − cⱼ, where cᵢ is the measured coordinate of the point and cⱼ is the "real" or "true" coordinate, estimated by more precise methods.
The deviation between the true position (true data) and the corresponding point in the TLS data is estimated as a vector. Each vector is defined by means of its modulus and two angles (colatitude and longitude, i.e., inclination and azimuth), which allows us to analyse the errors in 3D space in spherical coordinates, matching the way TLS data are measured. Spherical coordinates, also called spherical polar coordinates, are a system of curvilinear coordinates that is natural for describing positions on a sphere (Figure 1).
Figure 1. TLS intrinsic coordinate system.
The pairwise comparison of measured and reference coordinates allows the calculation of the ME, S, RMSE or similar statistics. The RMSE is the square root of the average of the set of squared differences between the dataset coordinate values and coordinate values from an independent source of higher accuracy for identical points.
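As an illustration of this pairwise comparison, the following base-R sketch computes the per-coordinate error components and their RMSE; the data frames `meas` and `ref`, their column names and their values are hypothetical.

```r
# Hypothetical example: coordinates of identical check points measured by TLS
# ('meas') and by a higher-accuracy source ('ref').
meas <- data.frame(x = c(1.003, 2.104, 0.498),
                   y = c(0.998, 1.897, 1.502),
                   z = c(0.502, 0.751, 0.249))
ref  <- data.frame(x = c(1.000, 2.100, 0.500),
                   y = c(1.000, 1.900, 1.500),
                   z = c(0.500, 0.750, 0.250))

# Per-coordinate error components of each check point.
dx <- meas$x - ref$x
dy <- meas$y - ref$y
dz <- meas$z - ref$z

# RMSE per coordinate: square root of the mean squared difference.
rmse <- function(e) sqrt(mean(e^2))
c(rmse_x = rmse(dx), rmse_y = rmse(dy), rmse_z = rmse(dz))
```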
The proposed statistical procedure includes the basic statistical calculations (for modular and angular data) and the main tests for spherical distributions. The error analysis proposed in this paper consists of several parts. In the first part, the modular error component was analysed by linear statistics, as in the conventional method. In the second part, the angular error components were analysed as well. In the third part, the most innovative part of the analysis, a graphical analysis was developed with 2D and 3D graphics using two packages of the R programming language. In the last part, a study of the uniformity and normality of the error distribution was done to complete the data analysis.
2.1. Statistical Analysis of Modular Error
The error is a vector with three Cartesian components, one for each axis X, Y and Z, denoted Δx, Δy and Δz, respectively. The modular error (Δm) is the magnitude equal to the square root of the sum of the squares of these components:

Δm = √(Δx² + Δy² + Δz²)
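Continuing the sketch above, the modular error is a one-line computation:

```r
# Modular error: Euclidean length of each 3D error vector.
dm <- sqrt(dx^2 + dy^2 + dz^2)
```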
The basic statistics of the modulus (no angles are considered) are the sample mean, the minimum and maximum values, the standard deviation and the root mean square error (RMSE). The sample mean (μ) is calculated by taking the sum of all the data values and dividing by the total number of data values. The sample mean is a measure of location, commonly called the average:

μ = (1/n) Σ Δmᵢ
The range of errors is the difference between the largest (maximum) and the smallest (minimum) calculated error. It is a measure of the spread or the dispersion of the error observations.
There are several measures of dispersion, the most common being the standard deviation. These measures indicate to what degree the individual observations of a data set are dispersed or 'spread out' around their mean.
The standard deviation (S or SD) is calculated by taking the square root of the variance. The sample variance is the sum of the squared deviations from the average divided by one less than the number of observations in the data set. It is a measure of the spread or dispersion of a set of data. In measurements, high precision is associated with low dispersion:

S = √( Σ (Δmᵢ − μ)² / (n − 1) )
The root mean square error (RMSE), or standard error, is the square root of the average of the set of squared differences between dataset coordinate values and coordinate values from an independent source of higher accuracy for identical points. The RMSE is a good measure of accuracy:

RMSE = √( (1/n) Σ Δmᵢ² )
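Continuing the same sketch, these descriptive statistics of the modular error can be computed in base R as follows:

```r
# Descriptive statistics of the modular error (dm from the sketch above).
n    <- length(dm)
mu   <- mean(dm)            # sample mean
rng  <- range(dm)           # minimum and maximum
S    <- sd(dm)              # standard deviation (n - 1 in the denominator)
RMSE <- sqrt(mean(dm^2))    # root mean square error
c(mean = mu, min = rng[1], max = rng[2], sd = S, rmse = RMSE)
```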
The RMSE is equivalent to the SD in the absence of bias (i.e., if ME = 0). It is important to calculate both magnitudes, SD and RMSE, because the first refers to the precision and the second to the accuracy.

2.2. Statistical Analysis of Angular Errors

Both a magnitude and a direction must be specified for a vector quantity, in contrast to a scalar quantity, which can be quantified with just a number. In the same way that a vector has three Cartesian components, it can also be decomposed into polar components: modular distance, vertical angle and horizontal angle. The modular component was analysed in the previous step (Section 2.1).
There are different conventions for considering the angles. In this work, the following convention is proposed because it is the most appropriate to TLS and similar to geographical nomenclature (Figure 1):
The vertical angle (θ) is the angle measured clockwise from the positive z-axis to the point, with 0 ≤ θ ≤ π.
The horizontal angle (φ) is the angle measured anticlockwise from the positive y-axis to the point projected onto the X–Y plane, with −π ≤ φ ≤ π.
Angles are considered spherical data, so each one is analysed as a point (unit vector) on a unit sphere [39]. For a sample of n spherical data (θ₁, φ₁), …, (θₙ, φₙ) (polar coordinates) with corresponding direction cosines (xᵢ, yᵢ, zᵢ), the resultant length R of the data is:

R = √( (Σ xᵢ)² + (Σ yᵢ)² + (Σ zᵢ)² )
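As a sketch, the conversion of the error components from the earlier fragments into this convention, together with the direction cosines and the resultant length, might look as follows in base R; the exact atan2 mapping of "anticlockwise from the positive y-axis" is our assumption and should be checked against Figure 1.

```r
# Spherical components of each error vector (convention of Figure 1).
# theta: vertical angle from the positive z-axis, in [0, pi].
# phi:   horizontal angle anticlockwise from the positive y-axis, in (-pi, pi];
#        the atan2 form below is our reading of that convention.
theta <- acos(dz / dm)
phi   <- atan2(-dx, dy)

# Direction cosines of the unit error vectors.
ux <- dx / dm
uy <- dy / dm
uz <- dz / dm

# Resultant length R of the sample of unit vectors.
R <- sqrt(sum(ux)^2 + sum(uy)^2 + sum(uz)^2)
```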
The basic statistics for these angular (spherical) data are the following:
Mean directions (θ̄, φ̄): the vectorial addition of the n spherical data gives a vector resultant (R). The directions of this vector are the mean directions, i.e., the average direction of each angular component.
Mean module (R̄): the mean module can be obtained from the length of the vector resultant by:

R̄ = R / n
Because we work with unit vectors, R̄ is observed in the range (0, 1). Hence, R̄ = 1 signifies that all the spherical data are coincident. However, R̄ = 0 does not imply a uniform distribution.
Concentration parameter (κ): this parameter is a measure of the concentration of the data in a preferred orientation or distribution. If κ = 0, the distribution is uniform; if κ tends to ∞, the distribution is concentrated at one point. The Fisher distribution for spherical data (3D) is the equivalent of the von Mises distribution for circular data (2D) and of the normal distribution for linear data (1D).
Circular standard deviation (υ): This parameter for spherical data is similar to the S parameter for circular and linear data.
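A base-R sketch of these statistics, continuing the fragments above; the κ approximation (an approximate maximum-likelihood estimate for the Fisher distribution) and the formula for υ (taken by analogy with the circular case) are our assumptions, not taken from the paper, and should be checked against [39].

```r
# Mean resultant length: in (0, 1); 1 means all unit vectors coincide.
Rbar <- R / n

# Mean directions: unit vector along the vector resultant, re-expressed
# in the angular convention used above.
mdir       <- c(sum(ux), sum(uy), sum(uz)) / R
theta_mean <- acos(mdir[3])
phi_mean   <- atan2(-mdir[1], mdir[2])

# Concentration parameter kappa: assumed approximation for the Fisher
# distribution on the sphere (p = 3 dimensions).
kappa <- Rbar * (3 - Rbar^2) / (1 - Rbar^2)

# Spherical standard deviation, by analogy with the circular-data formula
# (assumed definition; check against [39]).
upsilon <- sqrt(-2 * log(Rbar))
```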
For more details, see [39].

2.3. Analysis of Uniformity and Distribution of Error Data

As with linear and circular data, when analysing spherical data one should test for uniformity and for normality (the Fisher distribution) of the distributions. The Fisher distribution is a symmetric unimodal distribution that can be considered the analogue of the von Mises distribution for circular data and of the normal distribution for linear data. For spherical data, two tests are used:
Rayleigh test: a uniformity test that detects a single modal direction in a sample of data. This test, developed by Lord Rayleigh in 1919, tests uniformity against a unimodal alternative model, as assumed for the Fisher distribution. For n < 10, it compares the magnitude of the resultant vector, R, to a critical value. For n > 9, the test statistic 3R²/n is compared with the chi-squared distribution with three degrees of freedom at the 95% confidence level. The hypothesis of uniformity is rejected if this value is too large.
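The n > 9 branch of the test is straightforward to sketch in base R, reusing R and n from the fragments above:

```r
# Rayleigh test, n > 9 case: compare 3R^2/n with the chi-squared
# distribution with three degrees of freedom at the 95% confidence level.
rayleigh_stat     <- 3 * R^2 / n
reject_uniformity <- rayleigh_stat > qchisq(0.95, df = 3)
```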
Beran/Giné test: in 1968, R. J. Beran devised a statistic, based on the angles between pairs of sample directions, for testing uniformity against alternative models that are not symmetric with respect to the centre of the sphere [40]. In 1975, E. M. Giné extended Beran's work to the case where the data may be centro-symmetric [41]. The combined statistic, used for polar data, tests against both of these possibilities by comparing the summed statistics to a critical value at the 95% confidence level.
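A sketch of the combined statistic in base R, using the pairwise-angle forms given in [39]; the exact constants and the critical value should be checked there, and we treat them as assumptions.

```r
# Pairwise angles psi between all unit error vectors (i < j).
U      <- cbind(ux, uy, uz)               # n x 3 matrix of unit vectors
cosang <- tcrossprod(U)                   # pairwise dot products
cosang <- pmin(pmax(cosang, -1), 1)       # clamp rounding noise before acos
psi    <- acos(cosang[upper.tri(cosang)])

# Beran and Gine statistics and their combined form (constants as given in
# [39]; treat as assumptions). The combined statistic is compared with a
# tabulated critical value from [39], not reproduced here.
B      <- n     - (4 / (n * pi)) * sum(psi)
G      <- n / 2 - (4 / (n * pi)) * sum(sin(psi))
F_stat <- B + G
```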
Continued on: http://carreteras-laser-escaner.blogspot.com/2014/09/error-analysis-of-terrestrial-laser.html
For more information, or if you prefer this article in PDF format, contact us or send us an e-mail.