Voxel-based extraction of individual pylons and wires from LIDAR point cloud data

N. Munir¹*, M. Awrangjeb¹, B. Stantic¹, G. Lu², S. Islam²

¹ School of Information and Communication Technology, Griffith University, QLD 4111, Australia - nosheen.munir@griffithuni.edu.au
² School of Science, Engineering and Information Technology, Federation University, QLD 4000, Australia - guojun.lu@federation.edu.au

* Corresponding author

Commission VI, WG VI/10

KEY WORDS: LiDAR, Extraction, Power line corridor, wires, SVM, pylon

ABSTRACT:

Extraction of individual pylons and wires is important for modelling of 3D objects in a power line corridor (PLC) map. However, the existing methods mostly classify points into distinct classes such as pylons and wires, but hardly into individual pylons or wires. The proposed method extracts standalone pylons, vegetation and wires from LiDAR data; the extraction of individual objects is needed for a detailed PLC mapping. The proposed approach starts with the separation of ground and non-ground points. The non-ground points are then classified into vertical (e.g., pylons and vegetation) and non-vertical (e.g., wires) object points using the vertical profile feature (VPF) through the binary support vector machine (SVM) classifier. Individual pylons and vegetation are then separated using their shape and area properties. The locations of pylons are further used to extract the span points between two successive pylons. Finally, the span points are voxelised and the alignment properties of wires in the voxel grid are used to extract individual wire points.
The results are evaluated on a dataset that has multiple spans with bundled wires in each span. The evaluation results show that the proposed method and features are very effective for the extraction of individual wires, pylons and vegetation, with 99% correctness and 98% completeness.

1. INTRODUCTION

In the past few years, assessment and monitoring of power line corridors (PLCs) has gained importance and become an area of active research. The conventional methods for inspection of the electric network rely on ground personnel and airborne cameras to patrol power lines (PLs), and have limitations such as the need for intensive labour and low efficiency. The use of remote sensors such as optical images, synthetic aperture radar (SAR) images and airborne laser scanner (ALS) data for PLC monitoring is an active research area nowadays. To date, optical and SAR images are commonly used to extract PLC objects, but these sensors have limitations such as weather dependency and occlusion. Airborne light detection and ranging (LiDAR) has proven to be a powerful tool to overcome these limitations and enable more efficient inspection. A detailed survey on PLC monitoring methods is given in (Matikainen et al., 2016).

The majority of the studies in the literature have focused on the extraction and classification of pylon, wire and vegetation points as whole classes; less attention has been given to the extraction of individual object points, which is very important for 3D modelling. Therefore, the goal of this paper is the extraction of individual PLC key objects, i.e., wires, pylons and trees.

2. RELATED WORK

In the literature, the PL object extraction methods based on point clouds fall into two main classes: supervised and unsupervised classification methods.
The unsupervised methods generally employ statistical analysis and pattern recognition techniques like the Hough transform (HT) and RANdom SAmple Consensus (RANSAC) to extract PL objects. Axelsson (Axelsson, 1999) proposed a classification algorithm based on the minimum description length criterion using the laser reflectance data and multiple echoes; the classification results were refined using the HT algorithm in the 2D grid space. The iterative HT was applied on the grid data in (Melzer and Briese, 2004) to extract lines and cluster them together to get the positions of pylons. Zhu and Hyyppä (Zhu, 2014) used statistical analysis of height and intensity and histogram analysis to detect PL points, and performed a shape-based analysis to separate PLs from other objects. A voxel-based piecewise line detection technique was proposed by (Jwa et al., 2009) to group PL points into fragments and to reconstruct them using the catenary curve equation. These unsupervised methods provide a high level of automation, but their assumptions are based on the geometric and radiometric characteristics of the point cloud, so the classification rules are hard to transfer from one point cloud to another.

As for supervised classification, (McLaughlin, 2006) used the Gaussian mixture model to extract transmission lines (TLs) from airborne LiDAR data. Sohn et al. (Sohn et al., 2012) used the Markov Random Field (MRF) classifier to separate objects with linear and planar features; the linear features not representing wires were then converted into a 2D grid and the Random Forests (RF) classifier was applied to detect pylons. Finally, RANSAC was applied on the PL candidates to form line segments. Kim and Sohn (Kim and Sohn, 2011) computed 21 features using voxel- and sphere-based neighbourhoods and used the RF to categorise data into five classes: grounds, vegetations, pylons, wires and buildings. Kim and Sohn (Kim and Sohn, 2010) used point-based and voxel-based features to classify PL objects using the RF classifier.
Guo et al. (Guo et al., 2015) used the JointBoost classifier with geometric features to classify the PL scene and optimised the results using a graph-cut segmentation method. Wang et al. (Wang et al., 2017) proposed a semi-automated PL classification method that constructs the PLC direction using the HT with RANSAC algorithms, and applied the Support Vector Machine (SVM) classifier to classify PL points using a slanted cylindrical neighbourhood.

ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Volume IV-4/W8, 2019. 14th 3D GeoInfo Conference 2019, 24-27 September 2019, Singapore. This contribution has been peer-reviewed. The double-blind peer-review was conducted on the basis of the full paper. https://doi.org/10.5194/isprs-annals-IV-4-W8-91-2019 | © Authors 2019. CC BY 4.0 License.

The supervised classification-based methods require large training datasets, which are hard to collect for the desired results, and an unbalanced sampling in the training set increases the rate of misclassification, because pylons and wires are minor classes in any scene. For example, the dataset used in (Kim and Sohn) had only 0.81% of its points on pylons. The use of unbalanced classes of data gives inaccurate results. Moreover, most of the previous classification methods described the extracted PL objects as whole classes, and less attention has been given to the extraction of individual wires, pylons and vegetation.

The main objective of the proposed method is to extract standalone pylons, vegetation and wires from LiDAR data using supervised and unsupervised methods, whereas the methods in the literature were more focused on the classification of points as whole classes and did not extract the wires individually from the complete corridor. A supervised method is employed initially: the support vector machine (SVM) classifier separates vertical and non-vertical points through the vertical profile features (VPF).
From the vertical points, individual pylons and vegetation are extracted using their shape and area properties. The PL is made up of multiple spans; the locations of two successive pylons are used to extract the span points between them. The span points are further divided into individual wire points using the alignment properties of wires in a 3D voxel grid.

The remainder of this paper is structured as follows. Section 3 describes the proposed method in detail. Section 4 presents the experimental setup with results and discussions for evaluation of the proposed method. Finally, the conclusion is presented with a brief note on future research directions in Section 5.

3. PROPOSED METHOD

Figure 1 illustrates the workflow of the proposed method. The steps in the red rectangle are for pylon and vegetation extraction, while the steps in the blue rectangle belong to wire extraction. First, the input points are divided into ground and non-ground points, where the latter contains objects such as pylons, wires and vegetation. The non-ground points are then classified into vertical (e.g., pylons and vegetation) and non-vertical (e.g., wires) object points using the VPF through the binary SVM classifier. A binary mask is generated from the vertical object points, from which individual pylons and vegetation are extracted using their shape and area properties. For the extraction of wires, individual span points are first extracted between the pylon locations. Then, the points in each span are used to extract the individual wires using their alignment property in voxels. Finally, the wire segments are concatenated to get each individual wire in the corridor. Figure 2(a) shows a 517 m long sample scene from the test dataset; it will be used to describe the steps of the proposed method.

Figure 1. The workflow of the proposed method.
3.1 Extraction of Pylons and Vegetation

Usually a height of 1 m (Awrangjeb et al., 2017) is added to the local digital terrain model (DTM) height to separate the non-ground points from the rest, which contains points from the ground and the low vegetation. Although the DTM can be generated from the LiDAR data, we assume it is available with the data; if not, it can be generated using software like MARS Explorer. The non-ground points, as shown in Figure 2(b), are expected to contain only the objects of interest, such as pylons and wires in this case. However, the test dataset has moderately high vegetation (trees). Pylons and trees in the data show vertical continuity, while wires hang over the terrain surface and, thus, show vertical discontinuity. Exploiting this property, the VPF are extracted using the method described in (Kim and Sohn), where voxel grids of 5 m × 5 m × 5 m are generated over the test area and each voxel is divided into fixed segments of 1 m height. The number of "continuous on segments" C_n and the number of "continuous off segments" C_f are computed for each voxel. While wires show a low value for C_n but a high value for C_f, vegetation and pylons display the opposite. Thereafter, the supervised SVM classifier is used to divide the LiDAR points into two sets: vertical and non-vertical points. Figures 2(c) and (d) illustrate the vertical and non-vertical points, respectively, for the sample scene.

The vertical points include pylons and trees. To separate these objects, a binary mask M, shown in Figure 2(e), is generated using the method described in (Awrangjeb et al., 2017). The resolution of M is fixed at 0.25 m (Awrangjeb et al., 2017) and all pixels are initially filled with 1 (white). Then, for each non-ground point within a pixel, a neighbourhood (e.g., 3 × 3, consistent with the point density) is filled with 0 (black). Due to the random and sparse nature of the input LiDAR data, M is flood-filled to remove holes.
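For illustration, the mask generation and the connected-component analysis used on it can be sketched as follows. This is a minimal stand-alone Python sketch, not the authors' MATLAB implementation: the function name `object_components` and the sparse set-based grid representation are our own choices, while the 0.25 m cell size, the 3 × 3 blackening neighbourhood and the hole filling follow the text.

```python
from collections import deque

def object_components(points_xy, resolution=0.25):
    """Rasterise non-ground points into a binary mask (black = object),
    fill enclosed holes, and return the 4-connected object components."""
    x0 = min(p[0] for p in points_xy)
    y0 = min(p[1] for p in points_xy)
    black = set()
    for x, y in points_xy:                      # each point blackens a 3x3 patch
        i, j = int((x - x0) / resolution), int((y - y0) / resolution)
        black.update((i + di, j + dj) for di in (-1, 0, 1) for dj in (-1, 0, 1))
    # hole filling: white cells unreachable from the grid border are holes
    imin = min(i for i, _ in black) - 1; imax = max(i for i, _ in black) + 1
    jmin = min(j for _, j in black) - 1; jmax = max(j for _, j in black) + 1
    outside, q = set(), deque([(imin, jmin)])
    while q:
        c = q.popleft()
        if c in outside or c in black:
            continue
        i, j = c
        if not (imin <= i <= imax and jmin <= j <= jmax):
            continue
        outside.add(c)
        q.extend([(i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)])
    for i in range(imin, imax + 1):             # enclosed white cells become black
        for j in range(jmin, jmax + 1):
            if (i, j) not in black and (i, j) not in outside:
                black.add((i, j))
    # connected-component analysis on the filled mask
    components, seen = [], set()
    for cell in black:
        if cell in seen:
            continue
        comp, q = set(), deque([cell])
        while q:
            c = q.popleft()
            if c in seen or c not in black:
                continue
            seen.add(c); comp.add(c)
            i, j = c
            q.extend([(i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)])
        components.append(comp)
    return components
```

Each returned component is a set of mask cells whose bounding box gives an object boundary for the subsequent shape and area tests.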
In Figure 2(e), magnified versions of a few objects are shown before and after the filling. A connected component analysis is carried out on the filled image to obtain individual object boundaries. Assuming the maximum length or width of a pylon is 10 m, an area threshold T_a = 100 m² (Awrangjeb et al., 2017) is applied to remove trees with large horizontal areas. For each surviving connected component, all the non-ground points P_b within its boundary are taken and a 3D cuboid is formed based on the minimum and maximum Easting (x), Northing (y) and Height (z) values, where the x and y values are estimated from P_b, but the z values are from the dataset. These values reflect the fact that pylons reach the maximum heights of the wires attached to them, while trees usually do not reach these maximum heights. Therefore, for vegetation there is a large gap towards the top of the cuboid, but for a pylon there is no gap, or only a small one, at the top (see Figure 2(f)). The cuboid is then divided into bins b_n, where n ≥ 1, in the height direction. The value of n is empirically set to 12 based on the input point density: for a larger value of n, many bins without any points may occur from the top to the bottom of a pylon, while for a smaller value of n, no bin without any points may occur even for a tree.

Figure 2. (a) A sample scene, (b) Non-ground points, (c) Vertical object points, (d) Non-vertical object points, (e) Binary mask, and (f) Height bins for points of a pylon and a tree.
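The height-bin test that separates pylons from trees can be sketched as below. This is a hypothetical Python rendering (the paper's implementation is in MATLAB); the decision rule, that a component with at least one empty bin among the n = 12 height bins is a tree, follows the description in this subsection, and the function name is ours.

```python
def classify_component(heights, z_min, z_max, n_bins=12):
    """Split the cuboid over a component into n_bins vertical bins.
    A pylon reaches the full wire height, so no bin is empty; a tree
    leaves a gap at the top, producing at least one empty bin."""
    size = (z_max - z_min) / n_bins
    counts = [0] * n_bins
    for z in heights:
        b = min(int((z - z_min) / size), n_bins - 1)  # clamp the topmost point
        counts[b] += 1
    return "tree" if 0 in counts else "pylon"
```

Here `heights` are the z values of the component's points, while `z_min` and `z_max` come from the whole dataset, as stated above.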
The number of points in each bin b_n is counted; if at least one empty bin exists for a component, that component is marked as a tree, otherwise as a pylon. Finally, for each pylon its location is obtained as the mean of its points.

3.2 Extraction of Wires

In this step, the non-vertical points are further processed to obtain the individual wires. In PLC object extraction it is hard to separate wire points into individual wires because all wires share the same linear properties. Thus, the following facts about the overhead PL structure are used: wires do not intersect each other but maintain an adequate clearance to avoid unsafe contact, they are hung above the ground at a certain height, and they follow the same direction within a span (Beaty and Fink). Using these characteristics, a voxel traversing algorithm is developed to separate wire points into individual wires. Figure 1 shows the steps for wire extraction, and Figure 3 shows the wires in a span with their labels to illustrate the wire extraction algorithm. The details of each step are given below:

1. Span Extraction: The transmission wires in a PLC are made up of a number of segments (spans). These spans are connected to each other through pylons, so the extraction of individual pylons with their locations is helpful in determining the spans. Once the pylons and their locations have been identified, they are used to obtain the non-vertical points P_s for each span, i.e., the points between two successive pylons in the corridor. The span extraction reduces the size of the data and makes the further processing of wire points easier.

2. Voxelisation: The space between two pylons is divided into a regular 3D grid. The 3D grid makes the processing more complicated, but unlike a 2D grid it does not compromise the geometrical information of the points in each grid cell (voxel). Each cuboid in the grid acts as an individual voxel V.
The size of V is kept at 0.5 m × 0.25 m × 0.25 m, as the minimum gap between two adjacent wires in the dataset is 0.5 m. The size of V is very important and is derived through analysis of the input LiDAR points; it should be selected carefully, as a too small or too big voxel size can give inaccurate results. Not all voxels in a span are occupied with points: the voxels between and outside the wire points are empty, as all other objects have already been removed. Each V is given one of two statuses by counting the number of points in it: empty voxel V_E if it has no points or one point, and occupied voxel V_O if it has two or more points. The location of each V is indexed by its row (i), column (j) and height (k). The maximum values of i, j and k are calculated as:

    i = ⌈(x_max − x_min) / l⌉,  j = ⌈(y_max − y_min) / w⌉,  k = ⌈(z_max − z_min) / h⌉    (1)

where x_max, y_max, z_max = maximum coordinates of x, y, z; x_min, y_min, z_min = minimum coordinates of x, y, z; l, w, h = length, width, height of a voxel; and i, j, k = location of the voxel.

Figure 3. Wires labelling.

3. Level of wires in a span: The height h of the points in each voxel with reference to the ground is calculated, and the mean height is allocated to the corresponding V_O to check whether all wires are at the same height or not. If all the wires have approximately the same height (i.e., the difference between the maximum height h_max and the minimum height h_min is less than 1 m), the value of H is set to 1, otherwise to 2.
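A minimal sketch of the voxelisation and the height-level check follows, assuming Python stand-ins for the paper's MATLAB implementation. The function names are ours; the voxel size, the two-or-more-points occupancy rule and the 1 m height-difference threshold are taken from the text.

```python
from collections import defaultdict

def voxelise(points, l=0.5, w=0.25, h=0.25):
    """Assign span points to voxels indexed (i, j, k) as in Eq. (1);
    a voxel is occupied (V_O) with >= 2 points, empty (V_E) otherwise."""
    xmin = min(p[0] for p in points)
    ymin = min(p[1] for p in points)
    zmin = min(p[2] for p in points)
    voxels = defaultdict(list)
    for x, y, z in points:
        key = (int((x - xmin) // l), int((y - ymin) // w), int((z - zmin) // h))
        voxels[key].append((x, y, z))
    return {k: v for k, v in voxels.items() if len(v) >= 2}  # V_O only

def height_levels(occupied, ground=0.0):
    """H of Eq. (2): 1 if all occupied voxels sit at roughly the same
    height above ground (difference < 1 m), 2 otherwise."""
    means = [sum(p[2] for p in pts) / len(pts) - ground
             for pts in occupied.values()]
    return 1 if max(means) - min(means) < 1.0 else 2
```

For the sample scene described above, with wires at two distinct heights, this check would return H = 2.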
This value is used in the algorithm to label wire points. The value of H is calculated as:

    H = 1 if h is approximately the same for all wires, 2 otherwise    (2)

where h = the height difference between the highest and lowest wires in the span, and H = the number of wire height levels. The value of H increases with the number of height levels of the wires in the PLC; its minimum value is 1, i.e., all the wires are hung at the same height. For the given sample scene the value of H is 2 (see Figure 3).

Figure 4. Feature extraction.

4. Horizontal count: For each V_O, two numbers C_a1 and C_a2 are calculated, expressing the count of V_E voxels on its two sides across the span. Figure 4 shows the directions in which these two values are counted. The counting stops when a V_O voxel is found at the same height, or when the last voxel in that direction is reached. The value of C_a2 is high and the value of C_a1 is low for the rightmost wires (i.e., W1 and W7 in Figure 3), while the value of C_a1 is high and the value of C_a2 is low for the leftmost wires (i.e., W6 and W8 in Figure 3).

5. Vertical count: For each V_O, a height count C_h from the ground level, in terms of V_E voxels in the height direction, is calculated. Figure 4 shows the direction in which this value is counted. The value of C_h is high for the top wires; e.g., wires W7 and W8 have higher C_h values than wires W1 to W6 in Figure 3.

6. Voxel Labelling: The V_O for which the C_a1, C_a2 and C_h values are neither high nor low are deferred to the next iteration for labelling.
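Steps 4 and 5 can be sketched as follows, representing the grid simply as a set of occupied voxel indices (a simplification of the paper's grid; the function names are ours, while the stopping rules match the description above):

```python
def side_counts(occupied, i, j, k, j_max):
    """C_a1 / C_a2 for an occupied voxel (i, j, k): empty voxels to each
    side across the span at the same height, stopping at the first
    occupied voxel or at the edge of the grid."""
    c_a1, jj = 0, j - 1
    while jj >= 0 and (i, jj, k) not in occupied:
        c_a1 += 1
        jj -= 1
    c_a2, jj = 0, j + 1
    while jj <= j_max and (i, jj, k) not in occupied:
        c_a2 += 1
        jj += 1
    return c_a1, c_a2

def height_count(occupied, i, j, k):
    """C_h: empty voxels below (i, j, k) down to the ground level."""
    return sum(1 for kk in range(k) if (i, j, kk) not in occupied)
```

Leftmost and rightmost wires yield extreme C_a1/C_a2 values, and top wires yield high C_h values, which is what the iterative labelling of step 6 exploits.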
Thus, the points in each voxel V_O are labelled according to the values of V_O by calculating a label range R using the equation below:

    R = [nl + 1, nl + 2H]    (3)

where nl = 0 in the first iteration, and the upper value of R is allocated to nl in the next iteration. In every iteration, the voxels with very high and very low values are labelled using values from R. The value of C_a1 is high and C_a2 low for the rightmost wire, while the values are the opposite for the leftmost wires; so, in every iteration, the leftmost and rightmost wires are labelled. The thresholds in the algorithm, representing the high and low values, have been set based on the density of the LiDAR point cloud data.

The voxels meeting the criteria are labelled with values from R; the remaining voxels stay unlabelled and are considered in the next iterations. The algorithm proceeds iteratively until all the V_O are labelled. It is efficient because only each V_O and its neighbouring voxels are considered for labelling, while the huge number of empty voxels V_E are only counted for C_a1, C_a2 and C_h. Finally, the voxels with the same label are combined to get the points of the same wire. The algorithm extracts the individual wire points in each span, so the wires are extracted in the form of segments of wire points. Once the extraction is accomplished, each complete wire in the PLC is formed by concatenating its wire segments.

4. EXPERIMENTAL EVALUATION

This section is divided into four main parts. A brief overview of the dataset is provided in Section 4.1. Section 4.2 gives an overview of the experimental setup. Section 4.3 presents the results and a discussion of the accuracy assessment of the PLC key object extraction, followed by a comparison of the proposed method with previous methods.

4.1 Dataset

Figure 5 shows the dataset from Maindample, Victoria, Australia. It is 5,560 m long and 330 m wide, and the density of the input point cloud is 23.7 points/m².
There are two transmission line corridors (TLCs) and one distribution line corridor (DLC). Each of the two TLCs has 13 pylons, so 26 in total. The only DLC is under the two TLCs and does not have enough points; therefore, it is excluded from this study. The total number of points in the dataset is 32,708,377, although the number of non-ground points is only 2,097,265 (16.5% of the total). There are a total of 14 spans in each TLC, where each span has eight wires at two height levels (3 × 2 + 2 × 1). The dataset also comes with a DTM (Digital Terrain Model), which is a 3D representation (x, y, z) of the earth's surface and helps to convert each input point height to the local ground height.

4.2 Experiment

The proposed method is implemented in MATLAB R2018b on an Intel(R) Core(TM) i7 CPU @ 2.70 GHz processor with 16 GB RAM. Table 1 shows a summary of the ground truth used for the evaluation of the results. As the dataset is very big, it is hard to collect ground truth for all points in the scene. Only the first six spans of each of the two TLCs (12 spans out of 28 in total) are used for the evaluation of the proposed scheme. Points with a height of less than 1 m are not included in the ground truth data, as these points are assumed to be terrain points. The number of wires in Table 1 is calculated by counting the number of wires in each span.

Figure 5. (a) Dataset with ground points, (b) Dataset without ground points.

4.3 Results and Discussion

For performance evaluation, object-based and point-based completeness C_m, correctness C_r and quality Q_l metrics are used, defined as follows (Wang et al., 2017):

    C_m = TP / (TP + FN),  C_r = TP / (TP + FP),  Q_l = TP / (TP + FP + FN)    (4)

where TP is the number of truly detected pylons and wires for the object-based evaluation, and the number of truly extracted pylon and wire points for the point-based evaluation; FP is the falsely detected wires and pylons; and FN is the pylons and wires not detected by the proposed method.
Figure 6(a) shows the extracted pylon locations; these locations are used to find the wire points within the spans. Figures 6(b) and (c) show the extracted pylons and vegetation in TLC 1 and TLC 2, and Figures 6(d) and (e) show magnified versions of an extracted pylon and a tree in the dataset. For the object-based evaluation, the total number of detected pylons and the total number of extracted wires in the dataset, including those in the DLC, are considered. Table 2 shows the object-based evaluation results for the dataset. All pylons in the TLCs are detected except the ones located in the DLC, and all 224 wires in the TLCs are extracted except the ones on the DLC. Object-based and point-based results are not provided for vegetation due to the absence of ground truth for individual trees.

            Pylons                      Wires
    Comp.   Corr.   Qual.       Comp.   Corr.   Qual.
    92.8    100     92          92.5    96      92.5

Table 2. Object-based evaluation on the whole dataset.
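Equation (4) transcribes directly to code; the sketch below uses illustrative TP/FP/FN counts, not the paper's actual confusion counts.

```python
def quality_metrics(tp, fp, fn):
    """Completeness, correctness and quality as in Eq. (4)."""
    c_m = tp / (tp + fn)       # completeness: share of reference objects found
    c_r = tp / (tp + fp)       # correctness: share of detections that are real
    q_l = tp / (tp + fp + fn)  # quality: combined measure
    return c_m, c_r, q_l
```

For example, 26 true detections with no false positives and 2 misses give a correctness of 1.0 and a completeness of 26/28 ≈ 0.93.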