Point Clouds – the Representation of the Third Kind - 11/12/2019
Point clouds represent the world at its best: up to date, and with every detail there is to be known. Most articles pay particular attention to the acquisition of such point clouds, which are obtained from 1) Lidar time-stamped reflections, with the advantage of indicating ‘empty’ space between the observation point(s) and the surface points, and 2) images, via structure from motion (SfM) and dense image matching (DIM) techniques, with the benefit of colour-enriched points.
Nowadays they are not only obtained with high-end professional equipment; increasingly, point clouds are also captured using low-cost (consumer) hardware such as smartphone cameras and (relatively) cheap Lidar systems developed for self-driving cars and indoor robotics. This development also bridges the gap from outdoor to indoor mapping. The indoor ‘terra incognita’, where we all spend more than 80% of our lives, is yet to be captured and represented to its full extent by point clouds. In other words, the world as we know it – and which is thus its own best model – is being sensed more often, more densely and with more (derived) attributes by point clouds.
But the mere availability of all these point clouds does not automatically mean they are used to their full potential. Instead, point clouds are still reduced to polyhedral 3D city models with a relatively low level of detail (LOD). The steps needed to obtain a higher LOD are not easy, not least because the detail sits in the point cloud itself. Deciding which points contribute to which polygon of the (hopefully) watertight polygonal mesh invites a variation on the well-known quote from George Orwell’s Animal Farm: “All points are equal. But some points are more equal than others.” Only the end user can decide which points are really important – and that is impossible if the majority of the points are discarded (thus losing their connection to the model) after being processed into 3D city models.
Moreover, this modelling step takes time and requires a lot of manual effort. Most buildings have some kind of architectural design and thus are not simply extruded blocks with an arbitrary roof shape. Fully automatic generation of LOD2 models that meet user requirements therefore appears so problematic that most digital-twin models are still partly ‘handmade’. Such 3D models are outdated as soon as they are published, and no one knows how well these city models represent reality because the link with their original point clouds is not maintained.
Other researchers opt for a voxel-based, volumetric, Minecraft-like representation. Its main disadvantage is the need to fix both the orientation and the sampling rate of the building blocks. On the other hand, because voxel models look unrealistic, people are less likely to mistake them for the truth than they are polyhedral representations.
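That dependence on a fixed orientation and sampling rate can be made concrete with a minimal sketch (a hypothetical helper in pure Python, not any particular tool’s API): each point is snapped to a grid cell by flooring its coordinates, so the outcome is determined entirely by the chosen voxel size and the axis alignment of the grid.

```python
import math

def voxelize(points, voxel_size):
    """Return the set of occupied voxel indices for a list of
    (x, y, z) points, on a grid aligned with the coordinate axes."""
    return {
        (math.floor(x / voxel_size),
         math.floor(y / voxel_size),
         math.floor(z / voxel_size))
        for x, y, z in points
    }

pts = [(0.1, 0.2, 0.3), (0.4, 0.1, 0.2), (1.6, 0.0, 0.0)]
print(len(voxelize(pts, 1.0)))    # 2: the first two points share one voxel
print(len(voxelize(pts, 0.25)))   # 3: a finer sampling rate separates them
```

The same three points occupy two voxels at one sampling rate and three at another – exactly the arbitrariness the paragraph above refers to.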
One statement always made about handling huge point clouds is that they are… huge. Well yes, they are, which is why much can also be said in favour of such large ‘3D’ (often actually only 2.5D) city models. At the same time, one very big advantage of point clouds is that they are relatively simple: effectively just a bunch of X,Y,Z coordinates with some attributes. Well-accepted file-based standards (LAS/LAZ) have proven their value for the dissemination of point clouds. Smart structuring of, and fast querying on, point clouds maintained in a DBMS – with the continuous level of detail of the point cloud as the fourth dimension – is an ongoing research activity with promising results.
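That simplicity can be sketched in a few lines. The record below is a loose, hypothetical approximation of a LAS point – coordinates plus a couple of attributes – where the example classification codes follow the ASPRS conventions used in the LAS specification:

```python
from dataclasses import dataclass

@dataclass
class Point:
    """A bare-bones point record, loosely modelled on a LAS point."""
    x: float
    y: float
    z: float
    intensity: int = 0        # radiometric attribute (return strength)
    classification: int = 0   # semantic attribute; ASPRS codes: 2 = ground, 6 = building

cloud = [Point(1.0, 2.0, 3.0, intensity=120, classification=2),
         Point(1.5, 2.1, 9.0, intensity=80,  classification=6)]
buildings = [p for p in cloud if p.classification == 6]
print(len(buildings))   # 1
```

Everything else – octrees, space-filling curves, DBMS schemas – is organization layered on top of records essentially this simple.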
So the handling of point clouds is important, but the key issue – as stated above – is their effective use for explorative visualization and analysis purposes. First of all, the ‘rich’ point cloud paradigm underlines the concept of dense 3D point clouds by enriching them with comprehensive geometric, radiometric and semantic properties. They become ‘smart’ when the point clouds themselves are aware of these properties. Visibility analysis for decision-making provides far more detailed and realistic results if it is based on point clouds, especially when vegetation has to be taken into account.
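Point-based visibility analysis can be illustrated with a brute-force sketch (a hypothetical helper, not any production tool’s API): a target is treated as occluded when some point of the cloud lies close enough to the line of sight between observer and target.

```python
import math

def visible(observer, target, cloud, tolerance):
    """Line-of-sight test: the target is visible from the observer unless
    some cloud point lies within `tolerance` of the sight line."""
    ox, oy, oz = observer
    dx, dy, dz = (t - o for t, o in zip(target, observer))
    length_sq = dx * dx + dy * dy + dz * dz
    for px, py, pz in cloud:
        # parameter of this point's projection onto the observer-target segment
        t = ((px - ox) * dx + (py - oy) * dy + (pz - oz) * dz) / length_sq
        if 0.05 < t < 0.95:  # skip points coinciding with the endpoints
            closest = (ox + t * dx, oy + t * dy, oz + t * dz)
            if math.dist((px, py, pz), closest) < tolerance:
                return False
    return True

wall = [(5.0, 0.0, 0.05)]                           # a point near the sight line
print(visible((0, 0, 0), (10, 0, 0), wall, 0.1))    # False: the point occludes
```

Because every point counts individually, a semi-transparent tree crown blocks some sight lines and not others – the nuance a solid polyhedral model cannot offer.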
And let us not forget the capabilities of the human cortex. We humans are still very adept at detecting details in 3D scenes. Some outstanding points which might be lost in a 3D modelling process could be more important than all other points in the scene. Which parts of a building are built as designed, and which are not? This explorative use of point clouds is supported by analytical tools like point cloud-based change detection, but also by high-end 3D point cloud visualization tools on screen, and even better in a point cloud-based augmented reality environment.
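In its simplest cloud-to-cloud form, such change detection can be sketched as follows (brute-force and hypothetical; real tools accelerate the nearest-neighbour search with spatial indexes such as k-d trees): flag every point of the new epoch whose nearest neighbour in the old epoch lies farther away than a threshold.

```python
import math

def changed_points(old_epoch, new_epoch, threshold):
    """Brute-force cloud-to-cloud change detection: return the points of
    new_epoch whose nearest neighbour in old_epoch exceeds the threshold."""
    return [q for q in new_epoch
            if min(math.dist(q, p) for p in old_epoch) > threshold]

before = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
after  = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.5)]      # one point has moved up
print(changed_points(before, after, 0.2))        # [(1.0, 0.0, 0.5)]
```

The threshold encodes exactly the as-designed-versus-as-built question: points within tolerance are ‘built as designed’, the rest deserve a human look.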
Last but not least, why spend so much time, effort and money on collecting point clouds, only to regard them as input data, process them into a derivative and then discard them? If you think about it, it is rather absurd. Instead, I believe point clouds should be considered the third kind of representation, alongside polyhedral surface representations and volumetric voxel representations. What is more, they can provide far greater insight (as they are, more or less, the reality itself) through explorative visualization and analysis. So my advice is: use them – as is, and directly!
Edward Verbree is an assistant professor at Delft University of Technology, the Netherlands. His thanks for help and inspiration in writing this column go to Martijn Meijers, Peter van Oosterom and Mathias Lemmens, as well as many MSc students of geomatics and GIMA at TU Delft.
The original version of this column was published in the September/October 2019 issue of GIM International.