ABSTRACT

The possibilities of UAS aerial mapping result from what the author would argue are two technological processes: stacking and fusing. Space photography, projection systems, GIS software and photogrammetry are indicative innovations that stack to create the principles, methods, standards and applications of UAS mapping and geographical information. This stacking culminates in the fusing of methods and components for UAS, which includes straightforward workflows such as ortho-mosaics, derived 3D models and photogrammetry. Drones operating in the ‘hover space’ envelope occupy a unique position between ground stations and light aircraft, offering distinct resolution and aerial advantages for mapping. Drone mapping is often termed ‘reality capture’ and provides a ‘real’ digital surface for testing design ideas rooted in viability. The fusing of survey maps with other data sources, such as BIM models, national datasets, sub-surface information and arboreal information, among many other layers, provides a comprehensive account of elements of landscape and architectural production. These unique characteristics make UAS mapping a highly valuable and democratic mode of understanding place. Part of this value lies in the fusing of UAS mapping with other technologies such as First-Person View (FPV) goggles, Augmented Reality (AR) and Virtual Reality (VR). An evolving area of research applies drone artificial intelligence (AI), computer vision and machine learning to hazard detection and navigation (Guerra et al., 2019) and to pattern recognition: image identification, image separation, object detection and person identification (Loquercio et al., 2019). The range of applications for construction and fabrication using various payloads means that the UAS both maps space and can control elements of its construction, which has given rise to new imaginaries: more-than-human programming of environments for digital functionality and participation (Gabrys, 2016).